Glossary of artificial intelligence

This glossary of artificial intelligence terms is about artificial intelligence, its sub-disciplines, and related fields.

Most of the terms listed in Wikipedia glossaries are already defined and explained within Wikipedia itself. However, glossaries like this one are useful for looking up, comparing and reviewing large numbers of terms together. You can help enhance this page by adding new terms or writing definitions for existing ones.







  • Fast-and-frugal trees – a type of classification tree. Fast-and-frugal trees can be used as decision-making tools which operate as lexicographic classifiers, and, if required, associate an action (decision) to each class or category.[172]
  • Feature extraction – In machine learning, pattern recognition and in image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations.
  • Feature learning – In machine learning, feature learning or representation learning[173] is a set of techniques that allows a system to automatically discover the representations needed for feature detection or classification from raw data. This replaces manual feature engineering and allows a machine to both learn the features and use them to perform a specific task.
  • Feature selection – In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction.
  • Federated learning – a type of machine learning that allows for training on multiple devices with decentralized data, thus helping preserve the privacy of individual users and their data.
  • First-order logic (also known as first-order predicate calculus and predicate logic) – a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man" one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable.[174] This distinguishes it from propositional logic, which does not use quantifiers or relations.[175]
  • Fluent – a condition that can change over time. In logical approaches to reasoning about actions, fluents can be represented in first-order logic by predicates having an argument that depends on time.
  • Formal language – a set of words whose letters are taken from an alphabet and are well-formed according to a specific set of rules.
  • Forward chaining – (or forward reasoning) is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining. Forward chaining starts with the available data and uses inference rules to extract more data (from an end user, for example) until a goal is reached. An inference engine using forward chaining searches the inference rules until it finds one where the antecedent (If clause) is known to be true. When such a rule is found, the engine can conclude, or infer, the consequent (Then clause), resulting in the addition of new information to its data.[176]
  • Frame – an artificial intelligence data structure used to divide knowledge into substructures by representing "stereotyped situations." Frames are the primary data structure used in artificial intelligence frame languages.
  • Frame language – a technology used for knowledge representation in artificial intelligence. Frames are stored as ontologies of sets and subsets of the frame concepts. They are similar to class hierarchies in object-oriented languages although their fundamental design goals are different. Frames are focused on explicit and intuitive representation of knowledge whereas objects focus on encapsulation and information hiding. Frames originated in AI research and objects primarily in software engineering. However, in practice the techniques and capabilities of frame and object-oriented languages overlap significantly.
  • Frame problem – is the problem of finding adequate collections of axioms for a viable description of a robot environment.[177]
  • Friendly artificial intelligence (also friendly AI or FAI) – a hypothetical artificial general intelligence (AGI) that would have a positive effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained.
  • Futures studies – is the study of postulating possible, probable, and preferable futures and the worldviews and myths that underlie them.[178]
  • Fuzzy control system – a control system based on fuzzy logic—a mathematical system that analyzes analog input values in terms of logical variables that take on continuous values between 0 and 1, in contrast to classical or digital logic, which operates on discrete values of either 1 or 0 (true or false, respectively).[179][180]
  • Fuzzy logic – a form of many-valued logic in which the truth values of variables may have any degree of "truthfulness" representable by a real number between 0 (completely false) and 1 (completely true) inclusive. Consequently, it is employed to handle the concept of partial truth, where the truth value may range between completely true and completely false. This contrasts with Boolean logic, where the truth values of variables may only take the integer values 0 or 1.
  • Fuzzy rule – Fuzzy rules are used within fuzzy logic systems to infer an output based on input variables.
  • Fuzzy set – In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition: an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1.[181] In fuzzy set theory, classical bivalent sets are usually called crisp sets. Fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.[182]
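The fast-and-frugal tree entry above describes a lexicographic classifier: cues are checked one at a time, and every cue except the last can exit immediately to a decision. A minimal Python sketch; the triage cues and decisions are made-up illustrations, not drawn from the cited literature:

```python
# A fast-and-frugal tree as a lexicographic classifier: each cue is a
# (predicate, exit_decision) pair; the first cue that fires decides, and
# failing every cue yields the final default decision.

def fft_classify(case, cues, default):
    """Classify `case` by the first cue whose predicate fires."""
    for predicate, decision in cues:
        if predicate(case):
            return decision          # exit: decide immediately
    return default                   # last branch of the tree

# Hypothetical emergency-room tree:
triage_cues = [
    (lambda p: p["st_elevation"], "coronary care"),   # cue 1 exits to "yes"
    (lambda p: not p["chest_pain"], "regular ward"),  # cue 2 exits to "no"
]

patient = {"st_elevation": False, "chest_pain": True}
print(fft_classify(patient, triage_cues, "coronary care"))
```

Because cues are consulted in a fixed order and never combined, the tree trades some accuracy for speed and transparency.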
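The federated learning entry above can be illustrated with its core aggregation step: clients train on their own decentralized data and share only parameters, which the server combines as a data-size-weighted average. This is a sketch of the federated-averaging idea; the parameter vectors and client sizes are hypothetical:

```python
# Minimal sketch of federated averaging: the "model" is just a parameter
# vector, and each client contributes proportionally to how many local
# examples it trained on. No raw data leaves the clients.

def federated_average(client_updates):
    """client_updates: list of (num_examples, parameter_list) pairs."""
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    return [
        sum(n * params[i] for n, params in client_updates) / total
        for i in range(dim)
    ]

# Two clients with different amounts of local data:
global_params = federated_average([(100, [1.0, 2.0]), (300, [3.0, 6.0])])
print(global_params)  # weighted toward the larger client: [2.5, 5.0]
```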
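The forward chaining entry above, described as repeated application of modus ponens, can be sketched as a small loop over if-then rules; the frog facts and rules are illustrative:

```python
# Forward chaining: repeatedly fire any rule whose antecedents are all in
# working memory, adding its consequent, until no rule fires or the goal
# has been derived.

def forward_chain(facts, rules, goal):
    facts = set(facts)
    changed = True
    while changed and goal not in facts:
        changed = False
        for antecedents, consequent in rules:
            if consequent not in facts and antecedents <= facts:
                facts.add(consequent)    # infer the Then clause
                changed = True
    return goal in facts

rules = [
    ({"croaks", "eats flies"}, "frog"),  # If croaks and eats flies Then frog
    ({"frog"}, "green"),                 # If frog Then green
]
print(forward_chain({"croaks", "eats flies"}, rules, "green"))  # True
```

Note how the engine works data-first: it never asks what would prove "green"; it simply derives everything the data supports, which is the opposite of backward chaining.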
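The fuzzy logic and fuzzy set entries above can be made concrete with a membership function and the common Zadeh connectives (AND as min, OR as max, NOT as complement); the "tall" ramp between 160 cm and 190 cm is an arbitrary illustration:

```python
# Fuzzy membership takes values in [0, 1] rather than {0, 1}.

def tall(height_cm):
    """Membership in the fuzzy set 'tall': 0 below 160 cm, 1 above 190 cm."""
    return min(1.0, max(0.0, (height_cm - 160) / 30))

def f_and(a, b): return min(a, b)   # fuzzy conjunction
def f_or(a, b):  return max(a, b)   # fuzzy disjunction
def f_not(a):    return 1.0 - a     # fuzzy negation

print(tall(175))               # 0.5: partially tall
print(f_and(tall(175), 0.8))   # 0.5
print(f_not(tall(175)))        # 0.5
```

A crisp set is recovered as the special case where the membership function returns only 0 or 1.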



  • Heuristic – is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an approximate solution when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also called simply a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.[188]
  • Hidden layer – an internal layer of neurons in an artificial neural network, not dedicated to input or output
  • Hidden unit – a neuron in a hidden layer in an artificial neural network
  • Hyper-heuristic – is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems. One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.[189][190][191]
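The heuristic entry above can be illustrated with the Manhattan distance, a standard admissible heuristic for grid pathfinding that search algorithms such as A* use to rank which branch to follow; the grid coordinates are illustrative:

```python
# A heuristic function estimates remaining cost from a node to the goal.
# Manhattan distance never overestimates the true cost on a 4-connected
# grid, which is what makes it admissible for A*.

def manhattan(node, goal):
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

# Rank candidate successor cells by estimated distance to the goal:
goal = (4, 4)
candidates = [(1, 2), (3, 4), (0, 0)]
best = min(candidates, key=lambda n: manhattan(n, goal))
print(best)  # (3, 4)
```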







  • Naive Bayes classifier – In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features.
  • Naive semantics – is an approach used in computer science for representing basic knowledge about a specific domain, and has been used in applications such as the representation of the meaning of natural language sentences in artificial intelligence applications. In a general setting the term has been used to refer to the use of a limited store of generally understood knowledge about a specific domain in the world, and has been applied to fields such as the knowledge based design of data schemas.[221]
  • Name binding – In programming languages, name binding is the association of entities (data and/or code) with identifiers.[222] An identifier bound to an object is said to reference that object. Machine languages have no built-in notion of identifiers, but name-object bindings as a service and notation for the programmer is implemented by programming languages. Binding is intimately connected with scoping, as scope determines which names bind to which objects – at which locations in the program code (lexically) and in which one of the possible execution paths (temporally). Use of an identifier id in a context that establishes a binding for id is called a binding (or defining) occurrence. In all other occurrences (e.g., in expressions, assignments, and subprogram calls), an identifier stands for what it is bound to; such occurrences are called applied occurrences.
  • Named-entity recognition – (NER), also known as entity identification, entity chunking, and entity extraction, is a subtask of information extraction that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, and percentages.
  • Named graph – Named graphs are a key concept of Semantic Web architecture in which a set of Resource Description Framework statements (a graph) are identified using a URI,[223] allowing descriptions to be made of that set of statements such as context, provenance information or other such metadata. Named graphs are a simple extension of the RDF data model[224] through which graphs can be created but the model lacks an effective means of distinguishing between them once published on the Web at large.
  • Natural language generation – (NLG), is a software process that transforms structured data into plain-English content. It can be used to produce long-form content for organizations to automate custom reports, as well as produce custom content for a web or mobile application. It can also be used to generate short blurbs of text in interactive conversations (a chatbot) which might even be read out loud by a text-to-speech system.
  • Natural language processing – (NLP), is a subfield of computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.
  • Natural language programming – is an ontology-assisted way of programming in terms of natural-language sentences, e.g. English.[225]
  • Network motif – All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs. One important local property of networks is the presence of so-called network motifs, which are defined as recurrent and statistically significant subgraphs or patterns.
  • Neural machine translation – (NMT), is an approach to machine translation that uses a large artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.
  • Neural Turing machine – (NTM), a recurrent neural network model. NTMs combine the fuzzy pattern matching capabilities of neural networks with the algorithmic power of programmable computers. An NTM has a neural network controller coupled to external memory resources, which it interacts with through attentional mechanisms. The memory interactions are differentiable end-to-end, making it possible to optimize them using gradient descent.[226] An NTM with a long short-term memory (LSTM) network controller can infer simple algorithms such as copying, sorting, and associative recall from examples alone.[227]
  • Neuro-fuzzy – refers to combinations of artificial neural networks and fuzzy logic.
  • Neurocybernetics – A brain–computer interface (BCI), sometimes called a neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.[228]
  • Neuromorphic engineering – also known as neuromorphic computing,[229][230][231] is a concept describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system.[232] In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors,[233] spintronic memories,[234] threshold switches, and transistors.[235]
  • Node – is a basic unit of a data structure, such as a linked list or tree data structure. Nodes contain data and also may link to other nodes. Links between nodes are often implemented by pointers.
  • Nondeterministic algorithm – is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm.
  • Nouvelle AI – Nouvelle AI differs from classical AI by aiming to produce robots with intelligence levels similar to insects. Researchers believe that intelligence can emerge organically from simple behaviors as these intelligences interact with the "real world", instead of from the constructed worlds that symbolic AIs typically needed to have programmed into them.[236]
  • NP – In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time.[237][Note 1]
  • NP-completeness – In computational complexity theory, a problem is NP-complete when it can be solved by a restricted class of brute force search algorithms and it can be used to simulate any other problem with a similar algorithm. More precisely, each input to the problem should be associated with a set of solutions of polynomial length, whose validity can be tested quickly (in polynomial time[238]), such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty.
  • NP-hardness – (non-deterministic polynomial-time hardness), in computational complexity theory, is the defining property of a class of problems that are, informally, "at least as hard as the hardest problems in NP". A simple example of an NP-hard problem is the subset sum problem.
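The naive Bayes classifier entry above can be sketched in a few lines: class priors multiplied by per-word likelihoods under the naive independence assumption, with Laplace smoothing to avoid zero counts; the tiny spam/ham corpus is made up for illustration:

```python
# Multinomial naive Bayes on word counts. "Naive" = words are assumed
# independent given the class, so the joint likelihood factorizes.

from collections import Counter, defaultdict
from math import log

def train(docs):
    """docs: list of (list_of_words, label)."""
    priors, counts = Counter(), defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        priors[label] += 1
        counts[label].update(words)
        vocab.update(words)
    return priors, counts, vocab

def predict(words, priors, counts, vocab):
    def score(label):
        total = sum(counts[label].values())
        s = log(priors[label] / sum(priors.values()))   # log prior
        for w in words:                                  # log likelihoods
            s += log((counts[label][w] + 1) / (total + len(vocab)))
        return s
    return max(priors, key=score)

docs = [(["win", "cash", "now"], "spam"),
        (["meeting", "agenda"], "ham"),
        (["cash", "prize"], "spam")]
model = train(docs)
print(predict(["cash", "now"], *model))  # spam
```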
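The NP entries above hinge on polynomial-time verifiability: for subset sum (the NP-complete problem mentioned under NP-hardness), finding a solution may require exponential search, but checking a proposed certificate takes only linear time. A sketch:

```python
# An NP verifier for subset sum: the certificate is a set of indices into
# the input list, and checking it is O(n) regardless of how hard it was
# to find.

def verify_subset_sum(numbers, target, certificate):
    """Certificate = indices into `numbers`; checked in linear time."""
    return (len(set(certificate)) == len(certificate)          # no repeats
            and all(0 <= i < len(numbers) for i in certificate)
            and sum(numbers[i] for i in certificate) == target)

nums = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(nums, 9, [2, 4]))   # 4 + 5 == 9 -> True
print(verify_subset_sum(nums, 9, [0, 1]))   # 3 + 34 != 9 -> False
```

A problem is in NP exactly when such a fast verifier exists for its "yes" instances; NP-completeness additionally requires that every NP problem reduce to it.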



  • Partial order reduction – is a technique for reducing the size of the state-space to be searched by a model checking or automated planning and scheduling algorithm. It exploits the commutativity of concurrently executed transitions, which result in the same state when executed in different orders.
  • Partially observable Markov decision process – (POMDP), is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a probability distribution over the set of possible states, based on a set of observations and observation probabilities, and the underlying MDP.
  • Particle swarm optimization – (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search-space according to simple mathematical formulae over the particle's position and velocity. Each particle's movement is influenced by its local best known position, but is also guided toward the best known positions in the search-space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions.
  • Pathfinding – or pathing, is the plotting, by a computer application, of the shortest route between two points. It is a more practical variant on solving mazes. This field of research is based heavily on Dijkstra's algorithm for finding a shortest path on a weighted graph.
  • Pattern recognition – is concerned with the automatic discovery of regularities in data through the use of computer algorithms and with the use of these regularities to take actions such as classifying the data into different categories.[244]
  • Predicate logic – First-order logic—also known as predicate logic and first-order predicate calculus—is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects and allows the use of sentences that contain variables, so that rather than propositions such as "Socrates is a man" one can have expressions in the form "there exists x such that x is Socrates and x is a man", where "there exists" is a quantifier and x is a variable.[174] This distinguishes it from propositional logic, which does not use quantifiers or relations;[245] in this sense, propositional logic is the foundation of first-order logic.
  • Predictive analytics – encompasses a variety of statistical techniques from data mining, predictive modelling, and machine learning, that analyze current and historical facts to make predictions about future or otherwise unknown events.[246][247]
  • Principal component analysis – (PCA), is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables (entities each of which takes on various numerical values) into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component, in turn, has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors (each being a linear combination of the variables and containing n observations) are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables.
  • Principle of rationality – (or rationality principle), was coined by Karl R. Popper in his Harvard Lecture of 1963, and published in his book The Myth of the Framework.[248] It is related to what he called the 'logic of the situation' in an Economica article of 1944/1945, published later in his book The Poverty of Historicism.[249] According to Popper's rationality principle, agents act in the most adequate way according to the objective situation. It is an idealized conception of human behavior which he used to drive his model of situational analysis.
  • Probabilistic programming – (PP), is a programming paradigm in which probabilistic models are specified and inference for these models is performed automatically.[250] It represents an attempt to unify probabilistic modeling and traditional general-purpose programming in order to make the former easier and more widely applicable.[251][252] It can be used to create systems that help make decisions in the face of uncertainty. Programming languages used for probabilistic programming are referred to as "Probabilistic programming languages" (PPLs).
  • Production system – a computer program, typically used to provide some form of artificial intelligence, consisting primarily of a set of rules about behavior ("productions") together with a mechanism for selecting and executing the rules that match the current state of working memory.
  • Programming language – is a formal language, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms.
  • Prolog – is a logic programming language associated with artificial intelligence and computational linguistics.[253][254][255] Prolog has its roots in first-order logic, a formal logic, and unlike many other programming languages, Prolog is intended primarily as a declarative programming language: the program logic is expressed in terms of relations, represented as facts and rules. A computation is initiated by running a query over these relations.[256]
  • Propositional calculus – is a branch of logic. It is also called propositional logic, statement logic, sentential calculus, sentential logic, or sometimes zeroth-order logic. It deals with propositions (which can be true or false) and argument flow. Compound propositions are formed by connecting propositions by logical connectives. The propositions without logical connectives are called atomic propositions. Unlike first-order logic, propositional logic does not deal with non-logical objects, predicates about them, or quantifiers. However, all the machinery of propositional logic is included in first-order logic and higher-order logics. In this sense, propositional logic is the foundation of first-order logic and higher-order logic.
  • Python – is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.[257]
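The partially observable Markov decision process entry above says the agent must maintain a probability distribution over states; the standard belief update is b'(s') ∝ O(o | s') Σ_s T(s' | s, a) b(s). A sketch with a hypothetical two-state machine-maintenance model (the transition and observation tables are made up):

```python
# POMDP belief update: propagate the belief through the transition model,
# weight by the observation likelihood, then renormalize.

def belief_update(belief, T, O, action, obs):
    states = list(belief)
    new_b = {
        s2: O[s2][obs] * sum(T[s][action][s2] * belief[s] for s in states)
        for s2 in states
    }
    norm = sum(new_b.values())
    return {s: p / norm for s, p in new_b.items()}

# Illustrative model: a machine silently degrades from "good" to "bad".
T = {"good": {"wait": {"good": 0.9, "bad": 0.1}},
     "bad":  {"wait": {"good": 0.0, "bad": 1.0}}}
O = {"good": {"ok": 0.8, "alarm": 0.2},
     "bad":  {"ok": 0.3, "alarm": 0.7}}

b = belief_update({"good": 0.5, "bad": 0.5}, T, O, "wait", "alarm")
print(b)  # probability mass shifts toward "bad"
```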
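The particle swarm optimization entry above can be sketched for a one-dimensional objective; the inertia and attraction coefficients below are conventional choices, and the objective (x − 3)² is an arbitrary illustration:

```python
# Bare-bones PSO: each particle keeps a velocity and is pulled toward
# both its own best-seen position and the swarm's global best.

import random

def pso(f, n_particles=10, iters=100, lo=-10.0, hi=10.0, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                         # personal best positions
    gbest = min(xs, key=f)                # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i]
                     + 1.5 * r1 * (pbest[i] - xs[i])   # pull to own best
                     + 1.5 * r2 * (gbest - xs[i]))     # pull to swarm best
            xs[i] += vs[i]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2)
print(best)  # should be near 3.0, the minimizer
```

Note that PSO needs only objective values, not gradients, which is why it is popular for rough or black-box quality measures.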
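The pathfinding entry above names Dijkstra's algorithm as the field's foundation; a minimal priority-queue implementation on an illustrative weighted graph:

```python
# Dijkstra's algorithm: repeatedly settle the unvisited node with the
# smallest tentative distance, relaxing its outgoing edges.

import heapq

def dijkstra(graph, start, goal):
    """graph: {node: [(neighbor, weight), ...]}. Returns shortest distance."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, skip
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")                   # goal unreachable

g = {"A": [("B", 1), ("C", 4)],
     "B": [("C", 2), ("D", 5)],
     "C": [("D", 1)]}
print(dijkstra(g, "A", "D"))  # 4  (A -> B -> C -> D)
```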
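The principal component analysis entry above can be illustrated by extracting the first principal component of 2-D data: center the data, form the covariance matrix, and take its dominant eigenvector (here via power iteration, a pure-Python sketch on made-up points):

```python
# First principal component = direction of maximum variance = dominant
# eigenvector of the covariance matrix.

def first_pc(points, iters=200):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # 2x2 covariance matrix [[cxx, cxy], [cxy, cyy]]
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    v = (1.0, 0.0)
    for _ in range(iters):                 # power iteration
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

# Points lying roughly along the line y = x:
pc = first_pc([(0, 0), (1, 1.1), (2, 1.9), (3, 3.05)])
print(pc)  # approximately (0.707, 0.707), i.e. the diagonal direction
```

The entry's note on scaling shows up here too: multiplying one coordinate by a large constant would rotate this vector toward that axis.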













  1. For example: Josephson, John R.; Josephson, Susan G., eds. (1994). Abductive Inference: Computation, Philosophy, Technology. Cambridge, UK; New York: Cambridge University Press. doi:10.1017/CBO9780511530128. ISBN 978-0521434614. OCLC 28149683.
  2. "Retroduction | Dictionary | Commens". Commens – Digital Companion to C. S. Peirce. Mats Bergman, Sami Paavola & João Queiroz. Retrieved 24 August 2014.
  3. Colburn, Timothy; Shute, Gary (5 June 2007). "Abstraction in Computer Science". Minds and Machines. 17 (2): 169–184. doi:10.1007/s11023-007-9061-7. ISSN 0924-6495.
  4. Kramer, Jeff (1 April 2007). "Is abstraction the key to computing?". Communications of the ACM. 50 (4): 36–42. CiteSeerX doi:10.1145/1232743.1232745. ISSN 0001-0782.
  5. Michael Gelfond, Vladimir Lifschitz (1998) "Action Languages", Linköping Electronic Articles in Computer and Information Science, vol 3, nr 16.
  6. Jang, Jyh-Shing R (1991). Fuzzy Modeling Using Generalized Neural Networks and Kalman Filter Algorithm (PDF). Proceedings of the 9th National Conference on Artificial Intelligence, Anaheim, CA, USA, July 14–19. 2. pp. 762–767.
  7. Jang, J.-S.R. (1993). "ANFIS: adaptive-network-based fuzzy inference system". IEEE Transactions on Systems, Man and Cybernetics. 23 (3): 665–685. doi:10.1109/21.256541.
  8. Abraham, A. (2005), "Adaptation of Fuzzy Inference System Using Neural Learning", in Nedjah, Nadia; de Macedo Mourelle, Luiza (eds.), Fuzzy Systems Engineering: Theory and Practice, Studies in Fuzziness and Soft Computing, 181, Germany: Springer Verlag, pp. 53–83, CiteSeerX, doi:10.1007/11339366_3, ISBN 978-3-540-25322-8
  9. Jang, Sun, Mizutani (1997) – Neuro-Fuzzy and Soft Computing – Prentice Hall, pp 335–368, ISBN 0-13-261066-3
  10. Tahmasebi, P. (2012). "A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation". Computers & Geosciences. 42: 18–27. Bibcode:2012CG.....42...18T. doi:10.1016/j.cageo.2012.02.004. PMC 4268588. PMID 25540468.
  11. Tahmasebi, P. (2010). "Comparison of optimized neural network with fuzzy logic for ore grade estimation". Australian Journal of Basic and Applied Sciences. 4: 764–772.
  12. Russell, S.J.; Norvig, P. (2002). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-790395-5.
  13. Rana el Kaliouby (November–December 2017). "We Need Computers with Empathy". Technology Review. 120 (6). p. 8.
  14. Tao, Jianhua; Tieniu Tan (2005). "Affective Computing: A Review". Affective Computing and Intelligent Interaction. LNCS 3784. Springer. pp. 981–995. doi:10.1007/11573548.
  15. Comparison of Agent Architectures Archived August 27, 2008, at the Wayback Machine
  16. "Intel unveils Movidius Compute Stick USB AI Accelerator". 21 July 2017.
  17. "Inspurs unveils GX4 AI Accelerator". 21 June 2017.
  18. Shapiro, Stuart C. (1992). Artificial Intelligence In Stuart C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)
  19. Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma. (Nov. 1960 revision of the Feb. 4, 1960 report).
  20. "Artificial intelligence: Google's AlphaGo beats Go master Lee Se-dol". BBC News. 12 March 2016. Retrieved 17 March 2016.
  21. "AlphaGo | DeepMind". DeepMind.
  22. "Research Blog: AlphaGo: Mastering the ancient game of Go with Machine Learning". Google Research Blog. 27 January 2016.
  23. "Google achieves AI 'breakthrough' by beating Go champion". BBC News. 27 January 2016.
  24. See Dung (1995)
  25. See Besnard and Hunter (2001)
  26. see Bench-Capon (2002)
  27. Definition of AI as the study of intelligent agents:
  28. Russell & Norvig 2009, p. 2.
  29. "AAAI Corporate Bylaws".
  30. "The Lengthy History of Augmented Reality". Huffington Post. 15 May 2016.
  31. Schueffel, Patrick (2017). The Concise Fintech Compendium. Fribourg: School of Management Fribourg/Switzerland.
  32. Ghallab, Malik; Nau, Dana S.; Traverso, Paolo (2004), Automated Planning: Theory and Practice, Morgan Kaufmann, ISBN 978-1-55860-856-6
  33. Kephart, J.O.; Chess, D.M. (2003), "The vision of autonomic computing", Computer, 36: 41–52, CiteSeerX, doi:10.1109/MC.2003.1160055
  34. "Self-driving Uber car kills Arizona woman crossing street". Reuters. 20 March 2018 via
  35. Thrun, Sebastian (2010). "Toward Robotic Cars". Communications of the ACM. 53 (4): 99–106. doi:10.1145/1721654.1721679.
  36. Gehrig, Stefan K.; Stein, Fridtjof J. (1999). Dead reckoning and cartography using stereo vision for an automated car. IEEE/RSJ International Conference on Intelligent Robots and Systems. 3. Kyongju. pp. 1507–1512. doi:10.1109/IROS.1999.811692. ISBN 0-7803-5184-3.
  37. "Information Engineering Main/Home Page". Retrieved 3 October 2018.
  38. Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016) Deep Learning. MIT Press. p. 196. ISBN 9780262035613
  39. Nielsen, Michael A. (2015). "Chapter 6". Neural Networks and Deep Learning.
  40. "Deep Networks: Overview - Ufldl". Retrieved 4 August 2017.
  41. Mozer, M. C. (1995). "A Focused Backpropagation Algorithm for Temporal Pattern Recognition". In Chauvin, Y.; Rumelhart, D. (eds.). Backpropagation: Theory, architectures, and applications. ResearchGate. Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 137–169. Retrieved 21 August 2017.
  42. Robinson, A. J. & Fallside, F. (1987). The utility driven dynamic error propagation network (Technical report). Cambridge University, Engineering Department. CUED/F-INFENG/TR.1.
  43. Werbos, Paul J. (1988). "Generalization of backpropagation with application to a recurrent gas market model". Neural Networks. 1 (4): 339–356. doi:10.1016/0893-6080(88)90007-x.
  44. Feigenbaum, Edward (1988). The Rise of the Expert Company. Times Books. p. 317. ISBN 978-0-8129-1731-4.
  45. Sivic, Josef (April 2009). "Efficient visual search of videos cast as text retrieval" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (4): 591–605. doi:10.1109/TPAMI.2008.111. PMID 19229077.
  46. McTear et al 2016, p. 167.
  47. "Understanding the backward pass through Batch Normalization Layer". Retrieved 24 April 2018.
  48. Ioffe, Sergey; Szegedy, Christian (2015). "Batch Normalization: Accelerating Deep Network Training b y Reducing Internal Covariate Shift". arXiv:1502.03167. Bibcode:2015arXiv150203167I. Cite journal requires |journal= (help)
  49. "Glossary of Deep Learning: Batch Normalisation". 27 June 2017. Retrieved 24 April 2018.
  50. "Batch normalization in Neural Networks". 20 October 2017. Retrieved 24 April 2018.
  51. Pham DT, Ghanbarzadeh A, Koc E, Otri S, Rahim S and Zaidi M. The Bees Algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, 2005.
  52. Pham, D.T., Castellani, M. (2009), The Bees Algorithm – Modelling Foraging Behaviour to Solve Continuous Optimisation Problems. Proc. ImechE, Part C, 223(12), 2919-2938.
  53. Pham, D. T.; Castellani, M. (2014). "Benchmarking and comparison of nature-inspired population-based continuous optimisation algorithms". Soft Computing. 18 (5): 871–903. doi:10.1007/s00500-013-1104-9.
  54. Pham, Duc Truong; Castellani, Marco (2015). "A comparative study of the Bees Algorithm as a tool for function optimisation". Cogent Engineering. 2. doi:10.1080/23311916.2015.1091540.
  55. Nasrinpour, H. R., Massah Bavani, A., Teshnehlab, M., (2017), Grouped Bees Algorithm: A Grouped Version of the Bees Algorithm, Computers 2017, 6(1), 5; (doi: 10.3390/computers6010005)
  56. Cao, Longbing (2010). "In-depth Behavior Understanding and Use: the Behavior Informatics Approach". Information Science. 180 (17): 3067–3085. doi:10.1016/j.ins.2010.03.025.
  57. Colledanchise Michele, and Ögren Petter 2016. How Behavior Trees Modularize Hybrid Control Systems and Generalize Sequential Behavior Compositions, the Subsumption Architecture, and Decision Trees. In IEEE Transactions on Robotics vol.PP, no.99, pp.1-18 (2016)
  58. Colledanchise Michele, and Ögren Petter 2017. Behavior Trees in Robotics and AI: An Introduction.
  59. Breur, Tom (July 2016). "Statistical Power Analysis and the contemporary "crisis" in social sciences". Journal of Marketing Analytics. 4 (2–3): 61–65. doi:10.1057/s41270-016-0001-3. ISSN 2050-3318.
  60. Bachmann, Paul (1894). Analytische Zahlentheorie [Analytic Number Theory] (in German). 2. Leipzig: Teubner.
  61. Landau, Edmund (1909). Handbuch der Lehre von der Verteilung der Primzahlen [Handbook on the theory of the distribution of the primes] (in German). Leipzig: B. G. Teubner. p. 883.
  62. Rowan Garnier; John Taylor (2009). Discrete Mathematics: Proofs, Structures and Applications, Third Edition. CRC Press. p. 620. ISBN 978-1-4398-1280-8.
  63. Steven S Skiena (2009). The Algorithm Design Manual. Springer Science & Business Media. p. 77. ISBN 978-1-84800-070-4.
  64. Erman, L. D.; Hayes-Roth, F.; Lesser, V. R.; Reddy, D. R. (1980). "The Hearsay-II Speech-Understanding System: Integrating Knowledge to Resolve Uncertainty". ACM Computing Surveys. 12 (2): 213. doi:10.1145/356810.356816.
  65. Corkill, Daniel D. (September 1991). "Blackboard Systems" (PDF). AI Expert. 6 (9): 40–47.
    • Nii, H. Penny (1986). Blackboard Systems (PDF) (Technical report). Department of Computer Science, Stanford University. STAN-CS-86-1123. Retrieved 12 April 2013.
  66. Hayes-Roth, B. (1985). "A blackboard architecture for control". Artificial Intelligence. 26 (3): 251–321. doi:10.1016/0004-3702(85)90063-3.
  67. Hinton, Geoffrey E. (24 May 2007). "Boltzmann machine". Scholarpedia. 2 (5): 1668. Bibcode:2007SchpJ...2.1668H. doi:10.4249/scholarpedia.1668. ISSN 1941-6016.
  68. NZZ – Die Zangengeburt eines möglichen Stammvaters [The difficult birth of a possible progenitor] (in German). Neue Zürcher Zeitung. Retrieved 16 August 2013.
  69. Official Homepage Roboy Archived 2013-08-03 at the Wayback Machine. Website Roboy. Retrieved 16 August 2013.
  70. Official Homepage Starmind. Website Starmind. Retrieved 16 August 2013.
  71. Sabour, Sara; Frosst, Nicholas; Hinton, Geoffrey E. (26 October 2017). "Dynamic Routing Between Capsules". arXiv:1710.09829 [cs.CV].
  72. "What is a chatbot?". Retrieved 30 January 2017.
  73. Civera, Javier; Ciocarlie, Matei; Aydemir, Alper; Bekris, Kostas; Sarma, Sanjay (2015). "Guest Editorial Special Issue on Cloud Robotics and Automation". IEEE Transactions on Automation Science and Engineering. 12 (2): 396–397. doi:10.1109/TASE.2015.2409511.
  74. "Robo Earth - Tech News". Robo Earth.
  75. Goldberg, Ken. "Cloud Robotics and Automation".
  76. Li, R. "Cloud Robotics-Enable cloud computing for robots". Retrieved 7 December 2014.
  77. Fisher, Douglas (1987). "Knowledge acquisition via incremental conceptual clustering". Machine Learning. 2 (2): 139–172. doi:10.1007/BF00114265.
  78. Fisher, Douglas H. (July 1987). "Improving inference through conceptual clustering". Proceedings of the 1987 AAAI Conferences. AAAI Conference. Seattle Washington. pp. 461–465.
  79. William Iba and Pat Langley (27 January 2011). "Cobweb models of categorization and probabilistic concept formation". In Emmanuel M. Pothos and Andy J. Wills (ed.). Formal approaches in categorization. Cambridge: Cambridge University Press. pp. 253–273. ISBN 9780521190480.
  80. Refer to the ICT website:
  81. "Hewlett Packard Labs".
  82. Terdiman, Daniel (2014). IBM's TrueNorth processor mimics the human brain.
  83. Knight, Shawn (2011). IBM unveils cognitive computing chips that mimic human brain TechSpot: August 18, 2011, 12:00 PM
  84. Hamill, Jasper (2013). Cognitive computing: IBM unveils software for its brain-like SyNAPSE chips The Register: August 8, 2013
  85. Denning. P.J. (2014). "Surfing Toward the Future". Communications of the ACM. 57 (3): 26–29. doi:10.1145/2566967.
  86. Ludwig, Lars (2013). "Extended Artificial Memory. Toward an integral cognitive theory of memory and technology" (PDF). Technical University of Kaiserslautern. Retrieved 7 February 2017.
  87. "Research at HP Labs".
  88. "Automate Complex Workflows Using Tactical Cognitive Computing: Coseer". Retrieved 31 July 2017.
  89. Cognitive science is an interdisciplinary field in which researchers from linguistics, psychology, neuroscience, philosophy, computer science, and anthropology seek to understand the mind. How We Learn: Ask the Cognitive Scientist
  90. Schrijver, Alexander (February 1, 2006). A Course in Combinatorial Optimization (PDF), page 1.
  91. Haykin, S. Neural Networks: A Comprehensive Foundation. Second edition. Pearson Prentice Hall, 1999.
  92. "PROGRAMS WITH COMMON SENSE". Retrieved 11 April 2018.
  93. Ernest Davis; Gary Marcus (2015). "Commonsense reasoning". Communications of the ACM. Vol. 58 no. 9. pp. 92–103. doi:10.1145/2701413.
  94. Hulstijn, J, and Nijholt, A. (eds.). Proceedings of the International Workshop on Computational Humor. Number 12 in Twente Workshops on Language Technology, Enschede, Netherlands. University of Twente, 1996.
  95. "ACL - Association for Computational Learning".
  96. Trappenberg, Thomas P. (2002). Fundamentals of Computational Neuroscience. United States: Oxford University Press Inc. p. 1. ISBN 978-0-19-851582-1.
  97. What is computational neuroscience? Patricia S. Churchland, Christof Koch, Terrence J. Sejnowski. In Computational Neuroscience, pp. 46–55. Edited by Eric L. Schwartz. 1993. MIT Press. "Archived copy". Archived from the original on 4 June 2011. Retrieved 11 June 2009.
  98. Press, The MIT. "Theoretical Neuroscience". The MIT Press. Retrieved 24 May 2018.
  99. Gerstner, W.; Kistler, W.; Naud, R.; Paninski, L. (2014). Neuronal Dynamics. Cambridge, UK: Cambridge University Press. ISBN 9781107447615.
  100. Kamentsky, L.A., and Liu, C.-N. (1963). Computer-Automated Design of Multifont Print Recognition Logic, IBM Journal of Research and Development, 7(1), p.2
  101. Brncick, M. (2000). Computer automated design and computer automated manufacture, Phys Med Rehabil Clin N Am, Aug, 11(3), 701-13.
  102. Li, Y., et al. (2004). CAutoCSD - Evolutionary search and optimisation enabled computer automated control system design Archived 2015-08-31 at the Wayback Machine. International Journal of Automation and Computing, 1(1). 76-88. ISSN 1751-8520
  106. Barsan, G. M.; Dinsoreanu, M. (1997). Computer-automated design based on structural performance criteria, Mouchel Centenary Conference on Innovation in Civil and Structural Engineering, Aug 19–21, Cambridge, England, Innovation in Civil and Structural Engineering, 167–172.
  107. Li, Y., et al. (1996). Genetic algorithm automated approach to the design of sliding mode control systems, Int J Control, 63(4), 721-739.
  108. Li, Y., et al. (1995). Automation of Linear and Nonlinear Control Systems Design by Evolutionary Computation, Proc. IFAC Youth Automation Conf., Beijing, China, August 1995, 53-58.
  109. Barsan, G. M. (1995). Computer-automated design of semirigid steel frameworks according to EUROCODE-3, Nordic Steel Construction Conference 95, Jun 19–21, 787–794.
  110. Gary J. Gray, David J. Murray-Smith, Yun Li, et al. (1998). Nonlinear model structure identification using genetic programming, Control Engineering Practice 6 (1998) 1341–1352.
  111. Zhan, Z.H., et al. (2011). Evolutionary computation meets machine learning: a survey, IEEE Computational Intelligence Magazine, 6(4), 68-75.
  112. Gregory S. Hornby (2003). Generative Representations for Computer-Automated Design Systems, NASA Ames Research Center, Mail Stop 269-3, Moffett Field, CA 94035-1000
  113. J. Clune and H. Lipson (2011). Evolving three-dimensional objects with a generative encoding inspired by developmental biology. Proceedings of the European Conference on Artificial Life. 2011.
  114. Zhan, Z.H., et al. (2009). Adaptive Particle Swarm Optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Vol.39, No.6. 1362-1381
  115. "WordNet Search—3.1". Retrieved 14 May 2012.
  116. Dana H. Ballard; Christopher M. Brown (1982). Computer Vision. Prentice Hall. ISBN 0-13-165316-4.
  117. Huang, T. (1996-11-19). Vandoni, Carlo, E, ed. Computer Vision : Evolution And Promise (PDF). 19th CERN School of Computing. Geneva: CERN. pp. 21–25. doi:10.5170/CERN-1996-008.21. ISBN 978-9290830955.
  118. Milan Sonka; Vaclav Hlavac; Roger Boyle (2008). Image Processing, Analysis, and Machine Vision. Thomson. ISBN 0-495-08252-X.
  119. Garson, James (27 November 2018). Zalta, Edward N. (ed.). The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University.
  120. "Ishtar for Belgium to Belgrade". European Broadcasting Union. Retrieved 19 May 2013.
  121. LeCun, Yann. "LeNet-5, convolutional neural networks". Retrieved 16 November 2013.
  122. Zhang, Wei (1988). "Shift-invariant pattern recognition neural network and its optical architecture". Proceedings of annual conference of the Japan Society of Applied Physics.
  123. Zhang, Wei (1990). "Parallel distributed processing model with local space-invariant interconnections and its optical architecture". Applied Optics. 29 (32): 4790–7. Bibcode:1990ApOpt..29.4790Z. doi:10.1364/AO.29.004790. PMID 20577468.,
  124. Tian, Yuandong; Zhu, Yan (2015). "Better Computer Go Player with Neural Network and Long-term Prediction". arXiv:1511.06410v1 [cs.LG].
  125. "How Facebook's AI Researchers Built a Game-Changing Go Engine". MIT Technology Review. 4 December 2015. Retrieved 3 February 2016.
  126. "Facebook AI Go Player Gets Smarter With Neural Network And Long-Term Prediction To Master World's Hardest Game". Tech Times. 28 January 2016. Retrieved 24 April 2016.
  127. "Facebook's artificially intelligent Go player is getting smarter". VentureBeat. 27 January 2016. Retrieved 24 April 2016.
  128. Solomonoff, R. J. (1985). "The Time Scale of Artificial Intelligence: Reflections on Social Effects". Human Systems Management. 5: 149–153.
  129. Moor, J. (2006). "The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years". AI Magazine. 27 (4): 87–91.
  130. Kline, Ronald R., Cybernetics, Automata Studies and the Dartmouth Conference on Artificial Intelligence, IEEE Annals of the History of Computing, October–December, 2011, IEEE Computer Society
  131. Haghighat, Mohammad; Abdel-Mottaleb, Mohamed; Alhalabi, Wadee (2016). "Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition". IEEE Transactions on Information Forensics and Security. 11 (9): 1984–1996. doi:10.1109/TIFS.2016.2569061.
  132. Maurizio Lenzerini (2002). "Data Integration: A Theoretical Perspective" (PDF). PODS 2002. pp. 233–246.
  133. Frederick Lane (2006). "IDC: World Created 161 Billion Gigs of Data in 2006".
  134. Dhar, V. (2013). "Data science and prediction". Communications of the ACM. 56 (12): 64–73. doi:10.1145/2500499.
  135. Jeff Leek (12 December 2013). "The key word in "Data Science" is not Data, it is Science". Simply Statistics.
  136. Hayashi, Chikio (1 January 1998). "What is Data Science? Fundamental Concepts and a Heuristic Example". In Hayashi, Chikio; Yajima, Keiji; Bock, Hans-Hermann; Ohsumi, Noboru; Tanaka, Yutaka; Baba, Yasumasa (eds.). Data Science, Classification, and Related Methods. Studies in Classification, Data Analysis, and Knowledge Organization. Springer Japan. pp. 40–51. doi:10.1007/978-4-431-65950-1_3. ISBN 9784431702085.
  137. Dedić, Nedim; Stanier, Clare (2016). Hammoudi, Slimane; Maciaszek, Leszek; Missikoff, Michele M. Missikoff; Camp, Olivier; Cordeiro, José (eds.). An Evaluation of the Challenges of Multilingualism in Data Warehouse Development. International Conference on Enterprise Information Systems, 25–28 April 2016, Rome, Italy (PDF). Proceedings of the 18th International Conference on Enterprise Information Systems (ICEIS 2016). 1. SciTePress. pp. 196–206. doi:10.5220/0005858401960206. ISBN 978-989-758-187-8.
  138. "9 Reasons Data Warehouse Projects Fail". 4 December 2014. Retrieved 30 April 2017.
  139. Huang, Green, and Loo, "Datalog and Emerging Applications", SIGMOD 2011 (PDF), UC Davis.
  140. Steele, Katie; Stefánsson, H. Orri. "Decision Theory". The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Edward N. Zalta (ed.).
  141. Lloyd, J.W., Practical Advantages of Declarative Programming
  142. Bengio, Y.; Courville, A.; Vincent, P. (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:1206.5538. doi:10.1109/tpami.2013.50.
  143. Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637.
  144. Bengio, Yoshua; LeCun, Yann; Hinton, Geoffrey (2015). "Deep Learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442.
  145. "About Us | DeepMind". DeepMind.
  146. "A return to Paris | DeepMind". DeepMind.
  147. "The Last AI Breakthrough DeepMind Made Before Google Bought It". The Physics arXiv Blog. 29 January 2014. Retrieved 12 October 2014.
  148. Graves, Alex; Wayne, Greg; Danihelka, Ivo (2014). "Neural Turing Machines". arXiv:1410.5401 [cs.NE].
  149. Best of 2014: Google's Secretive DeepMind Startup Unveils a "Neural Turing Machine", MIT Technology Review
  150. Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago (12 October 2016). "Hybrid computing using a neural network with dynamic external memory". Nature. 538 (7626): 471–476. Bibcode:2016Natur.538..471G. doi:10.1038/nature20101. ISSN 1476-4687. PMID 27732574.
  151. Kohs, Greg (29 September 2017), AlphaGo, Ioannis Antonoglou, Lucas Baker, Nick Bostrom, retrieved 9 January 2018
  152. Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy; Simonyan, Karen; Hassabis, Demis (5 December 2017). "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm". arXiv:1712.01815 [cs.AI].
  153. Sikos, Leslie F. (2017). Description Logics in Multimedia Reasoning. Cham: Springer International Publishing. doi:10.1007/978-3-319-54066-5. ISBN 978-3-319-54066-5.
  154. Roweis, S. T.; Saul, L. K. (2000). "Nonlinear Dimensionality Reduction by Locally Linear Embedding". Science. 290 (5500): 2323–2326. Bibcode:2000Sci...290.2323R. doi:10.1126/science.290.5500.2323. PMID 11125150.
  155. Pudil, P.; Novovičová, J. (1998). "Novel Methods for Feature Subset Selection with Respect to Problem Knowledge". In Liu, Huan; Motoda, Hiroshi (eds.). Feature Extraction, Construction and Selection. p. 101. doi:10.1007/978-1-4615-5725-8_7. ISBN 978-1-4613-7622-4.
  156. Demazeau, Yves, and J-P. Müller, eds. Decentralized Ai. Vol. 2. Elsevier, 1990.
  157. Hendrickx, Iris; Van den Bosch, Antal (October 2005). "Hybrid algorithms with Instance-Based Classification". Machine Learning: ECML2005. Springer. pp. 158–169.
  158. Adam Ostrow (5 March 2011). "Roger Ebert's Inspiring Digital Transformation". Mashable Entertainment. Retrieved 12 September 2011. With the help of his wife, two colleagues and the Alex-equipped MacBook that he uses to generate his computerized voice, famed film critic Roger Ebert delivered the final talk at the TED conference on Friday in Long Beach, California....
  159. Lee, Jennifer 8. (7 March 2011). "Roger Ebert Tests His Vocal Cords, and Comedic Delivery". The New York Times. Retrieved 12 September 2011. Now perhaps, there is the Ebert Test, a way to see if a synthesized voice can deliver humor with the timing to make an audience laugh.... He proposed the Ebert Test as a way to gauge the humanness of a synthesized voice.
  160. "Roger Ebert's Inspiring Digital Transformation". Tech News. 5 March 2011. Retrieved 12 September 2011. Meanwhile, the technology that enables Ebert to "speak" continues to see improvements – for example, adding more realistic inflection for question marks and exclamation points. In a test of that, which Ebert called the "Ebert test" for computerized voices,
  161. Alex_Pasternack (18 April 2011). "A MacBook May Have Given Roger Ebert His Voice, But An iPod Saved His Life (Video)". Motherboard. Retrieved 12 September 2011. He calls it the "Ebert Test," after Turing's AI standard...
  162. Herbert Jaeger and Harald Haas. Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication. Science 2 April 2004: Vol. 304. no. 5667, pp. 78 – 80 doi:10.1126/science.1091277 PDF
  163. Herbert Jaeger (2007) Echo State Network. Scholarpedia.
  164. Serenko, Alexander; Bontis, Nick; Detlor, Brian (2007). "End-user adoption of animated interface agents in everyday work applications" (PDF). Behaviour and Information Technology. 26 (2): 119–132. doi:10.1080/01449290500260538.
  165. Vikhar, P. A. "Evolutionary algorithms: A critical review and its future prospects". Proceedings of the 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC). Jalgaon, 2016, pp. 261-265. ISBN 978-1-5090-0467-6.
  166. Russell, Stuart; Norvig, Peter (2009). "26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  167. Bostrom, Nick (2002). "Existential risks". Journal of Evolution and Technology. 9 (1): 1–31.
  168. "Your Artificial Intelligence Cheat Sheet". Slate. 1 April 2016. Retrieved 16 May 2016.
  169. Jackson, Peter (1998), Introduction To Expert Systems (3 ed.), Addison Wesley, p. 2, ISBN 978-0-201-87686-4
  170. "Conventional programming". Retrieved 15 September 2013.
  171. Martignon, Laura; Vitouch, Oliver; Takezawa, Masanori; Forster, Malcolm. "Naive and Yet Enlightened: From Natural Frequencies to Fast and Frugal Decision Trees", published in Thinking : Psychological perspectives on reasoning, judgement and decision making (David Hardman and Laura Macchi; editors), Chichester: John Wiley & Sons, 2003.
  172. Y. Bengio; A. Courville; P. Vincent (2013). "Representation Learning: A Review and New Perspectives". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (8): 1798–1828. arXiv:1206.5538. doi:10.1109/tpami.2013.50. PMID 23787338.
  173. Hodgson, Dr. J. P. E., "First Order Logic", Saint Joseph's University, Philadelphia, 1995.
  174. Hughes, G. E., & Cresswell, M. J., A New Introduction to Modal Logic (London: Routledge, 1996), p.161.
  175. Feigenbaum, Edward (1988). The Rise of the Expert Company. Times Books. p. 318. ISBN 978-0-8129-1731-4.
  176. Hayes, Patrick. "The Frame Problem and Related Problems in Artificial Intelligence" (PDF). University of Edinburgh.
  177. Sardar, Z. (2010) The Namesake: Futures; futures studies; futurology; futuristic; Foresight -- What’s in a name? Futures, 42 (3), pp. 177–184.
  178. Pedrycz, Witold (1993). Fuzzy control and fuzzy systems (2 ed.). Research Studies Press Ltd.
  179. Hájek, Petr (1998). Metamathematics of fuzzy logic (4 ed.). Springer Science & Business Media.
  180. D. Dubois and H. Prade (1988) Fuzzy Sets and Systems. Academic Press, New York.
  181. Liang, Lily R.; Lu, Shiyong; Wang, Xuena; Lu, Yi; Mandal, Vinay; Patacsil, Dorrelyn; Kumar, Deepak (2006). "FM-test: A fuzzy-set-theory-based approach to differential gene expression data analysis". BMC Bioinformatics. 7: S7. doi:10.1186/1471-2105-7-S4-S7. PMC 1780132. PMID 17217525.
  182. Myerson, Roger B. (1991). Game Theory: Analysis of Conflict, Harvard University Press, p. 1. Chapter-preview links, pp. vii–xi.
  183. Mitchell 1996, p. 2.
  184. Trudeau, Richard J. (1993). Introduction to Graph Theory (Corrected, enlarged republication. ed.). New York: Dover Pub. p. 19. ISBN 978-0-486-67870-2. Retrieved 8 August 2012. A graph is an object consisting of two sets called its vertex set and its edge set.
  185. Nikolaos G. Bourbakis (1998). Artificial Intelligence and Automation. World Scientific. p. 381. ISBN 9789810226374. Retrieved 20 April 2018.
  186. Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young (March 2017). "Use of Graph Database for the Integration of Heterogeneous Biological Data". Genomics & Informatics. 15 (1): 19–27. doi:10.5808/GI.2017.15.1.19. ISSN 1598-866X. PMC 5389944. PMID 28416946.
  187. Pearl, Judea (1984). Heuristics: intelligent search strategies for computer problem solving. United States: Addison-Wesley Pub. Co., Inc., Reading, MA. p. 3. OSTI 5127296.
  188. E. K. Burke, E. Hart, G. Kendall, J. Newall, P. Ross, and S. Schulenburg, Hyper-heuristics: An emerging direction in modern search technology, Handbook of Metaheuristics (F. Glover and G. Kochenberger, eds.), Kluwer, 2003, pp. 457–474.
  189. P. Ross, Hyper-heuristics, Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques (E. K. Burke and G. Kendall, eds.), Springer, 2005, pp. 529-556.
  190. E. Ozcan, B. Bilgin, E. E. Korkmaz, A Comprehensive Analysis of Hyper-heuristics, Intelligent Data Analysis, 12:1, pp. 3-23, 2008.
  191. "IEEE CIS Scope".
  192. "Control of Machining Processes - Purdue ME Manufacturing Laboratories".
  193. Hoy, Matthew B. (2018). "Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants". Medical Reference Services Quarterly. 37 (1): 81–88. doi:10.1080/02763869.2018.1404391. PMID 29327988.
  194. Chevallier, Arnaud (2016). Strategic thinking in complex problem solving. Oxford; New York: Oxford University Press. doi:10.1093/acprof:oso/9780190463908.001.0001. ISBN 9780190463908. OCLC 940455195.
  195. "Strategy survival guide: Issue trees". London: Prime Minister's Strategy Unit. July 2004. Archived from the original on 17 February 2012. Retrieved 6 October 2018. Also available in PDF format.
  196. Paskin, Mark. "A Short Course on Graphical Models" (PDF). Stanford.
  197. Woods, W. A.; Schmolze, J. G. (1992). "The KL-ONE family". Computers & Mathematics with Applications. 23 (2–5): 133. doi:10.1016/0898-1221(92)90139-9.
  198. Brachman, R. J.; Schmolze, J. G. (1985). "An Overview of the KL-ONE Knowledge Representation System" (PDF). Cognitive Science. 9 (2): 171. doi:10.1207/s15516709cog0902_1.
  199. D.A. Duce, G.A. Ringland (1988). Approaches to Knowledge Representation, An Introduction. Research Studies Press, Ltd. ISBN 978-0-86380-064-1.
  200. Roger Schank; Robert Abelson (1977). Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures. Lawrence Erlbaum Associates, Inc.
  201. "Knowledge Representation in Neural Networks - deepMinds". deepMinds. 16 August 2018. Retrieved 16 August 2018.
  202. Edwin D. Reilly (2003). Milestones in computer science and information technology. Greenwood Publishing Group. pp. 156–157. ISBN 978-1-57356-521-9.
  203. Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276
  204. Siegelmann, Hava T.; Sontag, Eduardo D. (1992). On the Computational Power of Neural Nets. ACM. COLT '92. pp. 440–449. doi:10.1145/130385.130432. ISBN 978-0897914970.
  205. Gagniuc, Paul A. (2017). Markov Chains: From Theory to Implementation and Experimentation. USA, NJ: John Wiley & Sons. pp. 1–235. ISBN 978-1-119-38755-8.
  206. "Markov chain | Definition of Markov chain in US English by Oxford Dictionaries". Oxford Dictionaries | English. Retrieved 14 December 2017.
  207. Definition at "Brilliant Math and Science Wiki". Retrieved on 12 May 2019
  208. "The Nature of Mathematical Programming Archived 2014-03-05 at the Wayback Machine," Mathematical Programming Glossary, INFORMS Computing Society.
  209. Wang, Wenwu (1 July 2010). Machine Audition: Principles, Algorithms and Systems. IGI Global. ISBN 9781615209194.
  210. "Machine Audition: Principles, Algorithms and Systems" (PDF).
  211. Malcolm Tatum (October 3, 2012). "What is Machine Perception".
  212. Alexander Serov (January 29, 2013). "Subjective Reality and Strong Artificial Intelligence" (PDF).
  213. "Machine Perception & Cognitive Robotics Laboratory". Retrieved 18 June 2016.
  214. Mechanical and Mechatronics Engineering Department. "What is Mechatronics Engineering?". Prospective Student Information. University of Waterloo. Retrieved 30 May 2011.
  215. Faculty of Mechatronics, Informatics and Interdisciplinary Studies TUL. "Mechatronics (Bc., Ing., PhD.)". Retrieved 15 April 2011.
  216. Franke; Siezen, Teusink (2005). "Reconstructing the metabolic network of a bacterium from its genome". Trends in Microbiology. 13 (11): 550–558. doi:10.1016/j.tim.2005.09.001. PMID 16169729.
  217. R. Balamurugan; A.M. Natarajan; K. Premalatha (2015). "Stellar-Mass Black Hole Optimization for Biclustering Microarray Gene Expression Data". Applied Artificial Intelligence an International Journal. 29 (4): 353–381. doi:10.1080/08839514.2015.1016391.
  218. Bianchi, Leonora; Marco Dorigo; Luca Maria Gambardella; Walter J. Gutjahr (2009). "A survey on metaheuristics for stochastic combinatorial optimization". Natural Computing. 8 (2): 239–287. doi:10.1007/s11047-008-9098-4.
  219. Enderton, Herbert B. (2001). A Mathematical Introduction to Logic (Second ed.). Harcourt Academic Press, Burlington MA. p. 110. ISBN 978-0-12-238452-3.
  220. "Naive Semantics to Support Automated Database Design", 'IEEE Transactions on Knowledge and Data Engineering, Volume 14, issue 1 (January 2002) by V. C. Storey, R. C. Goldstein and H. Ullrich
  221. Microsoft (11 May 2007), Using early binding and late binding in Automation, Microsoft, retrieved 11 May 2009
  222. strictly speaking a URIRef
  223. "Resource Description Framework (RDF) Model and Syntax Specification"
  224. Miller, Lance A. "Natural language programming: Styles, strategies, and contrasts." IBM Systems Journal 20.2 (1981): 184–215.
  225. "Deep Minds: An Interview with Google's Alex Graves & Koray Kavukcuoglu". Retrieved 17 May 2016.
  226. Graves, Alex; Wayne, Greg; Danihelka, Ivo (2014). "Neural Turing Machines". arXiv:1410.5401 [cs.NE].
  227. Krucoff, Max O.; Rahimpour, Shervin; Slutzky, Marc W.; Edgerton, V. Reggie; Turner, Dennis A. (1 January 2016). "Enhancing Nervous System Recovery through Neurobiologics, Neural Interface Training, and Neurorehabilitation". Frontiers in Neuroscience. 10: 584. doi:10.3389/fnins.2016.00584. PMC 5186786. PMID 28082858.
  228. Monroe, D. (2014). "Neuromorphic computing gets ready for the (really) big time". Communications of the ACM. 57 (6): 13–15. doi:10.1145/2601069.
  229. Zhao, W. S.; Agnus, G.; Derycke, V.; Filoramo, A.; Bourgoin, J. -P.; Gamrat, C. (2010). "Nanotube devices based crossbar architecture: Toward neuromorphic computing". Nanotechnology. 21 (17): 175202. Bibcode:2010Nanot..21q5202Z. doi:10.1088/0957-4484/21/17/175202. PMID 20368686.
  230. The Human Brain Project SP 9: Neuromorphic Computing Platform on YouTube
  231. Mead, Carver (1990). "Neuromorphic electronic systems" (PDF). Proceedings of the IEEE. 78 (10): 1629–1636. doi:10.1109/5.58356.
  232. Maan, A. K.; Jayadevi, D. A.; James, A. P. (1 January 2016). "A Survey of Memristive Threshold Logic Circuits". IEEE Transactions on Neural Networks and Learning Systems. PP (99): 1734–1746. arXiv:1604.07121. Bibcode:2016arXiv160407121M. doi:10.1109/TNNLS.2016.2547842. ISSN 2162-237X. PMID 27164608.
  233. "A Survey of Spintronic Architectures for Processing-in-Memory and Neural Networks", JSA, 2018
  234. Zhou, You; Ramanathan, S. (1 August 2015). "Mott Memory and Neuromorphic Devices". Proceedings of the IEEE. 103 (8): 1289–1310. doi:10.1109/JPROC.2015.2431914. ISSN 0018-9219.
  235. Copeland, Jack (May 2000). "What is Artificial Intelligence?". Retrieved 7 November 2015.
  236. Kleinberg, Jon; Tardos, Éva (2006). Algorithm Design (2nd ed.). Addison-Wesley. p. 464. ISBN 0-321-37291-3.
  237. Cobham, Alan (1965). "The intrinsic computational difficulty of functions". Proc. Logic, Methodology, and Philosophy of Science II. North Holland.
  238. "What is Occam's Razor?". Retrieved 1 June 2019.
  239. "OpenAI shifts from nonprofit to 'capped-profit' to attract capital". TechCrunch. Retrieved 2019-05-10.
  240. "OpenCog: Open-Source Artificial General Intelligence for Virtual Worlds | CyberTech News". 6 March 2009. Archived from the original on 6 March 2009. Retrieved 1 October 2016.
  241. St. Laurent, Andrew M. (2008). Understanding Open Source and Free Software Licensing. O'Reilly Media. p. 4. ISBN 9780596553951.
  242. Levine, Sheen S.; Prietula, Michael J. (30 December 2013). "Open Collaboration for Innovation: Principles and Performance". Organization Science. 25 (5): 1414–1433. arXiv:1406.7541. doi:10.1287/orsc.2013.0872. ISSN 1047-7039.
  243. Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning (PDF). Springer. p. vii. Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field, and together they have undergone substantial development over the past ten years.
  244. Hughes, G. E., & Cresswell, M. J., A New Introduction to Modal Logic (London: Routledge, 1996), p.161.
  245. Nyce, Charles (2007), Predictive Analytics White Paper (PDF), American Institute for Chartered Property Casualty Underwriters/Insurance Institute of America, p. 1
  246. Eckerson, Wayne (10 May 2007), Extending the Value of Your Data Warehousing Investment, The Data Warehouse Institute
  247. Karl R. Popper, The Myth of Framework, London (Routledge) 1994, chap. 8.
  248. Karl R. Popper, The Poverty of Historicism, London (Routledge) 1960, chap. iv, sect. 31.
  249. "Probabilistic programming does in 50 lines of code what used to take thousands". 13 April 2015. Retrieved 13 April 2015.
  250. "Probabilistic Programming".
  251. Pfeffer, Avrom (2014). Practical Probabilistic Programming. Manning Publications. p. 28. ISBN 978-1-61729-233-0.
  252. Clocksin, William F.; Mellish, Christopher S. (2003). Programming in Prolog. Berlin ; New York: Springer-Verlag. ISBN 978-3-540-00678-7.
  253. Bratko, Ivan (2012). Prolog programming for artificial intelligence (4th ed.). Harlow, England ; New York: Addison Wesley. ISBN 978-0-321-41746-6.
  254. Covington, Michael A. (1994). Natural language processing for Prolog programmers. Englewood Cliffs, N.J.: Prentice Hall. ISBN 978-0-13-629213-5.
  255. Lloyd, J. W. (1984). Foundations of logic programming. Berlin: Springer-Verlag. ISBN 978-3-540-13299-8.
  256. Kuhlman, Dave. "A Python Book: Beginning Python, Advanced Python, and Python Exercises". Section 1.1. Archived from the original (PDF) on 23 June 2012.
  257. Reiter, Raymond (2001). Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. Cambridge, Massachusetts: The MIT Press. pp. 20–22. ISBN 9780262527002.
  258. Thielscher, Michael (September 2001). "The Qualification Problem: A solution to the problem of anomalous models". Artificial Intelligence. 131 (1–2): 1–37. doi:10.1016/S0004-3702(01)00131-X.
  259. The National Academies of Sciences, Engineering, and Medicine (2019). Grumbling, Emily; Horowitz, Mark (eds.). Quantum Computing: Progress and Prospects (2018). Washington, DC: National Academies Press. p. I-5. doi:10.17226/25196. ISBN 978-0-309-47969-1. OCLC 1081001288.
  260. R language and environment
    • Hornik, Kurt (4 October 2017). "R FAQ". The Comprehensive R Archive Network. 2.1 What is R?. Retrieved 6 August 2018.
    R Foundation
    • Hornik, Kurt (4 October 2017). "R FAQ". The Comprehensive R Archive Network. 2.13 What is the R Foundation?. Retrieved 6 August 2018.
    The R Core Team asks authors who use R in their data analysis to cite the software using:
    • R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL
  261. widely used
  262. Vance, Ashlee (6 January 2009). "Data Analysts Captivated by R's Power". New York Times. Retrieved 6 August 2018. R is also the name of a popular programming language used by a growing number of data analysts inside corporations and academia. It is becoming their lingua franca...
  263. Broomhead, D. S.; Lowe, David (1988). Radial basis functions, multi-variable functional interpolation and adaptive networks (Technical report). RSRE. 4148.
  264. Broomhead, D. S.; Lowe, David (1988). "Multivariable functional interpolation and adaptive networks" (PDF). Complex Systems. 2: 321–355.
  265. Schwenker, Friedhelm; Kestler, Hans A.; Palm, Günther (2001). "Three learning phases for radial-basis-function networks". Neural Networks. 14 (4–5): 439–458. doi:10.1016/s0893-6080(01)00027-2.
  266. Ho, Tin Kam (1995). Random Decision Forests (PDF). Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, QC, 14–16 August 1995. pp. 278–282. Archived from the original (PDF) on 17 April 2016. Retrieved 5 June 2016.
  267. Ho TK (1998). "The Random Subspace Method for Constructing Decision Forests" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 20 (8): 832–844. doi:10.1109/34.709601.
  268. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2008). The Elements of Statistical Learning (2nd ed.). Springer. ISBN 0-387-95284-5.
  269. Graves, A.; Liwicki, M.; Fernandez, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (2009). "A Novel Connectionist System for Improved Unconstrained Handwriting Recognition" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (5): 855–868. doi:10.1109/tpami.2008.137. PMID 19299860.
  270. Sak, Hasim; Senior, Andrew; Beaufays, Francoise (2014). "Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling" (PDF).
  271. Li, Xiangang; Wu, Xihong (15 October 2014). "Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition". arXiv:1410.4281 [cs.CL].
  272. Kaelbling, Leslie P.; Littman, Michael L.; Moore, Andrew W. (1996). "Reinforcement Learning: A Survey". Journal of Artificial Intelligence Research. 4: 237–285. arXiv:cs/9605103. doi:10.1613/jair.301. Archived from the original on 20 November 2001.
  273. Schrauwen, Benjamin, David Verstraeten, and Jan Van Campenhout. "An overview of reservoir computing: theory, applications, and implementations." Proceedings of the European Symposium on Artificial Neural Networks ESANN 2007, pp. 471–482.
  274. Maass, Wolfgang, T. Natschläger, and H. Markram. "Real-time computing without stable states: A new framework for neural computation based on perturbations." Neural Computation 14(11): 2531–2560 (2002).
  275. Jaeger, Herbert, "The echo state approach to analyzing and training recurrent neural networks." Technical Report 154 (2001), German National Research Center for Information Technology.
  276. Echo state network, Scholarpedia
  277. "XML and Semantic Web W3C Standards Timeline" (PDF). 4 February 2012.
  278. See, for example, Boolos and Jeffrey, 1974, chapter 11.
  279. John F. Sowa (1987). "Semantic Networks". In Stuart C Shapiro (ed.). Encyclopedia of Artificial Intelligence. Retrieved 29 April 2008.
  280. O'Hearn, P. W.; Pym, D. J. (June 1999). "The Logic of Bunched Implications". Bulletin of Symbolic Logic. 5 (2): 215–244. doi:10.2307/421090. JSTOR 421090.
  281. Abran et al. 2004, pp. 1–1
  282. ACM (2007). "Computing Degrees & Careers". ACM. Retrieved 23 November 2010.
  283. Laplante, Phillip (2007). What Every Engineer Should Know about Software Engineering. Boca Raton: CRC. ISBN 978-0-8493-7228-5. Retrieved 21 January 2011.
  284. Jim Rapoza (2 May 2006). "SPARQL Will Make the Web Shine". eWeek. Retrieved 17 January 2007.
  285. Segaran, Toby; Evans, Colin; Taylor, Jamie (2009). Programming the Semantic Web. O’Reilly Media. p. 84. ISBN 978-0-596-15381-6.
  286. Maass, Wolfgang (1997). "Networks of spiking neurons: The third generation of neural network models". Neural Networks. 10 (9): 1659–1671. doi:10.1016/S0893-6080(97)00011-7. ISSN 0893-6080.
  287. "What is stateless? - Definition from".
  288. Lise Getoor and Ben Taskar: Introduction to statistical relational learning, MIT Press, 2007
  289. Ryan A. Rossi, Luke K. McDowell, David W. Aha, and Jennifer Neville, "Transforming Graph Data for Statistical Relational Learning." Journal of Artificial Intelligence Research (JAIR), Volume 45 (2012), pp. 363–441.
  290. Spall, J. C. (2003). Introduction to Stochastic Search and Optimization. Wiley. ISBN 978-0-471-33052-3.
  291. Language Understanding Using Two-Level Stochastic Models by F. Pla et al., 2001, Springer Lecture Notes in Computer Science ISBN 978-3-540-42557-1
  292. Stuart J. Russell, Peter Norvig (2010) Artificial Intelligence: A Modern Approach, Third Edition, Prentice Hall ISBN 9780136042594.
  293. Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012) Foundations of Machine Learning, The MIT Press ISBN 9780262018258.
  294. Cortes, Corinna; Vapnik, Vladimir N. (1995). "Support-vector networks" (PDF). Machine Learning. 20 (3): 273–297. doi:10.1007/BF00994018.
  295. Beni, G., Wang, J. (1993). "Swarm Intelligence in Cellular Robotic Systems". Proceed. NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, June 26–30 (1989). pp. 703–712. doi:10.1007/978-3-642-58069-7_38. ISBN 978-3-642-63461-1.
  296. Haugeland 1985, p. 255.
  297. Poole, Mackworth & Goebel 1998, p. 1.
  298. Cadwalladr, Carole (2014). "Are the robots about to rise? Google's new director of engineering thinks so…" The Guardian. Guardian News and Media Limited.
  299. "Collection of sources defining "singularity"". Retrieved 17 April 2019.
  300. Eden, Amnon H.; Moor, James H. (2012). Singularity hypotheses: A Scientific and Philosophical Assessment. Dordrecht: Springer. pp. 1–2. ISBN 9783642325601.
  301. Richard Sutton & Andrew Barto (1998). Reinforcement Learning. MIT Press. ISBN 978-0-585-02445-5. Archived from the original on 30 March 2017.
  302. Pellionisz, A., Llinás, R. (1980). "Tensorial Approach To The Geometry Of Brain Function: Cerebellar Coordination Via A Metric Tensor" (PDF). Neuroscience. 5 (7): 1125–1136. doi:10.1016/0306-4522(80)90191-8. PMID 6967569.
  303. Pellionisz, A., Llinás, R. (1985). "Tensor Network Theory Of The Metaorganization Of Functional Geometries In The Central Nervous System". Neuroscience. 16 (2): 245–273. doi:10.1016/0306-4522(85)90001-6. PMID 4080158.
  304. "TensorFlow: Open source machine learning". "It is machine learning software being used for various kinds of perceptual and language understanding tasks" — Jeffrey Dean, minute 0:47 / 2:17 from YouTube clip.
  305. Michael Sipser (2013). Introduction to the Theory of Computation (3rd ed.). Cengage Learning. ISBN 978-1-133-18779-0. "central areas of the theory of computation: automata, computability, and complexity." (p. 1)
  306. Thompson, William R. "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples". Biometrika, 25(3–4):285–294, 1933.
  307. Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband and Zheng Wen (2018), "A Tutorial on Thompson Sampling", Foundations and Trends in Machine Learning: Vol. 11: No. 1, pp. 1–96.
  308. Mercer, Calvin. Religion and Transhumanism: The Unknown Future of Human Enhancement. Praeger.
  309. Bostrom, Nick (2005). "A history of transhumanist thought" (PDF). Journal of Evolution and Technology. Retrieved 21 February 2006.
  310. Turing originally suggested a teleprinter, one of the few text-only communication systems available in 1950. (Turing 1950, p. 433)
  311. Pierce 2002, p. 1: "A type system is a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute."
  312. Cardelli 2004, p. 1: "The fundamental purpose of a type system is to prevent the occurrence of execution errors during the running of a program."
  313. Hinton, Geoffrey; Sejnowski, Terrence (1999). Unsupervised Learning: Foundations of Neural Computation. MIT Press. ISBN 978-0262581684.
  314. Seth Colaner; Matthew Humrick (3 January 2016). "A third type of processor for AR/VR: Movidius' Myriad 2 VPU". Tom's Hardware.
  315. Prasid Banerjee (28 March 2016). "The rise of VPUs: Giving Eyes to Machines".
  316. "DeepQA Project: FAQ". IBM. Retrieved 11 February 2011.
  317. Ferrucci, David; Levas, Anthony; Bagchi, Sugato; Gondek, David; Mueller, Erik T. (1 June 2013). "Watson: Beyond Jeopardy!". Artificial Intelligence. 199: 93–105. doi:10.1016/j.artint.2012.06.009.
  318. Hale, Mike (8 February 2011). "Actors and Their Roles for $300, HAL? HAL!". The New York Times. Retrieved 11 February 2011.
  319. "The DeepQA Project". IBM Research. Retrieved 18 February 2011.
  320. mentions narrow AI. Published 1 April 2013, retrieved 16 February 2014.
  321. AI researcher Ben Goertzel explains why he became interested in AGI instead of narrow AI. Published 18 Oct 2013. Retrieved 16 February 2014.
  322. TechCrunch discusses AI App building regarding Narrow AI. Published 16 Oct 2015, retrieved 17 Oct 2015.


  1. Polynomial time refers to how quickly the number of operations needed by an algorithm grows relative to the size of the problem; it is therefore a measure of the algorithm's efficiency.
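As an illustrative sketch (not part of the source text), the growth described in this footnote can be made concrete by counting the operations a quadratic-time algorithm performs as the input size increases; the function name and worst-case input below are assumptions chosen for the example:

```python
def bubble_sort_ops(n):
    """Count comparisons made by bubble sort on a worst-case (reverse-sorted) list of size n."""
    data = list(range(n, 0, -1))  # reverse-sorted input: worst case for bubble sort
    ops = 0
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            ops += 1  # one comparison
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return ops

# The comparison count is n*(n-1)/2, a polynomial in n (quadratic growth):
print(bubble_sort_ops(10))   # 45
print(bubble_sort_ops(100))  # 4950
```

Because the operation count is bounded by a polynomial in the input size, such an algorithm runs in polynomial time; an exponential-time algorithm's count would instead grow like 2^n.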
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.