[
{
"node_id": 0,
"label": 2,
"text": "Title: The megaprior heuristic for discovering protein sequence patterns \nAbstract: Several computer algorithms for discovering patterns in groups of protein sequences are in use that are based on fitting the parameters of a statistical model to a group of related sequences. These include hidden Markov model (HMM) algorithms for multiple sequence alignment, and the MEME and Gibbs sampler algorithms for discovering motifs. These algorithms are sometimes prone to producing models that are incorrect because two or more patterns have been combined. The statistical model produced in this situation is a convex combination (weighted average) of two or more different models. This paper presents a solution to the problem of convex combinations in the form of a heuristic based on using extremely low variance Dirichlet mixture priors as part of the statistical model. This heuristic, which we call the megaprior heuristic, increases the strength (i.e., decreases the variance) of the prior in proportion to the size of the sequence dataset. This causes each column in the final model to strongly resemble the mean of a single component of the prior, regardless of the size of the dataset. We describe the cause of the convex combination problem, analyze it mathematically, motivate and describe the implementation of the megaprior heuristic, and show how it can effectively eliminate the problem of convex combinations in protein sequence pattern discovery. ",
"neighbors": [
8,
14,
258,
435,
544
],
"mask": "Test"
},
{
"node_id": 1,
"label": 5,
"text": "Title: Applications of machine learning: a medical follow up study \nAbstract: This paper describes preliminary work that aims to apply some learning strategies to a medical follow-up study. An investigation of the application of three machine learning algorithms-1R, FOIL and InductH to identify risk factors that govern the colposuspension cure rate has been made. The goal of this study is to induce a generalised description or explanation of the classification attribute, colposuspension cure rate (completely cured, improved, unchanged and worse) from the 767 examples in the questionnaires. We looked for a set of rules that described which risk factors result in differences of cure rate. The results were encouraging, and indicate that machine learning can play a useful role in large scale medical problem solving. ",
"neighbors": [
344
],
"mask": "Train"
},
{
"node_id": 2,
"label": 4,
"text": "Title: Submitted to NIPS96, Section: Applications. Preference: Oral presentation Reinforcement Learning for Dynamic Channel Allocation in\nAbstract: In cellular telephone systems, an important problem is to dynamically allocate the communication resource (channels) so as to maximize service in a stochastic caller environment. This problem is naturally formulated as a dynamic programming problem and we use a reinforcement learning (RL) method to find dynamic channel allocation policies that are better than previous heuristic solutions. The policies obtained perform well for a broad variety of call traffic patterns. We present results on a large cellular system In cellular communication systems, an important problem is to allocate the communication resource (bandwidth) so as to maximize the service provided to a set of mobile callers whose demand for service changes stochastically. A given geographical area is divided into mutually disjoint cells, and each cell serves the calls that are within its boundaries (see Figure 1a). The total system bandwidth is divided into channels, with each channel centered around a frequency. Each channel can be used simultaneously at different cells, provided these cells are sufficiently separated spatially, so that there is no interference between them. The minimum separation distance between simultaneous reuse of the same channel is called the channel reuse constraint . When a call requests service in a given cell either a free channel (one that does not violate the channel reuse constraint) may be assigned to the call, or else the call is blocked from the system; this will happen if no free channel can be found. Also, when a mobile caller crosses from one cell to another, the call is \"handed off\" to the cell of entry; that is, a new free channel is provided to the call at the new cell. If no such channel is available, the call must be dropped/disconnected from the system. One objective of a channel allocation policy is to allocate the available channels to calls so that the number of blocked calls is minimized. An additional objective is to minimize the number of calls that are dropped when they are handed off to a busy cell. These two objectives must be weighted appropriately to reflect their relative importance, since dropping existing calls is generally more undesirable than blocking new calls. with approximately 70 49 states.",
"neighbors": [
410,
471,
552,
565
],
"mask": "Train"
},
{
"node_id": 3,
"label": 4,
"text": "Title: Planning and Acting in Partially Observable Stochastic Domains \nAbstract: In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (mdps) and partially observable mdps (pomdps). We then outline a novel algorithm for solving pomdps off line and show how, in some cases, a finite-memory controller can be extracted from the solution to a pomdp. We conclude with a discussion of how our approach relates to previous work, the complexity of finding exact solutions to pomdps, and of some possibilities for finding approximate solutions. Consider the problem of a robot navigating in a large office building. The robot can move from hallway intersection to intersection and can make local observations of its world. Its actions are not completely reliable, however. Sometimes, when it intends to move, it stays where it is or goes too far; sometimes, when it intends to turn, it overshoots. It has similar problems with observation. Sometimes a corridor looks like a corner; sometimes a T-junction looks like an L-junction. How can such an error-plagued robot navigate, even given a map of the corridors? In general, the robot will have to remember something about its history of actions and observations and use this information, together with its knowledge of the underlying dynamics of the world (the map and other information), to maintain an estimate of its location. Many engineering applications follow this approach, using methods like the Kalman filter [18] to maintain a running estimate of the robot's spatial uncertainty, expressed as an ellipsoid or normal distribution in Cartesian space. This approach will not do for our robot, though. Its uncertainty may be discrete: it might be almost certain that it is in the north-east corner of either the fourth or the seventh floors, though it admits a chance that it is on the fifth floor, as well. Then, given an uncertain estimate of its location, the robot has to decide what actions to take. In some cases, it might be sufficient to ignore its uncertainty and take actions that would be appropriate for the most likely location. In other cases, it might be better for ",
"neighbors": [
197,
463,
601
],
"mask": "Test"
},
{
"node_id": 4,
"label": 3,
"text": "Note: c Massachusetts Institute of Technology The thesis consists of the development of this Michael I. Jordan Title: Professor \nAbstract: Graphical models enhance the representational power of probability models through qualitative characterization of their properties. This also leads to greater efficiency in terms of the computational algorithms that empower such representations. The increasing complexity of these models, however, quickly renders exact probabilistic calculations infeasible. We propose a principled framework for approximating graphical models based on variational methods. We develop variational techniques from the perspective that unifies and expands their applicability to graphical models. These methods allow the (recursive) computation of upper and lower bounds on the quantities of interest. Such bounds yield considerably more information than mere approximations and provide an inherent error metric for tailoring the approximations individually to the cases considered. These desirable properties, concomitant to the variational methods, are unlikely to arise as a result of other deterministic or stochastic approximations. ",
"neighbors": [
170
],
"mask": "Train"
},
{
"node_id": 5,
"label": 3,
"text": "Title: Some Experiments with Real-time Decision Algorithms \nAbstract: Real-time Decision algorithms are a class of incremental resource-bounded [Horvitz, 89] or anytime [Dean, 93] algorithms for evaluating influence diagrams. We present a test domain for real-time decision algorithms, and the results of experiments with several Real-time Decision Algorithms in this domain. The results demonstrate high performance for two algorithms, a decision-evaluation variant of Incremental Probabilisitic Inference [DAmbrosio, 93] and a variant of an algorithm suggested by Goldszmidt, [Goldszmidt, 95], PK-reduced. We discuss the implications of these experimental results and explore the broader applicability of these algorithms.",
"neighbors": [
490,
2164
],
"mask": "Train"
},
{
"node_id": 6,
"label": 6,
"text": "Title: A Formal Framework for Speedup Learning from Problems and Solutions \nAbstract: Speedup learning seeks to improve the computational efficiency of problem solving with experience. In this paper, we develop a formal framework for learning efficient problem solving from random problems and their solutions. We apply this framework to two different representations of learned knowledge, namely control rules and macro-operators, and prove theorems that identify sufficient conditions for learning in each representation. Our proofs are constructive in that they are accompanied with learning algorithms. Our framework captures both empirical and explanation-based speedup learning in a unified fashion. We illustrate our framework with implementations in two domains: symbolic integration and Eight Puzzle. This work integrates many strands of experimental and theoretical work in machine learning, including empirical learning of control rules, macro-operator learning, ",
"neighbors": [
251,
490
],
"mask": "Train"
},
{
"node_id": 7,
"label": 2,
"text": "Title: Optimal Alignments in Linear Space using Automaton-derived Cost Functions (Extended Abstract) Submitted to CPM'96 \nAbstract: In a previous paper [SM95], we showed how finite automata could be used to define objective functions for assessing the quality of an alignment of two (or more) sequences. In this paper, we show some results of using such cost functions. We also show how to extend Hischberg's linear space algorithm [Hir75] to this setting, thus generalizing a result of Myers and Miller [MM88b]. ",
"neighbors": [
258
],
"mask": "Test"
},
{
"node_id": 8,
"label": 2,
"text": "Title: Meta-MEME: Motif-based Hidden Markov Models of Protein Families \nAbstract: In a previous paper [SM95], we showed how finite automata could be used to define objective functions for assessing the quality of an alignment of two (or more) sequences. In this paper, we show some results of using such cost functions. We also show how to extend Hischberg's linear space algorithm [Hir75] to this setting, thus generalizing a result of Myers and Miller [MM88b]. ",
"neighbors": [
0,
14,
258,
435,
751
],
"mask": "Validation"
},
{
"node_id": 9,
"label": 6,
"text": "Title: Online Learning versus O*ine Learning \nAbstract: We present an off-line variant of the mistake-bound model of learning. Just like in the well studied on-line model, a learner in the offline model has to learn an unknown concept from a sequence of elements of the instance space on which he makes \"guess and test\" trials. In both models, the aim of the learner is to make as few mistakes as possible. The difference between the models is that, while in the on-line model only the set of possible elements is known, in the off-line model the sequence of elements (i.e., the identity of the elements as well as the order in which they are to be presented) is known to the learner in advance. We give a combinatorial characterization of the number of mistakes in the off-line model. We apply this characterization to solve several natural questions that arise for the new model. First, we compare the mistake bounds of an off-line learner to those of a learner learning the same concept classes in the on-line scenario. We show that the number of mistakes in the on-line learning is at most a log n factor more than the off-line learning, where n is the length of the sequence. In addition, we show that if there is an off-line algorithm that does not make more than a constant number of mistakes for each sequence then there is an online algorithm that also does not make more than a constant number of mistakes. The second issue we address is the effect of the ordering of the elements on the number of mistakes of an off-line learner. It turns out that there are sequences on which an off-line learner can guarantee at most one mistake, yet a permutation of the same sequence forces him to err on many elements. We prove, however, that the gap, between the off-line mistake bounds on permutations of the same sequence of n-many elements, cannot be larger than a multiplicative factor of log n, and we present examples that obtain such a gap. ",
"neighbors": [
308,
453,
481,
761
],
"mask": "Train"
},
{
"node_id": 10,
"label": 2,
"text": "Title: GRKPACK: FITTING SMOOTHING SPLINE ANOVA MODELS FOR EXPONENTIAL FAMILIES \nAbstract: Wahba, Wang, Gu, Klein and Klein (1995) introduced Smoothing Spline ANalysis of VAriance (SS ANOVA) method for data from exponential families. Based on RKPACK, which fits SS ANOVA models to Gaussian data, we introduce GRKPACK: a collection of Fortran subroutines for binary, binomial, Poisson and Gamma data. We also show how to calculate Bayesian confidence intervals for SS ANOVA estimates. ",
"neighbors": [
192,
193,
280,
420,
519
],
"mask": "Train"
},
{
"node_id": 11,
"label": 1,
"text": "Title: Simple Genetic Programming for Supervised Learning Problems \nAbstract: This paper presents an evolutionary approach to finding learning rules to several supervised tasks. In this approach potential solutions are represented as variable length mathematical LISP S-expressions. Thus, it is similar to Genetic Programming (GP) but it employs a fixed set of non-problem-specific functions to solve a variety of problems. In this paper three Monk's and parity problems are tested. The results indicate the usefulness of the encoding schema in discovering learning rules for supervised learning problems with the emphasis on hard learning problems. The problems and future research directions are discussed within the context of GP practices. ",
"neighbors": [
624,
659
],
"mask": "Train"
},
{
"node_id": 12,
"label": 3,
"text": "Title: Estimating Bayes Factors via Posterior Simulation with the Laplace-Metropolis Estimator \nAbstract: The key quantity needed for Bayesian hypothesis testing and model selection is the marginal likelihood for a model, also known as the integrated likelihood, or the marginal probability of the data. In this paper we describe a way to use posterior simulation output to estimate marginal likelihoods. We describe the basic Laplace-Metropolis estimator for models without random effects. For models with random effects the compound Laplace-Metropolis estimator is introduced. This estimator is applied to data from the World Fertility Survey and shown to give accurate results. Batching of simulation output is used to assess the uncertainty involved in using the compound Laplace-Metropolis estimator. The method allows us to test for the effects of independent variables in a random effects model, and also to test for the presence of the random effects.",
"neighbors": [
84,
155
],
"mask": "Validation"
},
{
"node_id": 13,
"label": 0,
"text": "Title: Unifying Empirical and Explanation-Based Learning by Modeling the Utility of Learned Knowledge \nAbstract: The overfit problem in empirical learning and the utility problem in explanation-based learning describe a similar phenomenon: the degradation of performance due to an increase in the amount of learned knowledge. Plotting the performance of learned knowledge during the course of learning (the performance response) reveals a common trend for several learning methods. Modeling this trend allows a control system to constrain the amount of learned knowledge to achieve peak performance and avoid the general utility problem. Experiments evaluate a particular empirical model of the trend, and analysis of the learners derive several formal models. If, as evidence suggests, the general utility problem can be modeled using the same mechanisms for different learning paradigms, then the model serves to unify the paradigms into one framework capable of comparing and selecting different learning methods based on predicted achievable performance.",
"neighbors": [
482,
578,
1234
],
"mask": "Train"
},
{
"node_id": 14,
"label": 2,
"text": "Title: Hidden Markov Models in Computational Biology: Applications to Protein Modeling UCSC-CRL-93-32 Keywords: Hidden Markov Models,\nAbstract: Hidden Markov Models (HMMs) are applied to the problems of statistical modeling, database searching and multiple sequence alignment of protein families and protein domains. These methods are demonstrated on the globin family, the protein kinase catalytic domain, and the EF-hand calcium binding motif. In each case the parameters of an HMM are estimated from a training set of unaligned sequences. After the HMM is built, it is used to obtain a multiple alignment of all the training sequences. It is also used to search the SWISS-PROT 22 database for other sequences that are members of the given protein family, or contain the given domain. The HMM produces multiple alignments of good quality that agree closely with the alignments produced by programs that incorporate three-dimensional structural information. When employed in discrimination tests (by examining how closely the sequences in a database fit the globin, kinase and EF-hand HMMs), the HMM is able to distinguish members of these families from non-members with a high degree of accuracy. Both the HMM and PRO-FILESEARCH (a technique used to search for relationships between a protein sequence and multiply aligned sequences) perform better in these tests than PROSITE (a dictionary of sites and patterns in proteins). The HMM appears to have a slight advantage ",
"neighbors": [
0,
8,
31,
232,
242,
258,
268,
384,
393,
400,
435,
437,
443,
544,
613,
708,
736,
746,
751
],
"mask": "Train"
},
{
"node_id": 15,
"label": 2,
"text": "Title: Back Propagation is Sensitive to Initial Conditions \nAbstract: This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additional deterministic experiments were performed. The results of these experiments demonstrate the extreme sensitivity of back propagation to initial weight configuration. ",
"neighbors": [
80,
129,
152,
234,
253,
254,
322,
399,
538,
701
],
"mask": "Train"
},
{
"node_id": 16,
"label": 4,
"text": "Title: Exploration in Active Learning \nAbstract: This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additional deterministic experiments were performed. The results of these experiments demonstrate the extreme sensitivity of back propagation to initial weight configuration. ",
"neighbors": [
466,
552,
566,
1697
],
"mask": "Train"
},
{
"node_id": 17,
"label": 2,
"text": "Title: A Neural Network Model of Memory Consolidation \nAbstract: Some forms of memory rely temporarily on a system of brain structures located in the medial temporal lobe that includes the hippocampus. The recall of recent events is one task that relies crucially on the proper functioning of this system. As the event becomes less recent, the medial temporal lobe becomes less critical to the recall of the event, and the recollection appears to rely more upon the neocortex. It has been proposed that a process called consolidation is responsible for transfer of memory from the medial temporal lobe to the neocortex. We examine a network model proposed by P. Alvarez and L. Squire designed to incorporate some of the known features of consolidation, and propose several possible experiments intended to help evaluate the performance of this model under more realistic conditions. Finally, we implement an extended version of the model that can accommodate varying assumptions about the number of areas and connections within the brain and memory capacity, and examine the performance of our model on Alvarez and Squire's original task. ",
"neighbors": [
146
],
"mask": "Train"
},
{
"node_id": 18,
"label": 2,
"text": "Title: Topography And Ocular Dominance: A Model Exploring Positive Correlations \nAbstract: The map from eye to brain in vertebrates is topographic, i.e. neighbouring points in the eye map to neighbouring points in the brain. In addition, when two eyes innervate the same target structure, the two sets of fibres segregate to form ocular dominance stripes. Experimental evidence from the frog and goldfish suggests that these two phenomena may be subserved by the same mechanisms. We present a computational model that addresses the formation of both topography and ocular dominance. The model is based on a form of competitive learning with subtractive enforcement of a weight normalization rule. Inputs to the model are distributed patterns of activity presented simultaneously in both eyes. An important aspect of this model is that ocular dominance segregation can occur when the two eyes are positively correlated, whereas previous models have tended to assume zero or negative correlations between the eyes. This allows investigation of the dependence of the pattern of stripes on the degree of correlation between the eyes: we find that increasing correlation leads to narrower stripes. Experiments are suggested to test this prediction.",
"neighbors": [
127,
427,
745,
747,
866,
890,
1932
],
"mask": "Train"
},
{
"node_id": 19,
"label": 2,
"text": "Title: Validation of Average Error Rate Over Classifiers \nAbstract: We examine methods to estimate the average and variance of test error rates over a set of classifiers. We begin with the process of drawing a classifier at random for each example. Given validation data, the average test error rate can be estimated as if validating a single classifier. Given the test example inputs, the variance can be computed exactly. Next, we consider the process of drawing a classifier at random and using it on all examples. Once again, the expected test error rate can be validated as if validating a single classifier. However, the variance must be estimated by validating all classifers, which yields loose or uncertain bounds. ",
"neighbors": [
74,
571
],
"mask": "Train"
},
{
"node_id": 20,
"label": 6,
"text": "Title: 25 Learning in Hybrid Noise Environments Using Statistical Queries \nAbstract: We consider formal models of learning from noisy data. Specifically, we focus on learning in the probability approximately correct model as defined by Valiant. Two of the most widely studied models of noise in this setting have been classification noise and malicious errors. However, a more realistic model combining the two types of noise has not been formalized. We define a learning environment based on a natural combination of these two noise models. We first show that hypothesis testing is possible in this model. We next describe a simple technique for learning in this model, and then describe a more powerful technique based on statistical query learning. We show that the noise tolerance of this improved technique is roughly optimal with respect to the desired learning accuracy and that it provides a smooth tradeoff between the tolerable amounts of the two types of noise. Finally, we show that statistical query simulation yields learning algorithms for other combinations of noise models, thus demonstrating that statistical query specification truly An important goal of research in machine learning is to determine which tasks can be automated, and for those which can, to determine their information and computation requirements. One way to answer these questions is through the development and investigation of formal models of machine learning which capture the task of learning under plausible assumptions. In this work, we consider the formal model of learning from examples called \"probably approximately correct\" (PAC) learning as defined by Valiant [Val84]. In this setting, a learner attempts to approximate an unknown target concept simply by viewing positive and negative examples of the concept. An adversary chooses, from some specified function class, a hidden f0; 1g-valued target function defined over some specified domain of examples and chooses a probability distribution over this domain. The goal of the learner is to output in both polynomial time and with high probability, an hypothesis which is \"close\" to the target function with respect to the distribution of examples. The learner gains information about the target function and distribution by interacting with an example oracle. At each request by the learner, this oracle draws an example randomly according to the hidden distribution, labels it according to the hidden target function, and returns the labelled example to the learner. A class of functions F is said to be PAC learnable if captures the generic fault tolerance of a learning algorithm.",
"neighbors": [
25,
267,
334,
640,
732
],
"mask": "Train"
},
{
"node_id": 21,
"label": 4,
"text": "Title: Decision Tree Function Approximation in Reinforcement Learning \nAbstract: We present a decision tree based approach to function approximation in reinforcement learning. We compare our approach with table lookup and a neural network function approximator on three problems: the well known mountain car and pole balance problems as well as a simulated automobile race car. We find that the decision tree can provide better learning performance than the neural network function approximation and can solve large problems that are infeasible using table lookup.",
"neighbors": [
294,
438,
567,
1378
],
"mask": "Test"
},
{
"node_id": 22,
"label": 1,
"text": "Title: Discovering Complex Othello Strategies Through Evolutionary Neural Networks \nAbstract: An approach to develop new game playing strategies based on artificial evolution of neural networks is presented. Evolution was directed to discover strategies in Othello against a random-moving opponent and later against an ff-fi search program. The networks discovered first a standard positional strategy, and subsequently a mobility strategy, an advanced strategy rarely seen outside of tournaments. The latter discovery demonstrates how evolutionary neural networks can develop novel solutions by turning an initial disadvantage into an advantage in a changed environment. ",
"neighbors": [
129,
163,
191,
1790,
2257
],
"mask": "Train"
},
{
"node_id": 23,
"label": 3,
"text": "Title: Applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses \nAbstract: Technical Report No. 670 December, 1997 ",
"neighbors": [
41,
759
],
"mask": "Train"
},
{
"node_id": 24,
"label": 4,
"text": "Title: The Role of Transfer in Learning (extended abstract) \nAbstract: Technical Report No. 670 December, 1997 ",
"neighbors": [
39,
269
],
"mask": "Test"
},
{
"node_id": 25,
"label": 6,
"text": "Title: General Bounds on Statistical Query Learning and PAC Learning with Noise via Hypothesis Boosting \nAbstract: We derive general bounds on the complexity of learning in the Statistical Query model and in the PAC model with classification noise. We do so by considering the problem of boosting the accuracy of weak learning algorithms which fall within the Statistical Query model. This new model was introduced by Kearns [12] to provide a general framework for efficient PAC learning in the presence of classification noise. We first show a general scheme for boosting the accuracy of weak SQ learning algorithms, proving that weak SQ learning is equivalent to strong SQ learning. The boosting is efficient and is used to show our main result of the first general upper bounds on the complexity of strong SQ learning. Specifically, we derive simultaneous upper bounds with respect to * on the number of queries, O(log 2 1 * ), the Vapnik-Chervonenkis dimension of the query space, O(log 1 * ), and the inverse of the minimum tolerance, O( 1 * log 1 * ). In addition, we show that these general upper bounds are nearly optimal by describing a class of learning problems for which we simultaneously lower bound the number of queries by (log 1 * ) We further apply our boosting results in the SQ model to learning in the PAC model with classification noise. Since nearly all PAC learning algorithms can be cast in the SQ model, we can apply our boosting techniques to convert these PAC algorithms into highly efficient SQ algorithms. By simulating these efficient SQ algorithms in the PAC model with classification noise, we show that nearly all PAC algorithms can be converted into highly efficient PAC algorithms which tolerate classification noise. We give an upper bound on the sample complexity of these noise-tolerant PAC algorithms which is nearly optimal with respect to the noise rate. We also give upper bounds on space complexity and hypothesis size and show that these two measures are in fact independent of the noise rate. We note that the running times of these noise-tolerant PAC algorithms are efficient. This sequence of simulations also demonstrates that it is possible to boost the accuracy of nearly all PAC algorithms even in the presence of noise. This provides a partial answer to an open problem of Schapire [15] and the first theoretical evidence for an empirical result of Drucker, Schapire and Simard [4]. ",
"neighbors": [
20,
456,
778,
1181,
1897
],
"mask": "Test"
},
{
"node_id": 26,
"label": 2,
"text": "Title: Neural Network Applicability: Classifying the Problem Space \nAbstract: The tremendous current effort to propose neurally inspired methods of computation forces closer scrutiny of real world application potential of these models. This paper categorizes applications into classes and particularly discusses features of applications which make them efficiently amenable to neural network methods. Computational machines do deterministic mappings of inputs to outputs and many computational mechanisms have been proposed for problem solutions. Neural network features include parallel execution, adaptive learning, generalization, and fault tolerance. Often, much effort is given to a model and applications which can already be implemented in a much more efficient way with an alternate technology. Neural networks are potentially powerful devices for many classes of applications, but not all. However, it is proposed that the class of applications for which neural networks are efficient is both large and commonly occurring in nature. Comparison of supervised, unsupervised, and generalizing systems is also included. ",
"neighbors": [
747,
1129,
2612
],
"mask": "Test"
},
{
"node_id": 27,
"label": 3,
"text": "Title: Formal Rules for Selecting Prior Distributions: A Review and Annotated Bibliography \nAbstract: Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet, in practice, most Bayesian analyses are performed with so-called \"noninfor-mative\" priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors, and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his point of view about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly; when sample sizes are small (relative to the number of parameters being estimated) it is dangerous to put faith in any \"default\" solution; but when asymptotics take over, Jeffreys's rules and their variants remain reasonable choices. We also provide an annotated bibliography. fl Robert E. Kass is Professor and Larry Wasserman is Associate Professor, Department of Statistics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213-2717. The work of both authors was supported by NSF grant DMS-9005858 and NIH grant R01-CA54852-01. The authors thank Nick Polson for helping with a few annotations, and Jim Berger, Teddy Seidenfeld and Arnold Zellner for useful comments and discussion. ",
"neighbors": [
84,
532
],
"mask": "Train"
},
{
"node_id": 28,
"label": 2,
"text": "Title: A Delay Damage Model Selection Algorithm for NARX Neural Networks \nAbstract: Recurrent neural networks have become popular models for system identification and time series prediction. NARX (Nonlinear AutoRegressive models with eXogenous inputs) neural network models are a popular subclass of recurrent networks and have been used in many applications. Though embedded memory can be found in all recurrent network models, it is particularly prominent in NARX models. We show that using intelligent memory order selection through pruning and good initial heuristics significantly improves the generalization and predictive performance of these nonlinear systems on problems as diverse as grammatical inference and time series prediction. ",
"neighbors": [
586,
611,
1606,
1718
],
"mask": "Validation"
},
{
"node_id": 29,
"label": 5,
"text": "Title: Stochastically Guided Disjunctive Version Space Learning \nAbstract: This paper presents an incremental concept learning approach to identiflcation of concepts with high overall accuracy. The main idea is to address the concept overlap as a central problem when learning multiple descriptions. Many traditional inductive algorithms, as those from the disjunctive version space family considered here, face this problem. The approach focuses on combinations of confldent, possibly overlapping, concepts with an original stochastic complexity formula. The focusing is e-cient because it is organized as a simulated annealing-based beam search. The experiments show that the approach is especially suitable for developing incremental learning algorithms with the following advantages: flrst, it generates highly accurate concepts; second, it overcomes to a certain degree the sensitivity to the order of examples; and third, it handles noisy examples. ",
"neighbors": [
319,
382,
426,
429
],
"mask": "Train"
},
{
"node_id": 30,
"label": 0,
"text": "Title: Towards More Creative Case-Based Design Systems \nAbstract: Case-based reasoning (CBR) has a great deal to offer in supporting creative design, particularly processes that rely heavily on previous design experience, such as framing the problem and evaluating design alternatives. However, most existing CBR systems are not living up to their potential. They tend to adapt and reuse old solutions in routine ways, producing robust but uninspired results. Little research effort has been directed towards the kinds of situation assessment, evaluation, and assimilation processes that facilitate the exploration of ideas and the elaboration and redefinition of problems that are crucial to creative design. Also, their typically rigid control structures do not facilitate the kinds of strategic control and opportunism inherent in creative reasoning. In this paper, we describe the types of behavior we would like case-based design systems to support, based on a study of designers working on a mechanical engineering problem. We show how the standard CBR framework should be extended and we describe an architecture we are developing to experiment with these ideas. 1 ",
"neighbors": [
231,
285,
679,
1148,
2276
],
"mask": "Train"
},
{
"node_id": 31,
"label": 2,
"text": "Title: GIBBS-MARKOV MODELS \nAbstract: In this paper we present a framework for building probabilistic automata parameterized by context-dependent probabilities. Gibbs distributions are used to model state transitions and output generation, and parameter estimation is carried out using an EM algorithm where the M-step uses a generalized iterative scaling procedure. We discuss relations with certain classes of stochastic feedforward neural networks, a geometric interpretation for parameter estimation, and a simple example of a statistical language model constructed using this methodology. ",
"neighbors": [
14,
250,
1116
],
"mask": "Train"
},
{
"node_id": 32,
"label": 0,
"text": "Title: Design by Interactive Exploration Using Memory-Based Techniques \nAbstract: One of the characteristics of design is that designers rely extensively on past experience in order to create new designs. Because of this, memory-based techniques from artificial intelligence, which help store, organise, retrieve, and reuse experiential knowledge held in memory, are good candidates for aiding designers. Another characteristic of design is the phenomenon of exploration in the early stages of design configuration. A designer begins with an ill-structured, partially defined, problem specification, and through a process of exploration gradually refines and modifies it as his/her understanding of the problem improves. In this paper we describe demex, an interactive computer-aided design system that employs memory-based techniques to help its users explore the design problems they pose to the system, in order to acquire a better understanding of the requirements of the problems. demex has been applied in the domain of structural design of buildings. ",
"neighbors": [
679
],
"mask": "Validation"
},
{
"node_id": 33,
"label": 2,
"text": "Title: Learning Generative Models with the Up-Propagation Algorithm \nAbstract: Up-propagation is an algorithm for inverting and learning neural network generative models. Sensory input is processed by inverting a model that generates patterns from hidden variables using top-down connections. The inversion process is iterative, utilizing a negative feedback loop that depends on an error signal propagated by bottom-up connections. The error signal is also used to learn the generative model from examples. The algorithm is benchmarked against principal component analysis in In his doctrine of unconscious inference, Helmholtz argued that perceptions are formed by the interaction of bottom-up sensory data with top-down expectations. According to one interpretation of this doctrine, perception is a procedure of sequential hypothesis testing. We propose a new algorithm, called up-propagation, that realizes this interpretation in layered neural networks. It uses top-down connections to generate hypotheses, and bottom-up connections to revise them. It is important to understand the difference between up-propagation and its ancestor, the backpropagation algorithm[1]. Backpropagation is a learning algorithm for recognition models. As shown in Figure 1a, bottom-up connections recognize patterns, while top-down connections propagate an error signal that is used to learn the recognition model. In contrast, up-propagation is an algorithm for inverting and learning generative models, as shown in Figure 1b. Top-down connections generate patterns from a set of hidden variables. Sensory input is processed by inverting the generative model, recovering hidden variables that could have generated the sensory data. This operation is called either pattern recognition or pattern analysis, depending on the meaning of the hidden variables. Inversion of the generative model is done iteratively, through a negative feedback loop driven by an error signal from the bottom-up connections. The error signal is also used for learning the connections experiments on images of handwritten digits.",
"neighbors": [
250,
1591,
1701
],
"mask": "Train"
},
{
"node_id": 34,
"label": 4,
"text": "Title: Using a Case Base of Surfaces to Speed-Up Reinforcement Learning \nAbstract: This paper demonstrates the exploitation of certain vision processing techniques to index into a case base of surfaces. The surfaces are the result of reinforcement learning and represent the optimum choice of actions to achieve some goal from anywhere in the state space. This paper shows how strong features that occur in the interaction of the system with its environment can be detected early in the learning process. Such features allow the system to identify when an identical, or very similar, task has been solved previously and to retrieve the relevant surface. This results in an orders of magnitude increase in learning rate. ",
"neighbors": [
35,
66,
559,
565,
566
],
"mask": "Train"
},
{
"node_id": 35,
"label": 4,
"text": "Title: A Teaching Strategy for Memory-Based Control \nAbstract: Combining different machine learning algorithms in the same system can produce benefits above and beyond what either method could achieve alone. This paper demonstrates that genetic algorithms can be used in conjunction with lazy learning to solve examples of a difficult class of delayed reinforcement learning problems better than either method alone. This class, the class of differential games, includes numerous important control problems that arise in robotics, planning, game playing, and other areas, and solutions for differential games suggest solution strategies for the general class of planning and control problems. We conducted a series of experiments applying three learning approaches|lazy Q-learning, k-nearest neighbor (k-NN), and a genetic algorithm|to a particular differential game called a pursuit game. Our experiments demonstrate that k-NN had great difficulty solving the problem, while a lazy version of Q-learning performed moderately well and the genetic algorithm performed even better. These results motivated the next step in the experiments, where we hypothesized k-NN was having difficulty because it did not have good examples-a common source of difficulty for lazy learning. Therefore, we used the genetic algorithm as a bootstrapping method for k-NN to create a system to provide these examples. Our experiments demonstrate that the resulting joint system learned to solve the pursuit games with a high degree of accuracy-outperforming either method alone-and with relatively small memory requirements.",
"neighbors": [
34
],
"mask": "Train"
},
{
"node_id": 36,
"label": 2,
"text": "Title: Generative Models for Discovering Sparse Distributed Representations \nAbstract: We describe a hierarchical, generative model that can be viewed as a non-linear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demon strate that the network learns to extract sparse, distributed, hierarchical representations.",
"neighbors": [
257,
745,
1066,
1591,
1701,
1933,
1974,
2072,
2390
],
"mask": "Train"
},
{
"node_id": 37,
"label": 4,
"text": "Title: Hierarchical Evolution of Neural Networks \nAbstract: In most applications of neuro-evolution, each individual in the population represents a complete neural network. Recent work on the SANE system, however, has demonstrated that evolving individual neurons often produces a more efficient genetic search. This paper demonstrates that while SANE can solve easy tasks very quickly, it often stalls in larger problems. A hierarchical approach to neuro-evolution is presented that overcomes SANE's difficulties by integrating both a neuron-level exploratory search and a network-level exploitive search. In a robot arm manipulation task, the hierarchical approach outperforms both a neuron-based search and a network-based search. ",
"neighbors": [
563
],
"mask": "Train"
},
{
"node_id": 38,
"label": 1,
"text": "Title: HOW TO EVOLVE AUTONOMOUS ROBOTS: DIFFERENT APPROACHES IN EVOLUTIONARY ROBOTICS \nAbstract: In most applications of neuro-evolution, each individual in the population represents a complete neural network. Recent work on the SANE system, however, has demonstrated that evolving individual neurons often produces a more efficient genetic search. This paper demonstrates that while SANE can solve easy tasks very quickly, it often stalls in larger problems. A hierarchical approach to neuro-evolution is presented that overcomes SANE's difficulties by integrating both a neuron-level exploratory search and a network-level exploitive search. In a robot arm manipulation task, the hierarchical approach outperforms both a neuron-based search and a network-based search. ",
"neighbors": [
219,
273,
372,
563,
1689
],
"mask": "Validation"
},
{
"node_id": 39,
"label": 4,
"text": "Title: Finding Structure in Reinforcement Learning \nAbstract: Reinforcement learning addresses the problem of learning to select actions in order to maximize one's performance in unknown environments. To scale reinforcement learning to complex real-world tasks, such as typically studied in AI, one must ultimately be able to discover the structure in the world, in order to abstract away the myriad of details and to operate in more tractable problem spaces. This paper presents the SKILLS algorithm. SKILLS discovers skills, which are partially defined action policies that arise in the context of multiple, related tasks. Skills collapse whole action sequences into single operators. They are learned by minimizing the compactness of action policies, using a description length argument on their representation. Empirical results in simple grid navigation tasks illustrate the successful discovery of structure in reinforcement learning.",
"neighbors": [
24,
82
],
"mask": "Train"
},
{
"node_id": 40,
"label": 6,
"text": "Title: On-line Learning with Linear Loss Constraints \nAbstract: We consider a generalization of the mistake-bound model (for learning f0; 1g-valued functions) in which the learner must satisfy a general constraint on the number M + of incorrect 1 predictions and the number M of incorrect 0 predictions. We describe a general-purpose optimal algorithm for our formulation of this problem. We describe several applications of our general results, involving situations in which the learner wishes to satisfy linear inequalities in M + and M .",
"neighbors": [
761
],
"mask": "Train"
},
{
"node_id": 41,
"label": 3,
"text": "Title: Markov Chain Monte Carlo Convergence Diagnostics: A Comparative Review \nAbstract: A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution. Mary Kathryn Cowles is Assistant Professor of Biostatistics, Harvard School of Public Health, Boston, MA 02115. Bradley P. Carlin is Associate Professor, Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN 55455. Much of the work was done while the first author was a graduate student in the Divison of Biostatistics at the University of Minnesota and then Assistant Professor, Biostatistics Section, Department of Preventive and Societal Medicine, University of Nebraska Medical Center, Omaha, NE 68198. The work of both authors was supported in part by National Institute of Allergy and Infectious Diseases FIRST Award 1-R29-AI33466. The authors thank the developers of the diagnostics studied here for sharing their insights, experiences, and software, and Drs. Thomas Louis and Luke Tierney for helpful discussions and suggestions which greatly improved the manuscript. ",
"neighbors": [
23,
94,
115,
352,
533,
725,
888,
889,
892,
1713,
1716,
1733,
1982,
2456
],
"mask": "Train"
},
{
"node_id": 42,
"label": 1,
"text": "Title: Evolutionary Module Acquisition \nAbstract: A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise for the future but currently has yielded relatively little that is of practical use in applied work. Consequently, most MCMC users address the convergence problem by applying diagnostic tools to the output produced by running their samplers. After giving a brief overview of the area, we provide an expository review of thirteen convergence diagnostics, describing the theoretical basis and practical implementation of each. We then compare their performance in two simple models and conclude that all the methods can fail to detect the sorts of convergence failure they were designed to identify. We thus recommend a combination of strategies aimed at evaluating and accelerating MCMC sampler convergence, including applying diagnostic procedures to a small number of parallel chains, monitoring autocorrelations and cross-correlations, and modifying parameterizations or sampling algorithms appropriately. We emphasize, however, that it is not possible to say with certainty that a finite sample from an MCMC algorithm is representative of an underlying stationary distribution. Mary Kathryn Cowles is Assistant Professor of Biostatistics, Harvard School of Public Health, Boston, MA 02115. Bradley P. Carlin is Associate Professor, Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN 55455. Much of the work was done while the first author was a graduate student in the Divison of Biostatistics at the University of Minnesota and then Assistant Professor, Biostatistics Section, Department of Preventive and Societal Medicine, University of Nebraska Medical Center, Omaha, NE 68198. The work of both authors was supported in part by National Institute of Allergy and Infectious Diseases FIRST Award 1-R29-AI33466. The authors thank the developers of the diagnostics studied here for sharing their insights, experiences, and software, and Drs. Thomas Louis and Luke Tierney for helpful discussions and suggestions which greatly improved the manuscript. ",
"neighbors": [
163,
188,
189,
793
],
"mask": "Train"
},
{
"node_id": 43,
"label": 2,
"text": "Title: Competitive Anti-Hebbian Learning of Invariants \nAbstract: Although the detection of invariant structure in a given set of input patterns is vital to many recognition tasks, connectionist learning rules tend to focus on directions of high variance (principal components). The prediction paradigm is often used to reconcile this dichotomy; here we suggest a more direct approach to invariant learning based on an anti-Hebbian learning rule. An unsupervised two-layer network implementing this method in a competitive setting learns to extract coherent depth information from random-dot stereograms.",
"neighbors": [
576,
747
],
"mask": "Train"
},
{
"node_id": 44,
"label": 0,
"text": "Title: Competitive Anti-Hebbian Learning of Invariants \nAbstract: Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same exibility as a conventional linear search, but at greatly reduced computational cost.",
"neighbors": [
88,
686,
2428
],
"mask": "Validation"
},
{
"node_id": 45,
"label": 4,
"text": "Title: Acting under Uncertainty: Discrete Bayesian Models for Mobile-Robot Navigation \nAbstract: Discrete Bayesian models have been used to model uncertainty for mobile-robot navigation, but the question of how actions should be chosen remains largely unexplored. This paper presents the optimal solution to the problem, formulated as a partially observable Markov decision process. Since solving for the optimal control policy is intractable, in general, it goes on to explore a variety of heuristic control strategies. The control strategies are compared experimentally, both in simulation and in runs on a robot. ",
"neighbors": [
220,
490,
492,
1459
],
"mask": "Train"
},
{
"node_id": 46,
"label": 2,
"text": "Title: The Pandemonium System of Reflective Agents \nAbstract: In IEEE Transactions on Neural Networks, 7(1):97-106, 1996 Also available as GMD report #794 ",
"neighbors": [
238,
301,
489,
867
],
"mask": "Validation"
},
{
"node_id": 47,
"label": 0,
"text": "Title: An Implementation and Experiment with the Nested Generalized Exemplars Algorithm \nAbstract: This NRL NCARAI technical note (AIC-95-003) describes work with Salzberg's (1991) NGE. I recently implemented this algorithm and have run a few case studies. The purpose of this note is to publicize this implementation and note a curious result while using it. This implementation of NGE is available at under my WWW address ",
"neighbors": [
87
],
"mask": "Train"
},
{
"node_id": 48,
"label": 3,
"text": "Title: Sampling from Multimodal Distributions Using Tempered Transitions \nAbstract: Technical Report No. 9421, Department of Statistics, University of Toronto Abstract. I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently-developed method of \"simulated tempering\", the \"tempered transition\" method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that unfortunately is cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are \"deceptive\". ",
"neighbors": [
725,
1783,
2456,
2682
],
"mask": "Train"
},
{
"node_id": 49,
"label": 0,
"text": "Title: Abstract \nAbstract: We describe an ongoing project to develop an adaptive training system (ATS) that dynamically models a students learning processes and can provide specialized tutoring adapted to a students knowledge state and learning style. The student modeling component of the ATS, ML-Modeler, uses machine learning (ML) techniques to emulate the students novice-to-expert transition. ML-Modeler infers which learning methods the student has used to reach the current knowledge state by comparing the students solution trace to an expert solution and generating plausible hypotheses about what misconceptions and errors the student has made. A case-based approach is used to generate hypotheses through incorrectly applying analogy, overgeneralization, and overspecialization. The student and expert models use a network-based representation that includes abstract concepts and relationships as well as strategies for problem solving. Fuzzy methods are used to represent the uncertainty in the student model. This paper describes the design of the ATS and ML-Modeler, and gives a detailed example of how the system would model and tutor the student in a typical session. The domain we use for this example is high-school level chemistry. ",
"neighbors": [
581,
643
],
"mask": "Validation"
},
{
"node_id": 50,
"label": 0,
"text": "Title: Abstract \nAbstract: Metacognition addresses the issues of knowledge about cognition and regulating cognition. We argue that the regulation process should be improved with growing experience. Therefore mental models are needed which facilitate the re-use of previous regulation processes. We will satisfy this requirement by describing a case-based approach to Introspection Planning which utilises previous experience obtained during reasoning at the meta-level and at the object level. The introspection plans used in this approach support various metacognitive tasks which are identified by the generation of self-questions. As an example of introspection planning, the metacognitive behaviour of our system, IULIAN, is described. ",
"neighbors": [
150,
581,
583,
643
],
"mask": "Train"
},
{
"node_id": 51,
"label": 3,
"text": "Title: An Alternative Markov Property for Chain Graphs \nAbstract: Graphical Markov models use graphs, either undirected, directed, or mixed, to represent possible dependences among statistical variables. Applications of undirected graphs (UDGs) include models for spatial dependence and image analysis, while acyclic directed graphs (ADGs), which are especially convenient for statistical analysis, arise in such fields as genetics and psychometrics and as models for expert systems and Bayesian belief networks. Lauritzen, Wer-muth, and Frydenberg (LWF) introduced a Markov property for chain graphs, which are mixed graphs that can be used to represent simultaneously both causal and associative dependencies and which include both UDGs and ADGs as special cases. In this paper an alternative Markov property (AMP) for chain graphs is introduced, which in some ways is a more direct extension of the ADG Markov property than is the LWF property for chain graph.",
"neighbors": [
645,
772,
2559
],
"mask": "Validation"
},
{
"node_id": 52,
"label": 6,
"text": "Title: Theory Revision in Fault Hierarchies \nAbstract: The fault hierarchy representation is widely used in expert systems for the diagnosis of complex mechanical devices. On the assumption that an appropriate bias for a knowledge representation language is also an appropriate bias for learning in this domain, we have developed a theory revision method that operates directly on a fault hierarchy. This task presents several challenges: A typical training instance is missing most feature values, and the pattern of missing features is significant, rather than merely an effect of noise. Moreover, the accuracy of a candidate theory is measured by considering both the sequence of tests required to arrive at a diagnosis and its agreement with the diagnostic endpoints provided by an expert. This paper first describes the algorithm for theory revision of fault hierarchies that was designed to address these challenges, then discusses its application in knowledge base maintenance and reports on experiments that use to revise a fielded diagnostic system. ",
"neighbors": [
228,
430,
478,
2487,
2580
],
"mask": "Train"
},
{
"node_id": 53,
"label": 1,
"text": "Title: DISTRIBUTED GENETIC ALGORITHMS FOR PARTITIONING UNIFORM GRIDS \nAbstract: The fault hierarchy representation is widely used in expert systems for the diagnosis of complex mechanical devices. On the assumption that an appropriate bias for a knowledge representation language is also an appropriate bias for learning in this domain, we have developed a theory revision method that operates directly on a fault hierarchy. This task presents several challenges: A typical training instance is missing most feature values, and the pattern of missing features is significant, rather than merely an effect of noise. Moreover, the accuracy of a candidate theory is measured by considering both the sequence of tests required to arrive at a diagnosis and its agreement with the diagnostic endpoints provided by an expert. This paper first describes the algorithm for theory revision of fault hierarchies that was designed to address these challenges, then discusses its application in knowledge base maintenance and reports on experiments that use to revise a fielded diagnostic system. ",
"neighbors": [
243,
803,
1439,
1563
],
"mask": "Train"
},
{
"node_id": 54,
"label": 6,
"text": "Title: A Competitive Approach to Game Learning \nAbstract: Machine learning of game strategies has often depended on competitive methods that continually develop new strategies capable of defeating previous ones. We use a very inclusive definition of game and consider a framework within which a competitive algorithm makes repeated use of a strategy learning component that can learn strategies which defeat a given set of opponents. We describe game learning in terms of sets H and X of first and second player strategies, and connect the model with more familiar models of concept learning. We show the importance of the ideas of teaching set [20] and specification number [19] k in this new context. The performance of several competitive algorithms is investigated, using both worst-case and randomized strategy learning algorithms. Our central result (Theorem 4) is a competitive algorithm that solves games in a total number of strategies polynomial in lg(jHj), lg(jX j), and k. Its use is demonstrated, including an application in concept learning with a new kind of counterexample oracle. We conclude with a complexity analysis of game learning, and list a number of new questions arising from this work. ",
"neighbors": [
523,
615,
712,
1687,
2334
],
"mask": "Train"
},
{
"node_id": 55,
"label": 1,
"text": "Title: A Comparison of Selection Schemes used in Genetic Algorithms \nAbstract: TIK-Report Nr. 11, December 1995 Version 2 (2. Edition) ",
"neighbors": [
163,
361,
844,
1784,
1832,
1905
],
"mask": "Train"
},
{
"node_id": 56,
"label": 6,
"text": "Title: Self bounding learning algorithms \nAbstract: Most of the work which attempts to give bounds on the generalization error of the hypothesis generated by a learning algorithm is based on methods from the theory of uniform convergence. These bounds are a-priori bounds that hold for any distribution of examples and are calculated before any data is observed. In this paper we propose a different approach for bounding the generalization error after the data has been observed. A self-bounding learning algorithm is an algorithm which, in addition to the hypothesis that it outputs, outputs a reliable upper bound on the generalization error of this hypothesis. We first explore the idea in the statistical query learning framework of Kearns [10]. After that we give an explicit self bounding algorithm for learning algorithms that are based on local search.",
"neighbors": [
778,
967,
1027
],
"mask": "Train"
},
{
"node_id": 57,
"label": 4,
"text": "Title: Markov Decision Processes in Large State Spaces \nAbstract: In this paper we propose a new framework for studying Markov decision processes (MDPs), based on ideas from statistical mechanics. The goal of learning in MDPs is to find a policy that yields the maximum expected return over time. In choosing policies, agents must therefore weigh the prospects of short-term versus long-term gains. We study a simple MDP in which the agent must constantly decide between exploratory jumps and local reward mining in state space. The number of policies to choose from grows exponentially with the size of the state space, N . We view the expected returns as defining an energy landscape over policy space. Methods from statistical mechanics are used to analyze this landscape in the thermodynamic limit N ! 1. We calculate the overall distribution of expected returns, as well as the distribution of returns for policies at a fixed Hamming distance from the optimal one. We briefly discuss the problem of learning optimal policies from empirical estimates of the expected return. As a first step, we relate our findings for the entropy to the limit of high-temperature learning. Numerical simulations support the theoretical results. ",
"neighbors": [
306,
552,
565,
967,
1459
],
"mask": "Train"
},
{
"node_id": 58,
"label": 2,
"text": "Title: Neural Networks with Quadratic VC Dimension \nAbstract: This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles a long-standing open question, namely whether the well-known O(w log w) bound, known for hard-threshold nets, also held for more general sigmoidal nets. Implications for the number of samples needed for valid generalization are discussed. ",
"neighbors": [
536,
990,
1891,
2495
],
"mask": "Train"
},
{
"node_id": 59,
"label": 2,
"text": "Title: SELF-ADAPTIVE NEURAL NETWORKS FOR BLIND SEPARATION OF SOURCES \nAbstract: Novel on-line learning algorithms with self adaptive learning rates (parameters) for blind separation of signals are proposed. The main motivation for development of new learning rules is to improve convergence speed and to reduce cross-talking, especially for non-stationary signals. Furthermore, we have discovered that under some conditions the proposed neural network models with associated learning algorithms exhibit a random switch of attention, i.e. they have ability of chaotic or random switching or cross-over of output signals in such way that a specified separated signal may appear at various outputs at different time windows. Validity, performance and dynamic properties of the proposed learning algorithms are investigated by computer simulation experiments. ",
"neighbors": [
570,
576,
839,
872,
874,
1520,
1709
],
"mask": "Validation"
},
{
"node_id": 60,
"label": 4,
"text": "Title: The Efficient Learning of Multiple Task Sequences \nAbstract: I present a modular network architecture and a learning algorithm based on incremental dynamic programming that allows a single learning agent to learn to solve multiple Markovian decision tasks (MDTs) with significant transfer of learning across the tasks. I consider a class of MDTs, called composite tasks, formed by temporally concatenating a number of simpler, elemental MDTs. The architecture is trained on a set of composite and elemental MDTs. The temporal structure of a composite task is assumed to be unknown and the architecture learns to produce a temporal decomposition. It is shown that under certain conditions the solution of a composite MDT can be constructed by computationally inexpensive modifications of the solutions of its constituent elemental MDTs.",
"neighbors": [
252,
552,
562,
565,
688
],
"mask": "Train"
},
{
"node_id": 61,
"label": 0,
"text": "Title: Program Synthesis and Transformation Techniques for Simpuation, Optimization and Constraint Satisfaction Deductive Synthesis of Numerical\nAbstract: Scientists and engineers face recurring problems of constructing, testing and modifying numerical simulation programs. The process of coding and revising such simulators is extremely time-consuming, because they are almost always written in conventional programming languages. Scientists and engineers can therefore benefit from software that facilitates construction of programs for simulating physical systems. Our research adapts the methodology of deductive program synthesis to the problem of constructing numerical simulation codes. We have focused on simulators that can be represented as second order functional programs composed of numerical integration and root extraction routines. We have developed a system that uses first order Horn logic to synthesize numerical simulators built from these components. Our approach is based on two ideas: First, we axiomatize only the relationship between integration and differentiation. We neither attempt nor require a complete axiomatization of mathematical analysis. Second, our system uses a representation in which functions are reified as objects. Function objects are encoded as lambda expressions. Our knowledge base includes an axiomatization of term equality in the lambda calculus. It also includes axioms defining the semantics of numerical integration and root extraction routines. We use depth bounded SLD resolution to construct proofs and synthesize programs. Our system has successfully constructed numerical simulators for computational design of jet engine nozzles and sailing yachts, among others. Our results demonstrate that deductive synthesis techniques can be used to construct numerical simulation programs for realistic applications (Ellman and Murata 1998). Automatic design optimization is highly sensitive to problem formulation. The choice of objective function, constraints and design parameters can dramatically impact the computational cost of optimization and the quality of the resulting design. The best formulation varies from one application to another. A design engineer will usually not know the best formulation in advance. In order to address this problem, we have developed a system that supports interactive formulation, testing and reformulation of design optimization strategies. Our system includes an executable, data-flow language for representing optimization strategies. The language allows an engineer to define multiple stages of optimization, each using different approximations of the objective and constraints or different abstractions of the design space. We have also developed a set of transformations that reformulate strategies represented in our language. The transformations can approximate objective and constraint functions, abstract or reparameterize search spaces, or divide an optimization process into multiple stages. The system is applicable in principle to any design problem that can be expressed in terms of constrained op ",
"neighbors": [
240
],
"mask": "Train"
},
{
"node_id": 62,
"label": 3,
"text": "Title: Context-Specific Independence in Bayesian Networks \nAbstract: Bayesiannetworks provide a languagefor qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation schemetree-structured CPTs for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning.",
"neighbors": [
324,
332,
423,
945,
2425,
2474,
2566
],
"mask": "Validation"
},
{
"node_id": 63,
"label": 4,
"text": "Title: Machine Learning, Reinforcement Learning with Replacing Eligibility Traces \nAbstract: The eligibility trace is one of the basic mechanisms used in reinforcement learning to handle delayed reward. In this paper we introduce a new kind of eligibility trace, the replacing trace, analyze it theoretically, and show that it results in faster, more reliable learning than the conventional trace. Both kinds of trace assign credit to prior events according to how recently they occurred, but only the conventional trace gives greater credit to repeated events. Our analysis is for conventional and replace-trace versions of the o*ine TD(1) algorithm applied to undiscounted absorbing Markov chains. First, we show that these methods converge under repeated presentations of the training set to the same predictions as two well known Monte Carlo methods. We then analyze the relative efficiency of the two Monte Carlo methods. We show that the method corresponding to conventional TD is biased, whereas the method corresponding to replace-trace TD is unbiased. In addition, we show that the method corresponding to replacing traces is closely related to the maximum likelihood solution for these tasks, and that its mean squared error is always lower in the long run. Computational results confirm these analyses and show that they are applicable more generally. In particular, we show that replacing traces significantly improve performance and reduce parameter sensitivity on the \"Mountain-Car\" task, a full reinforcement-learning problem with a continuous state space, when using a feature-based function approximator. ",
"neighbors": [
153,
210,
739,
1546
],
"mask": "Test"
},
{
"node_id": 64,
"label": 0,
"text": "Title: Integrating Creativity and Reading: A Functional Approach \nAbstract: Reading has been studied for decades by a variety of cognitive disciplines, yet no theories exist which sufficiently describe and explain how people accomplish the complete task of reading real-world texts. In particular, a type of knowledge intensive reading known as creative reading has been largely ignored by the past research. We argue that creative reading is an aspect of practically all reading experiences; as a result, any theory which overlooks this will be insufficient. We have built on results from psychology, artificial intelligence, and education in order to produce a functional theory of the complete reading process. The overall framework describes the set of tasks necessary for reading to be performed. Within this framework, we have developed a theory of creative reading. The theory is implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a reading system which reads science fiction stories. ",
"neighbors": [
284,
289,
486,
583
],
"mask": "Train"
},
{
"node_id": 65,
"label": 1,
"text": "Title: Integrating Creativity and Reading: A Functional Approach \nAbstract: dvitps ERROR: reno98b.dvi @ puccini.rutgers.edu Certain fonts that you requested in your dvi file could not be found on the system. In order to print your document, other fonts that are installed were substituted for these missing fonts. Below is a list of the substitutions that were made. /usr/local/lib/fonts/gf/cmbx12.518pk substituted for cmbx12.519pk ",
"neighbors": [
743,
744
],
"mask": "Train"
},
{
"node_id": 66,
"label": 0,
"text": "Title: (1994); Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. Case-Based Reasoning: Foundational Issues, Methodological\nAbstract: Case-based reasoning is a recent approach to problem solving and learning that has got a lot of attention over the last few years. Originating in the US, the basic idea and underlying theories have spread to other continents, and we are now within a period of highly active research in case-based reasoning in Europe, as well. This paper gives an overview of the foundational issues related to case-based reasoning, describes some of the leading methodological approaches within the field, and exemplifies the current state through pointers to some systems. Initially, a general framework is defined, to which the subsequent descriptions and discussions will refer. The framework is influenced by recent methodologies for knowledge level descriptions of intelligent systems. The methods for case retrieval, reuse, solution testing, and learning are summarized, and their actual realization is discussed in the light of a few example systems that represent different CBR approaches. We also discuss the role of case-based methods as one type of reasoning and learning method within an integrated system architecture. ",
"neighbors": [
34,
69,
149,
183,
215,
288,
1248,
1531,
1698,
2122,
2157,
2294,
2310,
2359,
2441,
2520
],
"mask": "Train"
},
{
"node_id": 67,
"label": 3,
"text": "Title: Updates and Counterfactuals \nAbstract: We study the problem of combining updates |a special instance of theory change| and counterfactual conditionals in propositional knowledgebases. Intuitively, an update means that the world described by the knowledgebase has changed. This is opposed to revisions |another instance of theory change| where our knowledge about a static world changes. A counterfactual implication is a statement of the form `If A were the case, then B would also be the case', where the negation of A may be derivable from our current knowledge. We present a decidable logic, called VCU 2 , that has both update and counterfactual implication as connectives in the object language. Our update operator is a generalization of operators previously proposed and studied in the literature. We show that our operator satisfies certain postulates set forth for any reasonable update. The logic VCU 2 is an extension of D. K. Lewis' logic VCU for counterfactual conditionals. The semantics of VCU 2 is that of a multimodal propositional calculus, and is based on possible worlds. The infamous Ramsey Rule becomes a derivation rule in our sound and complete axiomatization. We then show that Gardenfors' Triviality Theorem, about the impossibility to combine theory change and counterfactual conditionals via the Ramsey Rule, does not hold in our logic. It is thus seen that the Triviality Theorem applies only to revision operators, not to updates. fl A preliminary version of this paper was presented at the Second International Conference on Principles of Knowledge Representation and Reasoning, Cambridge, Massachusetts, April 22-25, 1991. The work was partially performed while the author was visiting the Department of Computer Science at the University of Toronto. ",
"neighbors": [
729
],
"mask": "Test"
},
{
"node_id": 68,
"label": 2,
"text": "Title: DISCOVERING NEURAL NETS WITH LOW KOLMOGOROV COMPLEXITY AND HIGH GENERALIZATION CAPABILITY Neural Networks 10(5):857-873, 1997 \nAbstract: Many neural net learning algorithms aim at finding \"simple\" nets to explain training data. The expectation is: the \"simpler\" the networks, the better the generalization on test data (! Occam's razor). Previous implementations, however, use measures for \"simplicity\" that lack the power, universality and elegance of those based on Kolmogorov complexity and Solomonoff's algorithmic probability. Likewise, most previous approaches (especially those of the \"Bayesian\" kind) suffer from the problem of choosing appropriate priors. This paper addresses both issues. It first reviews some basic concepts of algorithmic complexity theory relevant to machine learning, and how the Solomonoff-Levin distribution (or universal prior) deals with the prior problem. The universal prior leads to a probabilistic method for finding \"algorithmically simple\" problem solutions with high generalization capability. The method is based on Levin complexity (a time-bounded generalization of Kolmogorov complexity) and inspired by Levin's optimal universal search algorithm. For a given problem, solution candidates are computed by efficient \"self-sizing\" programs that influence their own runtime and storage size. The probabilistic search algorithm finds the \"good\" programs (the ones quickly computing algorithmically probable solutions fitting the training data). Simulations focus on the task of discovering \"algorithmically simple\" neural networks with low Kolmogorov complexity and high generalization capability. It is demonstrated that the method, at least with certain toy problems where it is computationally feasible, can lead to generalization results unmatchable by previous neural net algorithms. Much remains do be done, however, to make large scale applications and \"incremental learning\" feasible.",
"neighbors": [
522,
979,
1632,
1825,
1845,
1979,
2007
],
"mask": "Train"
},
{
"node_id": 69,
"label": 0,
"text": "Title: SaxEx a case-based reasoning system for generating expressive musical performances \nAbstract: We have studied the problem of generating expressive musical performances in the context of tenor saxophone interpretations. We have done several recordings of a tenor sax playing different Jazz ballads with different degrees of expressiveness including an inexpressive interpretation of each ballad. These recordings are analyzed, using SMS spectral modeling techniques, to extract information related to several expressive parameters. This set of parameters and the scores constitute the set of cases (examples) of a case-based system. From this set of cases, the system infers a set of possible expressive transformations for a given new phrase applying similarity criteria, based on background musical knowledge, between this new phrase and the set of cases. Finally, SaxEx applies the inferred expressive transformations to the new phrase using the synthesis capabilities of SMS.",
"neighbors": [
66
],
"mask": "Train"
},
{
"node_id": 70,
"label": 6,
"text": "Title: Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods \nAbstract: One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition. ",
"neighbors": [
255,
931,
999,
1521,
1692,
1986
],
"mask": "Train"
},
{
"node_id": 71,
"label": 3,
"text": "Title: Supervised learning from incomplete data via an EM approach \nAbstract: Real-world learning tasks may involve high-dimensional data sets with arbitrary patterns of missing data. In this paper we present a framework based on maximum likelihood density estimation for learning from such data sets. We use mixture models for the density estimates and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977) in deriving a learning algorithm|EM is used both for the estimation of mixture components and for coping with missing data. The resulting algorithm is applicable to a wide range of supervised as well as unsupervised learning problems. Results from a classification benchmark|the iris data set|are presented.",
"neighbors": [
74,
661,
677,
929,
1559,
1641,
1923,
2442
],
"mask": "Train"
},
{
"node_id": 72,
"label": 2,
"text": "Title: SCRIPT RECOGNITION WITH HIERARCHICAL FEATURE MAPS \nAbstract: The hierarchical feature map system recognizes an input story as an instance of a particular script by classifying it at three levels: scripts, tracks and role bindings. The recognition taxonomy, i.e. the breakdown of each script into the tracks and roles, is extracted automatically and independently for each script from examples of script instantiations in an unsupervised self-organizing process. The process resembles human learning in that the differentiation of the most frequently encountered scripts become gradually the most detailed. The resulting structure is a hierachical pyramid of feature maps. The hierarchy visualizes the taxonomy and the maps lay out the topology of each level. The number of input lines and the self-organization time are considerably reduced compared to the ordinary single-level feature mapping. The system can recognize incomplete stories and recover the missing events. The taxonomy also serves as memory organization for script-based episodic memory. The maps assign a unique memory location for each script instantiation. The most salient parts of the input data are separated and most resources are concentrated on representing them accurately. ",
"neighbors": [
202,
745,
747,
771
],
"mask": "Train"
},
{
"node_id": 73,
"label": 4,
"text": "Title: LEARNING TO GENERATE ARTIFICIAL FOVEA TRAJECTORIES FOR TARGET DETECTION \nAbstract: It is shown how `static' neural approaches to adaptive target detection can be replaced by a more efficient and more sequential alternative. The latter is inspired by the observation that biological systems employ sequential eye-movements for pattern recognition. A system is described which builds an adaptive model of the time-varying inputs of an artificial fovea controlled by an adaptive neural controller. The controller uses the adaptive model for learning the sequential generation of fovea trajectories causing the fovea to move to a target in a visual scene. The system also learns to track moving targets. No teacher provides the desired activations of `eye-muscles' at various times. The only goal information is the shape of the target. Since the task is a `reward-only-at-goal' task , it involves a complex temporal credit assignment problem. Some implications for adaptive attentive systems in general are discussed. ",
"neighbors": [
747
],
"mask": "Train"
},
{
"node_id": 74,
"label": 3,
"text": "Title: Hierarchical Mixtures of Experts and the EM Algorithm \nAbstract: We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. This report describes research done at the Dept. of Brain and Cognitive Sciences, the Center for Biological and Computational Learning, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for CBCL is provided in part by a grant from the NSF (ASC-9217041). Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense. The authors were supported by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by by grant IRI-9013991 from the National Science Foundation, by grant N00014-90-J-1942 from the Office of Naval Research, and by NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is a NSF Presidential Young Investigator. ",
"neighbors": [
19,
71,
154,
193,
252,
263,
310,
345,
377,
505,
511,
547,
604,
622,
661,
680,
787,
867,
871,
881,
906,
949,
975,
987,
1017,
1024,
1103,
1220,
1634,
1676,
1928,
2013,
2124,
2266,
2284,
2335,
2390,
2421,
2513,
2654
],
"mask": "Validation"
},
{
"node_id": 75,
"label": 0,
"text": "Title: A Memory Model for Case Retrieval by Activation Passing \nAbstract: We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain. This report describes research done at the Dept. of Brain and Cognitive Sciences, the Center for Biological and Computational Learning, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for CBCL is provided in part by a grant from the NSF (ASC-9217041). Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense. The authors were supported by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by by grant IRI-9013991 from the National Science Foundation, by grant N00014-90-J-1942 from the Office of Naval Research, and by NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is a NSF Presidential Young Investigator. ",
"neighbors": [
288,
1123,
1354,
1854,
1855,
2122,
2299
],
"mask": "Train"
},
{
"node_id": 76,
"label": 3,
"text": "Title: A VIEW OF THE EM ALGORITHM THAT JUSTIFIES INCREMENTAL, SPARSE, AND OTHER VARIANTS \nAbstract: The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the distribution over the unobserved variables. From this perspective, it is easy to justify an incremental variant of the EM algorithm in which the distribution for only one of the unobserved variables is recalculated in each E step. This variant is shown empirically to give faster convergence in a mixture estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible. ",
"neighbors": [
131,
181,
250,
345,
392,
518,
661,
694,
869,
975,
1128,
1548,
1934,
2327,
2390,
2532
],
"mask": "Train"
},
{
"node_id": 77,
"label": 2,
"text": "Title: Synchronization and Desynchronization in a Network of Locally Coupled Wilson-Cowan Oscillators \nAbstract: A network of Wilson-Cowan oscillators is constructed, and its emergent properties of synchronization and desynchronization are investigated by both computer simulation and formal analysis. The network is a two-dimensional matrix, where each oscillator is coupled only to its neighbors. We show analytically that a chain of locally coupled oscillators (the piece-wise linear approximation to the Wilson-Cowan oscillator) synchronizes, and present a technique to rapidly entrain finite numbers of oscillators. The coupling strengths change on a fast time scale based on a Hebbian rule. A global separator is introduced which receives input from and sends feedback to each oscillator in the matrix. The global separator is used to desynchronize different oscillator groups. Unlike many other models, the properties of this network emerge from local connections, that preserve spatial relationships among components, and are critical for encoding Gestalt principles of feature grouping. The ability to synchronize and desynchronize oscillator groups within this network offers a promising approach for pattern segmentation and figure/ground segregation based on oscillatory correlation. ",
"neighbors": [
337
],
"mask": "Train"
},
{
"node_id": 78,
"label": 6,
"text": "Title: Probabilistic Networks: New Models and New Methods \nAbstract: In this paper I describe the implementation of a probabilistic regression model in BUGS. BUGS is a program that carries out Bayesian inference on statistical problems using a simulation technique known as Gibbs sampling. It is possible to implement surprisingly complex regression models in this environment. I demonstrate the simultaneous inference of an interpolant and an input-dependent noise level. ",
"neighbors": [
157,
214,
469,
560,
766,
2681
],
"mask": "Validation"
},
{
"node_id": 79,
"label": 6,
"text": "Title: A hierarchical ensemble of decision trees applied to classifying data from a psychological experiment \nAbstract: Classifying by hand complex data coming from psychology experiments can be a long and difficult task, because of the quantity of data to classify and the amount of training it may require. One way to alleviate this problem is to use machine learning techniques. We built a classifier based on decision trees that reproduces the classifying process used by two humans on a sample of data and that learns how to classify unseen data. The automatic classifier proved to be more accurate, more constant and much faster than classification by hand. ",
"neighbors": [
438,
2207
],
"mask": "Validation"
},
{
"node_id": 80,
"label": 2,
"text": "Title: Neural Network Implementation in SAS R Software \nAbstract: The estimation or training methods in the neural network literature are usually some simple form of gradient descent algorithm suitable for implementation in hardware using massively parallel computations. For ordinary computers that are not massively parallel, optimization algorithms such as those in several SAS procedures are usually far more efficient. This talk shows how to fit neural networks using SAS/OR R fl , SAS/ETS R fl , and SAS/STAT R fl software. ",
"neighbors": [
15,
2044
],
"mask": "Train"
},
{
"node_id": 81,
"label": 3,
"text": "Title: A Modification to Evidential Probability \nAbstract: Selecting the right reference class and the right interval when faced with conflicting candidates and no possibility of establishing subset style dominance has been a problem for Kyburg's Evidential Probability system. Various methods have been proposed by Loui and Kyburg to solve this problem in a way that is both intuitively appealing and justifiable within Kyburg's framework. The scheme proposed in this paper leads to stronger statistical assertions without sacrificing too much of the intuitive appeal of Kyburg's latest proposal. ",
"neighbors": [
647
],
"mask": "Validation"
},
{
"node_id": 82,
"label": 4,
"text": "Title: A Reinforcement Learning Approach to Job-shop Scheduling \nAbstract: We apply reinforcement learning methods to learn domain-specific heuristics for job shop scheduling. A repair-based scheduler starts with a critical-path schedule and incrementally repairs constraint violations with the goal of finding a short conflict-free schedule. The temporal difference algorithm T D() is applied to train a neural network to learn a heuristic evaluation function over states. This evaluation function is used by a one-step looka-head search procedure to find good solutions to new scheduling problems. We evaluate this approach on synthetic problems and on problems from a NASA space shuttle payload processing task. The evaluation function is trained on problems involving a small number of jobs and then tested on larger problems. The TD sched-uler performs better than the best known existing algorithm for this task|Zweben's iterative repair method based on simulated annealing. The results suggest that reinforcement learning can provide a new method for constructing high-performance scheduling systems.",
"neighbors": [
39,
239,
295,
305,
410,
548,
565,
1378,
1440,
1553
],
"mask": "Train"
},
{
"node_id": 83,
"label": 2,
"text": "Title: A Neural Network Pole Balancer that Learns and Operates on a Real Robot in Real Time \nAbstract: A neural network approach to the classic inverted pendulum task is presented. This task is the task of keeping a rigid pole, hinged to a cart and free to fall in a plane, in a roughly vertical orientation by moving the cart horizontally in the plane while keeping the cart within some maximum distance of its starting position. This task constitutes a difficult control problem if the parameters of the cart-pole system are not known precisely or are variable. It also forms the basis of an even more complex control-learning problem if the controller must learn the proper actions for successfully balancing the pole given only the current state of the system and a failure signal when the pole angle from the vertical becomes too great or the cart exceeds one of the boundaries placed on its position. The approach presented is demonstrated to be effective for the real-time control of a small, self-contained mini-robot, specially outfitted for the task. Origins and details of the learning scheme, specifics of the mini-robot hardware, and results of actual learning trials are presented. ",
"neighbors": [
294,
747
],
"mask": "Validation"
},
{
"node_id": 84,
"label": 3,
"text": "Title: Approximate Bayes Factors and Accounting for Model Uncertainty in Generalized Linear Models \nAbstract: Technical Report no. 255 Department of Statistics, University of Washington August 1993; Revised March 1994 ",
"neighbors": [
12,
27,
155,
347,
715,
950,
998,
1240,
1241,
1347,
1550,
1803
],
"mask": "Test"
},
{
"node_id": 85,
"label": 4,
"text": "Title: Q-Learning with Hidden-Unit Restarting \nAbstract: Platt's resource-allocation network (RAN) (Platt, 1991a, 1991b) is modified for a reinforcement-learning paradigm and to \"restart\" existing hidden units rather than adding new units. After restarting, units continue to learn via back-propagation. The resulting restart algorithm is tested in a Q-learning network that learns to solve an inverted pendulum problem. Solutions are found faster on average with the restart algorithm than without it.",
"neighbors": [
294,
425,
465,
552,
565,
2368
],
"mask": "Train"
},
{
"node_id": 86,
"label": 5,
"text": "Title: THE EXPANDABLE SPLIT WINDOW PARADIGM FOR EXPLOITING FINE-GRAIN PARALLELISM \nAbstract: We propose a new processing paradigm, called the Expandable Split Window (ESW) paradigm, for exploiting fine-grain parallelism. This paradigm considers a window of instructions (possibly having dependencies) as a single unit, and exploits fine-grain parallelism by overlapping the execution of multiple windows. The basic idea is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. This processing paradigm shares a number of properties of the restricted dataflow machines, but was derived from the sequential von Neumann architecture. We also present an implementation of the Expandable Split Window execution model, and preliminary performance results. ",
"neighbors": [
249,
373,
652,
735
],
"mask": "Test"
},
{
"node_id": 87,
"label": 5,
"text": "Title: A Hybrid Nearest-Neighbor and Nearest-Hyperrectangle Algorithm \nAbstract: We propose a new processing paradigm, called the Expandable Split Window (ESW) paradigm, for exploiting fine-grain parallelism. This paradigm considers a window of instructions (possibly having dependencies) as a single unit, and exploits fine-grain parallelism by overlapping the execution of multiple windows. The basic idea is to connect multiple sequential processors, in a decoupled and decentralized manner, to achieve overall multiple issue. This processing paradigm shares a number of properties of the restricted dataflow machines, but was derived from the sequential von Neumann architecture. We also present an implementation of the Expandable Split Window execution model, and preliminary performance results. ",
"neighbors": [
47,
383,
719,
2021,
2245
],
"mask": "Train"
},
{
"node_id": 88,
"label": 6,
"text": "Title: Hoeffding Races: Accelerating Model Selection Search for Classification and Function Approximation \nAbstract: Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, lack of a distance metric between various models reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models, and concentrating the computational effort at differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems. ",
"neighbors": [
44,
116,
208,
225,
251,
371,
587,
682,
760,
762
],
"mask": "Validation"
},
{
"node_id": 89,
"label": 6,
"text": "Title: NP-Completeness of Searches for Smallest Possible Feature Sets a subset of the set of all\nAbstract: In many learning problems, the learning system is presented with values for features that are actually irrelevant to the concept it is trying to learn. The FOCUS algorithm, due to Almuallim and Dietterich, performs an explicit search for the smallest possible input feature set S that permits a consistent mapping from the features in S to the output feature. The FOCUS algorithm can also be seen as an algorithm for learning determinations or functional dependencies, as suggested in [6]. Another algorithm for learning determinations appears in [7]. The FOCUS algorithm has superpolynomial runtime, but Almuallim and Di-etterich leave open the question of tractability of the underlying problem. In this paper, the problem is shown to be NP-complete. We also describe briefly some experiments that demonstrate the benefits of determination learning, and show that finding lowest-cardinality determinations is easier in practice than finding minimal determi Define the MIN-FEATURES problem as follows: given a set X of examples (which are each composed of a a binary value specifying the value of the target feature and a vector of binary values specifying the values of the other features) and a number n, determine whether or not there exists some feature set S such that: We show that MIN-FEATURES is NP-complete by reducing VERTEX-COVER to MIN-FEATURES. 1 The VERTEX-COVER problem may be stated as the question: given a graph G with vertices V and edges E, is there a subset V 0 of V , of size m, such that each edge in E is connected to at least one vertex in V 0 ? We may reduce an instance of VERTEX-COVER to an instance of MIN-FEATURES by mapping each edge in E to an example in X, with one input feature for every vertex in V . 1 In [8], a \"proof\" is reported for this result by reduction to set covering. The proof therefore fails to show NP-completeness. nations.",
"neighbors": [
430,
635,
638
],
"mask": "Train"
},
{
"node_id": 90,
"label": 2,
"text": "Title: The wake-sleep algorithm for unsupervised neural networks \nAbstract: We describe an unsupervised learning algorithm for a multilayer network of stochastic neurons. Bottom-up recognition connections convert the input into representations in successive hidden layers and top-down generative connections reconstruct the representation in one layer from the representation in the layer above. In the wake phase, neurons are driven by recognition connections, and generative connections are adapted to increase the probability that they would reconstruct the correct activity vector in the layer below. In the sleep phase, neurons are driven by generative connections and recognition connections are adapted to increase the probability that they would produce Supervised learning algorithms for multilayer neural networks face two problems: They require a teacher to specify the desired output of the network and they require some method of communicating error information to all of the connections. The wake-sleep algorithm finesses both these problems. When there is no teaching signal to be matched, some other goal is required to force the hidden units to extract underlying structure. In the wake-sleep algorithm the goal is to learn representations that are economical to describe but allow the input to be reconstructed accurately. Each input vector could be communicated to a receiver by first sending its hidden representation and then sending the difference between the input vector and its top-down reconstruction from the hidden representation. The aim of learning is to minimize the description length which is the total number of bits that would be required to communicate the input vectors in this way [1]. No communication actually takes place, but minimizing the description length that would be required forces the network to learn economical representations that capture the underlying regularities in the data [2]. the correct activity vector in the layer above.",
"neighbors": [
680
],
"mask": "Validation"
},
{
"node_id": 91,
"label": 2,
"text": "Title: IEEE Learning the Semantic Similarity of Reusable Software Components \nAbstract: Properly structured software libraries are crucial for the success of software reuse. Specifically, the structure of the software library ought to reect the functional similarity of the stored software components in order to facilitate the retrieval process. We propose the application of artificial neural network technology to achieve such a structured library. In more detail, we utilize an artificial neural network adhering to the unsupervised learning paradigm. The distinctive feature of this very model is to make the semantic relationship between the stored software components geographically explicit. Thus, the actual user of the software library gets a notion of the semantic relationship between the components in terms of their geographical closeness. ",
"neighbors": [
745,
747
],
"mask": "Test"
},
{
"node_id": 92,
"label": 4,
"text": "Title: Learning Analytically and Inductively \nAbstract: Learning is a fundamental component of intelligence, and a key consideration in designing cognitive architectures such as Soar [ Laird et al., 1986 ] . This chapter considers the question of what constitutes an appropriate general-purpose learning mechanism. We are interested in mechanisms that might explain and reproduce the rich variety of learning capabilities of humans, ranging from learning perceptual-motor skills such as how to ride a bicycle, to learning highly cognitive tasks such as how to play chess. Research on learning in fields such as cognitive science, artificial intelligence, neurobiology, and statistics has led to the identification of two distinct classes of learning methods: inductive and analytic. Inductive methods, such as neural network Backpropagation, learn general laws by finding statistical correlations and regularities among a large set of training examples. In contrast, analytical methods, such as Explanation-Based Learning, acquire general laws from many fewer training examples. They rely instead on prior knowledge to analyze individual training examples in detail, then use this analysis to distinguish relevant example features from the irrelevant. The question considered in this chapter is how to best combine inductive and analytical learning in an architecture that seeks to cover the range of learning exhibited by intelligent systems such as humans. We present a specific learning mechanism, Explanation Based Neural Network learning (EBNN), that blends these two types of learning, and present experimental results demonstrating its ability to learn control strategies for a mobile robot using ",
"neighbors": [
136,
479,
552,
565,
1259,
2438
],
"mask": "Train"
},
{
"node_id": 93,
"label": 3,
"text": "Title: Blocking Gibbs Sampling for Linkage Analysis in Large Pedigrees with Many Loops \nAbstract: Learning is a fundamental component of intelligence, and a key consideration in designing cognitive architectures such as Soar [ Laird et al., 1986 ] . This chapter considers the question of what constitutes an appropriate general-purpose learning mechanism. We are interested in mechanisms that might explain and reproduce the rich variety of learning capabilities of humans, ranging from learning perceptual-motor skills such as how to ride a bicycle, to learning highly cognitive tasks such as how to play chess. Research on learning in fields such as cognitive science, artificial intelligence, neurobiology, and statistics has led to the identification of two distinct classes of learning methods: inductive and analytic. Inductive methods, such as neural network Backpropagation, learn general laws by finding statistical correlations and regularities among a large set of training examples. In contrast, analytical methods, such as Explanation-Based Learning, acquire general laws from many fewer training examples. They rely instead on prior knowledge to analyze individual training examples in detail, then use this analysis to distinguish relevant example features from the irrelevant. The question considered in this chapter is how to best combine inductive and analytical learning in an architecture that seeks to cover the range of learning exhibited by intelligent systems such as humans. We present a specific learning mechanism, Explanation Based Neural Network learning (EBNN), that blends these two types of learning, and present experimental results demonstrating its ability to learn control strategies for a mobile robot using ",
"neighbors": [
725,
759
],
"mask": "Test"
},
{
"node_id": 94,
"label": 3,
"text": "Title: Perfect Simulation in Stochastic Geometry \nAbstract: Simulation plays an important role in stochastic geometry and related fields, because all but the simplest random set models tend to be intractable to analysis. Many simulation algorithms deliver (approximate) samples of such random set models, for example by simulating the equilibrium distribution of a Markov chain such as a spatial birth-and-death process. The samples usually fail to be exact because the algorithm simulates the Markov chain for a long but finite time, and thus convergence to equilibrium is only approximate. The seminal work by Propp and Wilson made an important contribution to simulation by proposing a coupling method, Coupling from the Past (CFTP), which delivers perfect, that is to say exact, simulations of Markov chains. In this paper we introduce this new idea of perfect simulation and illustrate it using two common models in stochastic geometry: the dead leaves model and a Boolean model conditioned to cover a finite set of points. ",
"neighbors": [
41,
126
],
"mask": "Test"
},
{
"node_id": 95,
"label": 3,
"text": "Title: Bayesian Detection of Clusters and Discontinuities in Disease Maps \nAbstract: Simulation plays an important role in stochastic geometry and related fields, because all but the simplest random set models tend to be intractable to analysis. Many simulation algorithms deliver (approximate) samples of such random set models, for example by simulating the equilibrium distribution of a Markov chain such as a spatial birth-and-death process. The samples usually fail to be exact because the algorithm simulates the Markov chain for a long but finite time, and thus convergence to equilibrium is only approximate. The seminal work by Propp and Wilson made an important contribution to simulation by proposing a coupling method, Coupling from the Past (CFTP), which delivers perfect, that is to say exact, simulations of Markov chains. In this paper we introduce this new idea of perfect simulation and illustrate it using two common models in stochastic geometry: the dead leaves model and a Boolean model conditioned to cover a finite set of points. ",
"neighbors": [
161,
358,
759,
1255
],
"mask": "Validation"
},
{
"node_id": 96,
"label": 0,
"text": "Title: Lazy Induction Triggered by CBR \nAbstract: In recent years, case-based reasoning has been demonstrated to be highly useful for problem solving in complex domains. Also, mixed paradigm approaches emerged for combining CBR and induction techniques aiming at verifying the knowledge and/or building an efficient case memory. However, in complex domains induction over the whole problem space is often not possible or too time consuming. In this paper, an approach is presented which (owing to a close interaction with the CBR part) attempts to induce rules only for a particular context, i.e. for a problem just being solved by a CBR-oriented system. These rules may then be used for indexing purposes or similarity assessment in order to support the CBR process in the future. ",
"neighbors": [
438,
478,
649,
2061
],
"mask": "Train"
},
{
"node_id": 97,
"label": 2,
"text": "Title: Adaptive Tuning of Numerical Weather Prediction Models: Simultaneous Estimation of Weighting, Smoothing and Physical Parameters 1 \nAbstract: In recent years, case-based reasoning has been demonstrated to be highly useful for problem solving in complex domains. Also, mixed paradigm approaches emerged for combining CBR and induction techniques aiming at verifying the knowledge and/or building an efficient case memory. However, in complex domains induction over the whole problem space is often not possible or too time consuming. In this paper, an approach is presented which (owing to a close interaction with the CBR part) attempts to induce rules only for a particular context, i.e. for a problem just being solved by a CBR-oriented system. These rules may then be used for indexing purposes or similarity assessment in order to support the CBR process in the future. ",
"neighbors": [
439
],
"mask": "Validation"
},
{
"node_id": 98,
"label": 6,
"text": "Title: Planning and Learning in an Adversarial Robotic Game \nAbstract: 1 This paper demonstrates the tandem use of a finite automata learning algorithm and a utility planner for an adversarial robotic domain. For many applications, robot agents need to predict the movement of objects in the environment and plan to avoid them. When the robot has no reasoning model of the object, machine learning techniques can be used to generate one. In our project, we learn a DFA model of an adversarial robot and use the automaton to predict the next move of the adversary. The robot agent plans a path to avoid the adversary at the predicted location while fulfilling the goal requirements. ",
"neighbors": [
615,
1954,
2696
],
"mask": "Train"
},
{
"node_id": 99,
"label": 3,
"text": "Title: Bayesian Forecasting of Multinomial Time Series through Conditionally Gaussian Dynamic Models \nAbstract: Claudia Cargnoni is with the Dipartimento Statistico, Universita di Firenze, 50100 Firenze, Italy. Peter Muller is Assistant Professor, and Mike West is Professor, in the Institute of Statistics and Decision Sciences at Duke University, Durham NC 27708-0251. Research of Cargnoni was performed while visiting ISDS during 1995. Muller and West were partially supported by NSF under grant DMS-9305699. ",
"neighbors": [
759,
1255,
1613,
1619,
1722,
1803,
1852,
2578,
2592,
2679
],
"mask": "Test"
},
{
"node_id": 100,
"label": 1,
"text": "Title: Using Markov Chains to Analyze GAFOs \nAbstract: Our theoretical understanding of the properties of genetic algorithms (GAs) being used for function optimization (GAFOs) is not as strong as we would like. Traditional schema analysis provides some first order insights, but doesn't capture the non-linear dynamics of the GA search process very well. Markov chain theory has been used primarily for steady state analysis of GAs. In this paper we explore the use of transient Markov chain analysis to model and understand the behavior of finite population GAFOs observed while in transition to steady states. This approach appears to provide new insights into the circumstances under which GAFOs will (will not) perform well. Some preliminary results are presented and an initial evaluation of the merits of this approach is provided. ",
"neighbors": [
265,
758,
1611
],
"mask": "Test"
},
{
"node_id": 101,
"label": 2,
"text": "Title: Adaptive Noise Injection for Input Variables Relevance Determination \nAbstract: In this paper we consider the application of training with noise in multi-layer perceptron to input variables relevance determination. Noise injection is modified in order to penalize irrelevant features. The proposed algorithm is attractive as it requires the tuning of a single parameter. This parameter controls the penalization of the inputs together with the complexity of the model. After the presentation of the method, experimental evidences are given on simulated data sets.",
"neighbors": [
331,
1112,
2680,
2686
],
"mask": "Validation"
},
{
"node_id": 102,
"label": 2,
"text": "Title: Multivariate versus Univariate Decision Trees \nAbstract: COINS Technical Report 92-8 January 1992 Abstract In this paper we present a new multivariate decision tree algorithm LMDT, which combines linear machines with decision trees. LMDT constructs each test in a decision tree by training a linear machine and then eliminating irrelevant and noisy variables in a controlled manner. To examine LMDT's ability to find good generalizations we present results for a variety of domains. We compare LMDT empirically to a univariate decision tree algorithm and observe that when multivariate tests are the appropriate bias for a given data set, LMDT finds small accurate trees. ",
"neighbors": [
404,
1824,
1893,
1895,
1964,
2012,
2333,
2583
],
"mask": "Train"
},
{
"node_id": 103,
"label": 4,
"text": "Title: NEUROCONTROL BY REINFORCEMENT LEARNING \nAbstract: Reinforcement learning (RL) is a model-free tuning and adaptation method for control of dynamic systems. Contrary to supervised learning, based usually on gradient descent techniques, RL does not require any model or sensitivity function of the process. Hence, RL can be applied to systems that are poorly understood, uncertain, nonlinear or for other reasons untractable with conventional methods. In reinforcement learning, the overall controller performance is evaluated by a scalar measure, called reinforcement. Depending on the type of the control task, reinforcement may represent an evaluation of the most recent control action or, more often, of an entire sequence of past control moves. In the latter case, the RL system learns how to predict the outcome of each individual control action. This prediction is then used to adjust the parameters of the controller. The mathematical background of RL is closely related to optimal control and dynamic programming. This paper gives a comprehensive overview of the RL methods and presents an application to the attitude control of a satellite. Some well known applications from the literature are reviewed as well. ",
"neighbors": [
128,
294,
465,
471,
565
],
"mask": "Validation"
},
{
"node_id": 104,
"label": 2,
"text": "Title: How Lateral Interaction Develops in a Self-Organizing Feature Map \nAbstract: A biologically motivated mechanism for self-organizing a neural network with modifiable lateral connections is presented. The weight modification rules are purely activity-dependent, unsupervised and local. The lateral interaction weights are initially random but develop into a \"Mexican hat\" shape around each neuron. At the same time, the external input weights self-organize to form a topological map of the input space. The algorithm demonstrates how self-organization can bootstrap itself using input information. Predictions of the algorithm agree very well with experimental observations on the development of lateral connections in cortical feature maps. ",
"neighbors": [
747,
771
],
"mask": "Train"
},
{
"node_id": 105,
"label": 3,
"text": "Title: The New Challenge: From a Century of Statistics to an Age of Causation \nAbstract: Some of the main users of statistical methods - economists, social scientists, and epidemiologists are discovering that their fields rest not on statistical but on causal foundations. The blurring of these foundations over the years follows from the lack of mathematical notation capable of distinguishing causal from equational relationships. By providing formal and natural explication of such relations, graphical methods have the potential to revolutionize how statistics is used in knowledge-rich applications. Statisticians, in response, are beginning to realize that causality is not a metaphysical dead-end but a meaningful concept with clear mathematical underpinning. The paper surveys these developments and outlines future challenges. ",
"neighbors": [
248,
1326,
2434
],
"mask": "Validation"
},
{
"node_id": 106,
"label": 5,
"text": "Title: Combining Top-down and Bottom-up Techniques in Inductive Logic Programming \nAbstract: This paper describes a new method for inducing logic programs from examples which attempts to integrate the best aspects of existing ILP methods into a single coherent framework. In particular, it combines a bottom-up method similar to Golem with a top-down method similar to Foil. It also includes a method for predicate invention similar to Champ and an elegant solution to the \"noisy oracle\" problem which allows the system to learn recursive programs without requiring a complete set of positive examples. Systematic experimental comparisons to both Golem and Foil on a range of problems are used to clearly demonstrate the ad vantages of the approach.",
"neighbors": [
597
],
"mask": "Train"
},
{
"node_id": 107,
"label": 3,
"text": "Title: Computing upper and lower bounds on likelihoods in intractable networks \nAbstract: We present deterministic techniques for computing upper and lower bounds on marginal probabilities in sigmoid and noisy-OR networks. These techniques become useful when the size of the network (or clique size) precludes exact computations. We illustrate the tightness of the bounds by numerical experi ments.",
"neighbors": [
108,
250,
498,
898,
1288,
1937
],
"mask": "Train"
},
{
"node_id": 108,
"label": 3,
"text": "Title: Recursive algorithms for approximating probabilities in graphical models \nAbstract: MIT Computational Cognitive Science Technical Report 9604 Abstract We develop a recursive node-elimination formalism for efficiently approximating large probabilistic networks. No constraints are set on the network topologies. Yet the formalism can be straightforwardly integrated with exact methods whenever they are/become applicable. The approximations we use are controlled: they maintain consistently upper and lower bounds on the desired quantities at all times. We show that Boltzmann machines, sigmoid belief networks, or any combination (i.e., chain graphs) can be handled within the same framework. The accuracy of the methods is verified exper imentally.",
"neighbors": [
107,
250,
304,
498,
898,
1288
],
"mask": "Train"
},
{
"node_id": 109,
"label": 6,
"text": "Title: A General Lower Bound on the Number of Examples Needed for Learning \nAbstract: We prove a lower bound of ( 1 * ln 1 ffi + VCdim(C) * ) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and * and ffi are the accuracy and confidence parameters. This improves the previous best lower bound of ( 1 * ln 1 ffi + VCdim(C)), and comes close to the known general upper bound of O( 1 ffi + VCdim(C) * ln 1 * ) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. ",
"neighbors": [
171,
459,
488,
507,
535,
635,
640,
672,
778,
884,
955,
967,
1074,
1296,
1661,
1888,
2053,
2054,
2315
],
"mask": "Train"
},
{
"node_id": 110,
"label": 2,
"text": "Title: Data Exploration Using Self-Organizing Maps \nAbstract: We prove a lower bound of ( 1 * ln 1 ffi + VCdim(C) * ) on the number of random examples required for distribution-free learning of a concept class C, where VCdim(C) is the Vapnik-Chervonenkis dimension and * and ffi are the accuracy and confidence parameters. This improves the previous best lower bound of ( 1 * ln 1 ffi + VCdim(C)), and comes close to the known general upper bound of O( 1 ffi + VCdim(C) * ln 1 * ) for consistent algorithms. We show that for many interesting concept classes, including kCNF and kDNF, our bound is actually tight to within a constant factor. ",
"neighbors": [
687,
745,
747
],
"mask": "Validation"
},
{
"node_id": 111,
"label": 2,
"text": "Title: Tau Net: A Neural Network for Modeling Temporal Variability \nAbstract: The ability to handle temporal variation is important when dealing with real-world dynamic signals. In many applications, inputs do not come in as fixed-rate sequences, but rather as signals with time scales that can vary from one instance to the next; thus, modeling dynamic signals requires not only the ability to recognize sequences but also the ability to handle temporal changes in the signal. This paper discusses \"Tau Net,\" a neural network for modeling dynamic signals, and its application to speech. In Tau Net, sequence learning is accomplished using a combination of prediction, recurrence and time-delay connections. Temporal variability is modeled by having adaptable time constants in the network, which are adjusted with respect to the prediction error. Adapting the time constants changes the time scale of the network, and the adapted value of the network's time constant provides a measure of temporal variation in the signal. Tau Net has been applied to several simple signals: sets of sine waves differing in frequency and in phase [2], a multidimensional signal representing the walking gait of children [3], and the energy contour of a simple speech utterance [11]. Tau Net has also been shown to work on a voicing distinction task using synthetic speech data [12]. In this paper, Tau Net is applied to two speaker-independent tasks, vowel recognition (of f/ae/,/iy/,/ux/g) and consonant recognition (of f/p/,/t/,/k/g) using speech data taken from the TIMIT database. It is shown that Tau Nets, trained on medium-rate tokens, achieved about the same performance as networks without time constants trained on tokens at all rates, and performed better than networks without time constants trained on medium-rate tokens. Our results demonstrate Tau Net's ability to identify vowels and consonants at variable speech rates by extrapolating to rates not represented in the training set. ",
"neighbors": [
350
],
"mask": "Validation"
},
{
"node_id": 112,
"label": 2,
"text": "Title: Interpretable Neural Networks with BP-SOM \nAbstract: Interpretation of models induced by artificial neural networks is often a difficult task. In this paper we focus on a relatively novel neural network architecture and learning algorithm, bp-som, that offers possibilities to overcome this difficulty. It is shown that networks trained with bp-som show interesting regularities, in that hidden-unit activations become restricted to discrete values, and that the som part can be exploited for automatic rule extraction.",
"neighbors": [
572,
624,
747,
881
],
"mask": "Test"
},
{
"node_id": 113,
"label": 2,
"text": "Title: LU TP Pattern Discrimination Using Feed-Forward Networks a Benchmark Study of Scaling Behaviour \nAbstract: The discrimination powers of Multilayer perceptron (MLP) and Learning Vector Quantisation (LVQ) networks are compared for overlapping Gaussian distributions. It is shown, both analytically and with Monte Carlo studies, that the MLP network handles high dimensional problems in a more efficient way than LVQ. This is mainly due to the sigmoidal form of the MLP transfer function, but also to the the fact that the MLP uses hyper-planes more efficiently. Both algorithms are equally robust to limited training sets and the learning curves fall off like 1=M, where M is the training set size, which is compared to theoretical predictions from statistical estimates and Vapnik-Chervonenkis bounds. ",
"neighbors": [
747
],
"mask": "Train"
},
{
"node_id": 114,
"label": 6,
"text": "Title: A Generalization of Sauer's Lemma \nAbstract: The discrimination powers of Multilayer perceptron (MLP) and Learning Vector Quantisation (LVQ) networks are compared for overlapping Gaussian distributions. It is shown, both analytically and with Monte Carlo studies, that the MLP network handles high dimensional problems in a more efficient way than LVQ. This is mainly due to the sigmoidal form of the MLP transfer function, but also to the the fact that the MLP uses hyper-planes more efficiently. Both algorithms are equally robust to limited training sets and the learning curves fall off like 1=M, where M is the training set size, which is compared to theoretical predictions from statistical estimates and Vapnik-Chervonenkis bounds. ",
"neighbors": [
171
],
"mask": "Train"
},
{
"node_id": 115,
"label": 3,
"text": "Title: Rate of Convergence of the Gibbs Sampler by Gaussian Approximation SUMMARY \nAbstract: In this article we approximate the rate of convergence of the Gibbs sampler by a normal approximation of the target distribution. Based on this approximation, we consider many implementational issues for the Gibbs sampler, e.g., updating strategy, parameterization and blocking. We give theoretical results to justify our approximation and illustrate our methods in a number of realistic examples. ",
"neighbors": [
41,
138,
904,
1713,
2153,
2421
],
"mask": "Validation"
},
{
"node_id": 116,
"label": 0,
"text": "Title: Rate of Convergence of the Gibbs Sampler by Gaussian Approximation SUMMARY \nAbstract: Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same exibility as a conventional linear search, but at greatly reduced computational cost.",
"neighbors": [
88,
686,
2428
],
"mask": "Validation"
},
{
"node_id": 117,
"label": 3,
"text": "Title: How Many Clusters? Which Clustering Method? Answers Via Model-Based Cluster Analysis 1 \nAbstract: Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same exibility as a conventional linear search, but at greatly reduced computational cost.",
"neighbors": [
155,
345,
452,
513
],
"mask": "Train"
},
{
"node_id": 118,
"label": 4,
"text": "Title: Learning to Race: Experiments with a Simulated Race Car \nAbstract: We have implemented a reinforcement learning architecture as the reactive component of a two layer control system for a simulated race car. We have found that separating the layers has expedited gradually improving competition and mult-agent interaction. We ran experiments to test the tuning, decomposition and coordination of the low level behaviors. We then extended our control system to allow passing of other cars and tested its ability to avoid collisions. The best design used reinforcement learning with separate networks for each behavior, coarse coded input and a simple rule based coordination mechanism. ",
"neighbors": [
465,
565,
636
],
"mask": "Train"
},
{
"node_id": 119,
"label": 5,
"text": "Title: Cost-sensitive feature reduction applied to a hybrid genetic algorithm \nAbstract: This study is concerned with whether it is possible to detect what information contained in the training data and background knowledge is relevant for solving the learning problem, and whether irrelevant information can be eliminated in preprocessing before starting the learning process. A case study of data preprocessing for a hybrid genetic algorithm shows that the elimination of irrelevant features can substantially improve the efficiency of learning. In addition, cost-sensitive feature elimination can be effective for reducing costs of induced hypotheses.",
"neighbors": [
228,
430,
686
],
"mask": "Validation"
},
{
"node_id": 120,
"label": 1,
"text": "Title: Genetic Programming Exploratory Power and the Discovery of Functions \nAbstract: Hierarchical genetic programming (HGP) approaches rely on the discovery, modification, and use of new functions to accelerate evolution. This paper provides a qualitative explanation of the improved behavior of HGP, based on an analysis of the evolution process from the dual perspective of diversity and causality. From a static point of view, the use of an HGP approach enables the manipulation of a population of higher diversity programs. Higher diversity increases the exploratory ability of the genetic search process, as demonstrated by theoretical and experimental fitness distributions and expanded structural complexity of individuals. From a dynamic point of view, an analysis of the causality of the crossover operator suggests that HGP discovers and exploits useful structures in a bottom-up, hierarchical manner. Diversity and causality are complementary, affecting exploration and exploitation in genetic search. Unlike other machine learning techniques that need extra machinery to control the tradeoff between them, HGP automatically trades off exploration and exploitation. ",
"neighbors": [
188,
1184,
1959,
2175
],
"mask": "Validation"
},
{
"node_id": 121,
"label": 2,
"text": "Title: LEARNING COMPLEX, EXTENDED SEQUENCES USING THE PRINCIPLE OF HISTORY COMPRESSION (Neural Computation, 4(2):234-242, 1992) \nAbstract: Previous neural network learning algorithms for sequence processing are computationally expensive and perform poorly when it comes to long time lags. This paper first introduces a simple principle for reducing the descriptions of event sequences without loss of information. A consequence of this principle is that only unexpected inputs can be relevant. This insight leads to the construction of neural architectures that learn to `divide and conquer' by recursively decomposing sequences. I describe two architectures. The first functions as a self-organizing multi-level hierarchy of recurrent networks. The second, involving only two recurrent networks, tries to collapse a multi-level predictor hierarchy into a single recurrent net. Experiments show that the system can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets.",
"neighbors": [
595,
731,
1825,
1990
],
"mask": "Train"
},
{
"node_id": 122,
"label": 2,
"text": "Title: Tilt Aftereffects in a Self-Organizing Model of the Primary Visual Cortex \nAbstract: Previous neural network learning algorithms for sequence processing are computationally expensive and perform poorly when it comes to long time lags. This paper first introduces a simple principle for reducing the descriptions of event sequences without loss of information. A consequence of this principle is that only unexpected inputs can be relevant. This insight leads to the construction of neural architectures that learn to `divide and conquer' by recursively decomposing sequences. I describe two architectures. The first functions as a self-organizing multi-level hierarchy of recurrent networks. The second, involving only two recurrent networks, tries to collapse a multi-level predictor hierarchy into a single recurrent net. Experiments show that the system can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets.",
"neighbors": [
124,
127,
1093,
2400
],
"mask": "Train"
},
{
"node_id": 123,
"label": 2,
"text": "Title: Fast Numerical Integration of Relaxation Oscillator Networks Based on Singular Limit Solutions \nAbstract: Relaxation oscillations exhibiting more than one time scale arise naturally from many physical systems. This paper proposes a method to numerically integrate large systems of relaxation oscillators. The numerical technique, called the singular limit method, is derived from analysis of relaxation oscillations in the singular limit. In such limit, system evolution gives rise to time instants at which fast dynamics takes place and intervals between them during which slow dynamics takes place. A full description of the method is given for LEGION (locally excitatory globally inhibitory oscillator networks), where fast dynamics, characterized by jumping which leads to dramatic phase shifts, is captured in this method by iterative operation and slow dynamics is entirely solved. The singular limit method is evaluated by computer experiments, and it produces remarkable speedup compared to other methods of integrating these systems. The speedup makes it possible to simulate large-scale oscillator networks. ",
"neighbors": [
553
],
"mask": "Validation"
},
{
"node_id": 124,
"label": 2,
"text": "Title: Self-Organization and Segmentation in a Laterally Connected Orientation Map of Spiking Neurons \nAbstract: The RF-SLISSOM model integrates two separate lines of research on computational modeling of the visual cortex. Laterally connected self-organizing maps have been used to model how afferent structures such as orientation columns and patterned lateral connections can simultaneously self-organize through input-driven Hebbian adaptation. Spiking neurons with leaky integrator synapses have been used to model image segmentation and binding by synchronization and desynchronization of neuronal group activity. Although these approaches differ in how they model the neuron and what they explain, they share the same overall layout of a laterally connected two-dimensional network. This paper shows how both self-organization and segmentation can be achieved in such an integrated network, thus presenting a unified model of development and functional dynamics in the primary visual cortex. ",
"neighbors": [
122,
127,
745,
747,
2400
],
"mask": "Train"
},
{
"node_id": 125,
"label": 2,
"text": "Title: Gaussian Processes for Bayesian Classification via Hybrid Monte Carlo \nAbstract: The full Bayesian method for applying neural networks to a prediction problem is to set up the prior/hyperprior structure for the net and then perform the necessary integrals. However, these integrals are not tractable analytically, and Markov Chain Monte Carlo (MCMC) methods are slow, especially if the parameter space is high-dimensional. Using Gaussian processes we can approximate the weight space integral analytically, so that only a small number of hyperparameters need be integrated over by MCMC methods. We have applied this idea to classification problems, obtaining ex cellent results on the real-world problems investigated so far. ",
"neighbors": [
160,
394,
1857
],
"mask": "Train"
},
{
"node_id": 126,
"label": 3,
"text": "Title: Perfect Simulation of some Point Processes for the Impatient User \nAbstract: Recently Propp and Wilson [14] have proposed an algorithm, called Coupling from the Past (CFTP), which allows not only an approximate but perfect (i.e. exact) simulation of the stationary distribution of certain finite state space Markov chains. Perfect Sampling using CFTP has been successfully extended to the context of point processes, amongst other authors, by Haggstrom et al. [5]. In [5] Gibbs sampling is applied to a bivariate point process, the penetrable spheres mixture model [19]. However, in general the running time of CFTP in terms of number of transitions is not independent of the state sampled. Thus an impatient user who aborts long runs may introduce a subtle bias, the user impatience bias. Fill [3] introduced an exact sampling algorithm for finite state space Markov chains which, in contrast to CFTP, is unbiased for user impatience. Fill's algorithm is a form of rejection sampling and similar to CFTP requires sufficient mono-tonicity properties of the transition kernel used. We show how Fill's version of rejection sampling can be extended to an infinite state space context to produce an exact sample of the penetrable spheres mixture process and related models. Following [5] we use Gibbs sampling and make use of the partial order of the mixture model state space. Thus ",
"neighbors": [
94,
2208
],
"mask": "Test"
},
{
"node_id": 127,
"label": 2,
"text": "Title: Self-Organization and Functional Role of Lateral Connections and Multisize Receptive Fields in the Primary Visual Cortex \nAbstract: Cells in the visual cortex are selective not only to ocular dominance and orientation of the input, but also to its size and spatial frequency. The simulations reported in this paper show how size selectivity could develop through Hebbian self-organization, and how receptive fields of different sizes could organize into columns like those for orientation and ocular dominance. The lateral connections in the network self-organize cooperatively and simultaneously with the receptive field sizes, and produce patterns of lateral connectivity that closely follow the receptive field organization. Together with our previous work on ocular dominance and orientation selectivity, these results suggest that a single Hebbian self-organizing process can give rise to all the major receptive field properties in the visual cortex, and also to structured patterns of lateral interactions, some of which have been verified experimentally and others predicted by the model. The model also suggests a functional role for the self-organized structures: The afferent receptive fields develop a sparse coding of the visual input, and the recurrent lateral interactions eliminate redundancies in cortical activity patterns, allowing the cortex to efficiently process massive amounts of visual information. ",
"neighbors": [
18,
122,
124,
745,
747,
2228,
2400
],
"mask": "Validation"
},
{
"node_id": 128,
"label": 4,
"text": "Title: Optimal Attitude Control of Satellites by Artificial Neural Networks: a Pilot Study \nAbstract: A pilot study is described on the practical application of artificial neural networks. The limit cycle of the attitude control of a satellite is selected as the test case. One of the sources of the limit cycle is a position dependent error in the observed attitude. A Reinforcement Learning method is selected, which is able to adapt a controller such that a cost function is optimised. An estimate of the cost function is learned by a neural `critic'. In our approach, the estimated cost function is directly represented as a function of the parameters of a linear controller. The critic is implemented as a CMAC network. Results from simulations show that the method is able to find optimal parameters without unstable behaviour. In particular in the case of large discontinuities in the attitude measurements, the method shows a clear improvement compared to the conventional approach: the RMS attitude error decreases approximately 30%. ",
"neighbors": [
103,
294,
565
],
"mask": "Train"
},
{
"node_id": 129,
"label": 1,
"text": "Title: Evolving Networks: Using the Genetic Algorithm with Connectionist Learning \nAbstract: A pilot study is described on the practical application of artificial neural networks. The limit cycle of the attitude control of a satellite is selected as the test case. One of the sources of the limit cycle is a position dependent error in the observed attitude. A Reinforcement Learning method is selected, which is able to adapt a controller such that a cost function is optimised. An estimate of the cost function is learned by a neural `critic'. In our approach, the estimated cost function is directly represented as a function of the parameters of a linear controller. The critic is implemented as a CMAC network. Results from simulations show that the method is able to find optimal parameters without unstable behaviour. In particular in the case of large discontinuities in the attitude measurements, the method shows a clear improvement compared to the conventional approach: the RMS attitude error decreases approximately 30%. ",
"neighbors": [
15,
22,
163,
188,
538,
1204,
1409,
1728,
1973,
2165,
2193,
2220,
2363,
2446,
2451
],
"mask": "Validation"
},
{
"node_id": 130,
"label": 6,
"text": "Title: PAC-Learning PROLOG clauses with or without errors \nAbstract: In a nutshell we can describe a generic ILP problem as following: given a set E of (positive and negative) examples of a target predicate, and some background knowledge B about the world (usually a logic program including facts and auxiliary predicates), the task is to find a logic program H (our hypothesis) such that all positive examples can be deduced from B and H, while no negative example can. In this paper we review some of the results achieved in this area and discuss the techniques used. Moreover we prove the following new results: * Predicates described by non-recursive, local clauses of at most k literals are PAC-learnable under any distribution. This generalizes a previous result that was valid only for constrained clauses. * Predicates that are described by k non-recursive local clauses are PAC-learnable under any distribution. This generalizes a previous result that was non construc tive and valid only under some class of distributions. Finally we introduce what we believe is the first theoretical framework for learning Prolog clauses in the presence of errors. To this purpose we introduce a new noise model, that we call the fixed attribute noise model, for learning propositional concepts over the Boolean domain. This new noise model can be of its own interest. ",
"neighbors": [
459,
640,
672
],
"mask": "Test"
},
{
"node_id": 131,
"label": 3,
"text": "Title: The Expectation-Maximization Algorithm for MAP Estimation \nAbstract: The Expectation-Maximization algorithm given by Dempster et al (1977) has enjoyed considerable popularity for solving MAP estimation problems. This note gives a simple derivation of the algorithm, due to Luttrell (1994), that better illustrates the convergence properties of the algorithm and its variants. The algorithm is illustrated with two examples: pooling data from multiple noisy sources and fitting a mixture density.",
"neighbors": [
76
],
"mask": "Validation"
},
{
"node_id": 132,
"label": 4,
"text": "Title: TOWARDS PLANNING: INCREMENTAL INVESTIGATIONS INTO ADAPTIVE ROBOT CONTROL \nAbstract: The Expectation-Maximization algorithm given by Dempster et al (1977) has enjoyed considerable popularity for solving MAP estimation problems. This note gives a simple derivation of the algorithm, due to Luttrell (1994), that better illustrates the convergence properties of the algorithm and its variants. The algorithm is illustrated with two examples: pooling data from multiple noisy sources and fitting a mixture density.",
"neighbors": [
346
],
"mask": "Train"
},
{
"node_id": 133,
"label": 2,
"text": "Title: PRIOR KNOWLEDGE AND THE CREATION OF \"VIRTUAL\" EXAMPLES FOR RBF NETWORKS 1 \nAbstract: We consider the problem of how to incorporate prior knowledge in supervised learning techniques. We set the problem in the framework of regularization theory, and consider the case in which we know that the approximated function has radial symmetry. The problem can be solved in two alternative ways: 1) use the invariance as a constraint in the regularization theory framework to derive a rotation invariant version of Radial Basis Functions; 2) use the radial symmetry to create new, \"virtual\" examples from a given data set. We show that these two apparently different methods of learning from \"hints\" (Abu-Mostafa, 1993) lead to exactly the same analyt ical solution.",
"neighbors": [
394,
608,
611
],
"mask": "Train"
},
{
"node_id": 134,
"label": 2,
"text": "Title: Gain Adaptation Beats Least Squares? \nAbstract: I present computational results suggesting that gain-adaptation algorithms based in part on connectionist learning methods may improve over least squares and other classical parameter-estimation methods for stochastic time-varying linear systems. The new algorithms are evaluated with respect to classical methods along three dimensions: asymptotic error, computational complexity, and required prior knowledge about the system. The new algorithms are all of the same order of complexity as LMS methods, O(n), where n is the dimensionality of the system, whereas least-squares methods and the Kalman filter are O(n 2 ). The new methods also improve over the Kalman filter in that they do not require a complete statistical model of how the system varies over time. In a simple computational experiment, the new methods are shown to produce asymptotic error levels near that of the optimal Kalman filter and significantly below those of least-squares and LMS methods. The new methods may perform better even than the Kalman filter if there is any error in the filter's model of how the system varies over time. ",
"neighbors": [
505,
1118,
1782,
2135
],
"mask": "Validation"
},
{
"node_id": 135,
"label": 5,
"text": "Title: More Efficient Windowing \nAbstract: Windowing has been proposed as a procedure for efficient memory use in the ID3 decision tree learning algorithm. However, previous work has shown that windowing may often lead to a decrease in performance. In this work, we try to argue that separate-and-conquer rule learning algorithms are more appropriate for windowing than divide-and-conquer algorithms, because they learn rules independently and are less susceptible to changes in class distributions. In particular, we will present a new windowing algorithm that achieves additional gains in efficiency by exploiting this property of separate-and-conquer algorithms. While the presented algorithm is only suitable for redundant, noise-free data sets, we will also briefly discuss the problem of noisy data in windowing and present some preliminary ideas how it might be solved with an extension of the algorithm introduced in this paper. ",
"neighbors": [
418,
654
],
"mask": "Train"
},
{
"node_id": 136,
"label": 5,
"text": "Title: Theory Refinement Combining Analytical and Empirical Methods \nAbstract: This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, in general fewer training examples are required to attain a given level of performance (classification accuracy) compared to a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis. ",
"neighbors": [
92,
159,
244,
479,
986,
1102,
1259,
1370,
1413,
1479,
1539,
1776,
2172,
2231,
2399,
2487,
2543,
2580,
2635
],
"mask": "Test"
},
{
"node_id": 137,
"label": 3,
"text": "Title: Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications \nAbstract: Suppose one wishes to sample from the density (x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution (ujx) can be defined, giving the joint distribution (x; u) = (x)(ujx). A MCMC scheme which samples over this joint distribution can lead to substantial gains in efficiency compared to standard approaches. The revolutionary algorithm of Swendsen and Wang (1987) is one such example. In addition to reviewing the Swendsen-Wang algorithm and its generalizations, this paper introduces a new auxiliary variable method called partial decoupling. Two applications in Bayesian image analysis are considered. The first is a binary classification problem in which partial decoupling out performs SW and single site Metropolis. The second is a PET reconstruction which uses the gray level prior of Geman and McClure (1987). A generalized Swendsen-Wang algorithm is developed for this problem, which reduces the computing time to the point that MCMC is a viable method of posterior exploration.",
"neighbors": [
138,
416,
748
],
"mask": "Train"
},
{
"node_id": 138,
"label": 3,
"text": "Title: Convergence properties of perturbed Markov chains \nAbstract: Acknowledgements. We thank Neal Madras, Radford Neal, Peter Rosenthal, and Richard Tweedie for helpful conversations. This work was partially supported by EPSRC of the U.K., and by NSERC of Canada. ",
"neighbors": [
115,
137,
416,
748,
892,
1713,
1716
],
"mask": "Train"
},
{
"node_id": 139,
"label": 1,
"text": "Title: Investigating the Generality of Automatically Defined Functions \nAbstract: This paper studies how well the combination of simulated annealing and ADFs solves genetic programming (GP) style program discovery problems. On a suite composed of the even-k-parity problems for k = 3,4,5, it analyses the performance of simulated annealing with ADFs as compared to not using ADFs. In contrast to GP results on this suite, when simulated annealing is run with ADFs, as problem size increases, the advantage to using them over a standard GP program representation is marginal. When the performance of simulated annealing is compared to GP with both algorithm using ADFs on the even-3-parity problem GP is advantageous, on the even-4-parity problem SA and GP are equal, and on the even-5-parity problem SA is advantageous.",
"neighbors": [
188,
2361
],
"mask": "Train"
},
{
"node_id": 140,
"label": 6,
"text": "Title: Exploiting the Omission of Irrelevant Data \nAbstract: Most learning algorithms work most effectively when their training data contain completely specified labeled samples. In many diagnostic tasks, however, the data will include the values of only some of the attributes; we model this as a blocking process that hides the values of those attributes from the learner. While blockers that remove the values of critical attributes can handicap a learner, this paper instead focuses on blockers that remove only irrelevant attribute values, i.e., values that are not needed to classify an instance, given the values of the other unblocked attributes. We first motivate and formalize this model of \"superfluous-value blocking\", and then demonstrate that these omissions can be useful, by proving that certain classes that seem hard to learn in the general PAC model | viz., decision trees and DNF formulae | are trivial to learn in this setting. We also show that this model can be extended to deal with (1) theory revision (i.e., modifying an existing formula); (2) blockers that occasionally include superfluous values or exclude required values; and (3) other cor ruptions of the training data. ",
"neighbors": [
323
],
"mask": "Train"
},
{
"node_id": 141,
"label": 1,
"text": "Title: Hierarchical Self-Organization in Genetic Programming \nAbstract: This paper presents an approach to automatic discovery of functions in Genetic Programming. The approach is based on discovery of useful building blocks by analyzing the evolution trace, generalizing blocks to define new functions, and finally adapting the problem representation on-the-fly. Adaptating the representation determines a hierarchical organization of the extended function set which enables a restructuring of the search space so that solutions can be found more easily. Measures of complexity of solution trees are defined for an adaptive representation framework. The minimum description length principle is applied to justify the feasibility of approaches based on a hierarchy of discovered functions and to suggest alternative ways of defining a problem's fitness function. Preliminary empirical results are presented.",
"neighbors": [
163,
188,
1184
],
"mask": "Train"
},
{
"node_id": 142,
"label": 2,
"text": "Title: PATTERN RECOGNITION VIA LINEAR PROGRAMMING THEORY AND APPLICATION TO MEDICAL DIAGNOSIS \nAbstract: A decision problem associated with a fundamental nonconvex model for linearly inseparable pattern sets is shown to be NP-complete. Another nonconvex model that employs an 1 norm instead of the 2-norm, can be solved in polynomial time by solving 2n linear programs, where n is the (usually small) dimensionality of the pattern space. An effective LP-based finite algorithm is proposed for solving the latter model. The algorithm is employed to obtain a noncon-vex piecewise-linear function for separating points representing measurements made on fine needle aspirates taken from benign and malignant human breasts. A computer program trained on 369 samples has correctly diagnosed each of 45 new samples encountered and is currently in use at the University of Wisconsin Hospitals. 1. Introduction. The fundamental problem we wish to address is that of ",
"neighbors": [
227,
230,
391,
520,
823,
1283,
1318,
1547
],
"mask": "Train"
},
{
"node_id": 143,
"label": 2,
"text": "Title: RESONANCE AND THE PERCEPTION OF MUSICAL METER \nAbstract: Many connectionist approaches to musical expectancy and music composition let the question of What next? overshadow the equally important question of When next?. One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the perception of metrical structure as a dynamic process where the temporal organization of external musical events synchronizes, or entrains, a listeners internal processing mechanisms. This article introduces a novel connectionist unit, based upon a mathematical model of entrainment, capable of phase and frequency-locking to periodic components of incoming rhythmic patterns. Networks of these units can self-organize temporally structured responses to rhythmic patterns. The resulting network behavior embodies the perception of metrical structure. The article concludes with a discussion of the implications of our approach for theories of metrical structure and musical expectancy. ",
"neighbors": [
180,
201,
337,
346,
363
],
"mask": "Train"
},
{
"node_id": 144,
"label": 2,
"text": "Title: The Observers Paradox: Apparent Computational Complexity in Physical Systems \nAbstract: Many connectionist approaches to musical expectancy and music composition let the question of What next? overshadow the equally important question of When next?. One cannot escape the latter question, one of temporal structure, when considering the perception of musical meter. We view the perception of metrical structure as a dynamic process where the temporal organization of external musical events synchronizes, or entrains, a listeners internal processing mechanisms. This article introduces a novel connectionist unit, based upon a mathematical model of entrainment, capable of phase and frequency-locking to periodic components of incoming rhythmic patterns. Networks of these units can self-organize temporally structured responses to rhythmic patterns. The resulting network behavior embodies the perception of metrical structure. The article concludes with a discussion of the implications of our approach for theories of metrical structure and musical expectancy. ",
"neighbors": [
188,
444,
2102
],
"mask": "Train"
},
{
"node_id": 145,
"label": 1,
"text": "Title: LIBGA: A USER-FRIENDLY WORKBENCH FOR ORDER-BASED GENETIC ALGORITHM RESEARCH \nAbstract: Over the years there has been several packages developed that provide a workbench for genetic algorithm (GA) research. Most of these packages use the generational model inspired by GENESIS. A few have adopted the steady-state model used in Genitor. Unfortunately, they have some deficiencies when working with order-based problems such as packing, routing, and scheduling. This paper describes LibGA, which was developed specifically for order-based problems, but which also works easily with other kinds of problems. It offers an easy to use `user-friendly' interface and allows comparisons to be made between both generational and steady-state genetic algorithms for a particular problem. It includes a variety of genetic operators for reproduction, crossover, and mutation. LibGA makes it easy to use these operators in new ways for particular applications or to develop and include new operators. Finally, it offers the unique new feature of a dynamic generation gap. ",
"neighbors": [
163,
390,
1218,
1224,
1530,
1646,
2248,
2251,
2280,
2286,
2296,
2521
],
"mask": "Train"
},
{
"node_id": 146,
"label": 2,
"text": "Title: Convergence-Zone Episodic Memory: Analysis and Simulations \nAbstract: Human episodic memory provides a seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. The system is believed to consist of a fast, temporary storage in the hippocampus, and a slow, long-term storage within the neocortex. This paper presents a neural network model of the hippocampal episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse coded in the binding layer, and stored on the weights between layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. For many configurations of the model, a theoretical lower bound for the memory capacity can be derived, and it can be an order of magnitude or higher than the number of all units in the model, and several orders of magnitude higher than the number of binding-layer units. Computational simulations further indicate that the average capacity is an order of magnitude larger than the theoretical lower bound, and making the connectivity between layers sparser causes an even further increase in capacity. Simulations also show that if more descriptive binding patterns are used, the errors tend to be more plausible (patterns are confused with other similar patterns), with a slight cost in capacity. The convergence-zone episodic memory therefore accounts for the immediate storage and associative retrieval capability and large capacity of the hippocampal memory, and shows why the memory encoding areas can be much smaller than the perceptual maps, consist of rather coarse computational units, and be only sparsely connected to the perceptual maps. ",
"neighbors": [
17,
427,
747
],
"mask": "Test"
},
{
"node_id": 147,
"label": 0,
"text": "Title: Convergence-Zone Episodic Memory: Analysis and Simulations \nAbstract: Empirical Learning Results in POLLYANNA The value of empirical learning is demonstrated by results of testing the theory space search (TSS) component of POLLYANNA. Empirical data shows approximations generated from generic simplifying assumptions to have widely varying levels of accuracy and efficiency. The candidate theory space includes some theories with Pareto optimal combinations of accuracy and efficiency, as well as others that are non-optimal. Empirical learning is thus needed to separate the optimal theories from the non-optimal ones. It works as a filter on the process of generating approximations from generic simplifying assumptions. Empirical tests serve an additional purpose as well. Theory space search collects data that precisely characterizes the tradeoff between accuracy and efficiency among the candidate approximate theories. The tradeoff data can be used to select a theory that best balances the competing objectives of accuracy and efficiency in a manner appropriate to the intended performance context. The feasibility of empirical learning is also addressed by results of testing the theory space search component of POLLYANNA. In order for empirical testing to be feasible, candidate approximate theories must be operationally usable. Candidate hearts theories generated by POLLYANNA are shown to be operationally usable by experimental results from the theory space search (TSS) phase of learning. They run on a real machine producing results that can be compared with training examples. Feasibility also depends on the information and computation costs of empirical testing. Information costs result from the need to supply the system with training examples. Computation costs result from the need to execute candidate theories. Both types of costs grow with the numbers of candidate theories to be tested. Experimental results show that empirical testing in POLLYANNA is limited more by the computation costs of executing candidate theories than by the information costs of obtaining many training examples. POLLYANNA contrasts in this respect with traditional inductive learning systems. The feasibility of empirical learning depends also on the intended performance context, and on the resources available in the context of learning. Measurements from the theory space search phase indicate that TSS algorithms performing exhaustive search would not be feasible for the hearts domain, although they may be feasible for other applications. TSS algorithms that avoid exhaustive search hold considerably more promise. ",
"neighbors": [
479
],
"mask": "Train"
},
{
"node_id": 148,
"label": 4,
"text": "Title: Multiagent Reinforcement Learning: Theoretical Framework and an Algorithm \nAbstract: In this paper, we adopt general-sum stochastic games as a framework for multiagent reinforcement learning. Our work extends previous work by Littman on zero-sum stochastic games to a broader framework. We design a multiagent Q-learning method under this framework, and prove that it converges to a Nash equilibrium under specified conditions. This algorithm is useful for finding the optimal strategy when there exists a unique Nash equilibrium in the game. When there exist multiple Nash equilibria in the game, this algorithm should be combined with other learning techniques to find optimal strategies.",
"neighbors": [
210,
460,
656,
1649,
1687
],
"mask": "Train"
},
{
"node_id": 149,
"label": 0,
"text": "Title: Corporate Memories as Distributed Case Libraries \nAbstract: Rising operating costs and structural transformations such as resizing and globaliza-tion of companies all over the world have brought into focus the emerging discipline of knowledge management that is concerned with making knowledge pay off. Corporate memories form an important part of such knowledge management initiatives in a company. In this paper, we discuss how viewing corporate memories as distributed case libraries can benefit from existing techniques for distributed case-based reasoning for resource discovery and exploitation of previous expertise. We present two techniques developed in the context of multi-agent case-based reasoning for accessing and exploiting past experience from corporate memory resources. The first approach, called Negotiated Retrieval, deals with retrieving and assembling \"case pieces\" from different resources in a corporate memory to form a good overall case. The second approach, based on Federated Peer Learning, deals with two modes of cooperation called DistCBR and ColCBR that let an agent exploit the experience and expertise of peer agents to achieve a local task. fl The first author would like to acknowledge the support by the National Science Foundation under Grant Nos. IRI-9523419 and EEC-9209623. The second author's research reported in this paper has been developed at the IIIA inside the ANALOG Project funded by Spanish CICYT grant 122/93. The content of this paper does not necessarily reflect the position or the policy of the US Government, the Kingdom of Spain Government, or the Catalonia Government, and no official endorsement should be inferred. ",
"neighbors": [
66
],
"mask": "Train"
},
{
"node_id": 150,
"label": 0,
"text": "Title: Using Knowledge of Cognitive Behavior to Learn from Failure \nAbstract: When learning from reasoning failures, knowledge of how a system behaves is a powerful lever for deciding what went wrong with the system and in deciding what the system needs to learn. A number of benefits arise when systems possess knowledge of their own operation and of their own knowledge. Abstract knowledge about cognition can be used to select diagnosis and repair strategies from among alternatives. Specific kinds of self-knowledge can be used to distinguish between failure hypothesis candidates. Making self-knowledge explicit can also facilitate the use of such knowledge across domains and can provide a principled way to incorporate new learning strategies. To illustrate the advantages of self-knowledge for learning, we provide implemented examples from two different systems: A plan execution system called RAPTER and a story understanding system called Meta-AQUA. ",
"neighbors": [
50,
643
],
"mask": "Validation"
},
{
"node_id": 151,
"label": 6,
"text": "Title: Rerepresenting and Restructuring Domain Theories: A Constructive Induction Approach \nAbstract: Theory revision integrates inductive learning and background knowledge by combining training examples with a coarse domain theory to produce a more accurate theory. There are two challenges that theory revision and other theory-guided systems face. First, a representation language appropriate for the initial theory may be inappropriate for an improved theory. While the original representation may concisely express the initial theory, a more accurate theory forced to use that same representation may be bulky, cumbersome, and difficult to reach. Second, a theory structure suitable for a coarse domain theory may be insufficient for a fine-tuned theory. Systems that produce only small, local changes to a theory have limited value for accomplishing complex structural alterations that may be required. Consequently, advanced theory-guided learning systems require flexible representation and flexible structure. An analysis of various theory revision systems and theory-guided learning systems reveals specific strengths and weaknesses in terms of these two desired properties. Designed to capture the underlying qualities of each system, a new system uses theory-guided constructive induction. Experiments in three domains show improvement over previous theory-guided systems. This leads to a study of the behavior, limitations, and potential of theory-guided constructive induction.",
"neighbors": [
360,
1528,
1595
],
"mask": "Test"
},
{
"node_id": 152,
"label": 2,
"text": "Title: Replicability of Neural Computing Experiments \nAbstract: If an experiment requires statistical analysis to establish a result, then one should do a better experiment. Ernest Rutherford, 1930 Most proponents of cold fusion reporting excess heat from their electrolysis experiments were claiming that one of the main characteristics of cold fusion was its irreproducibility | J.R. Huizenga, Cold Fusion, 1993, p. 78 Abstract Amid the ever increasing research into various aspects of neural computing, much progress is evident both from theoretical advances and from empirical studies. On the empirical side a wealth of data from experimental studies is being reported. It is, however, not clear how best to report neural computing experiments such that they may be replicated by other interested researchers. In particular, the nature of iterative learning on a randomised initial architecture, such as backpropagation training of a multilayer perceptron, is such that precise replication of a reported result is virtually impossible. The outcome is that experimental replication of reported results, a touchstone of \"the scientific method\", is not an option for researchers in this most popular subfield of neural computing. In this paper, we address this issue of replicability of experiments based on backpropagation training of multilayer perceptrons (although many of our results will be applicable to any other subfield that is plagued by the same characteristics). First, we attempt to produce a complete abstract specification of such a neural computing experiment. From this specification we identify the full range of parameters needed to support maximum replicability, and we use it to show why absolute replicability is not an option in practice. We propose a statistical framework to support replicability. We demonstrate this framework with some empirical studies of our own on both repli-cability with respect to experimental controls, and validity of implementations of the backpropagation algorithm. Finally, we suggest how the degree of replicability of a neural computing experiment can be estimated and reflected in the claimed precision for any empirical results reported. ",
"neighbors": [
15,
1383
],
"mask": "Train"
},
{
"node_id": 153,
"label": 2,
"text": "Title: Living in a partially structured environment: How to bypass the limitations of classical reinforcement techniques \nAbstract: In this paper, we propose an unsupervised neural network allowing a robot to learn sensori-motor associations with a delayed reward. The robot task is to learn the \"meaning\" of pictograms in order to \"survive\" in a maze. First, we introduce a new neural conditioning rule (PCR: Probabilistic Conditioning Rule) allowing to test hypotheses (associations between visual categories and movements) during a given time span. Second, we describe a real maze experiment with our mobile robot. We propose a neural architecture to solve this problem and we discuss the difficulty to build visual categories dynamically while associating them to movements. Third, we propose to use our algorithm on a simulation in order to test it exhaustively. We give the results for different kind of mazes and we compare our system to an adapted version of the Q-learning algorithm. Finally, we conclude by showing the limitations of approaches that do not take into account the intrinsic complexity of a reasonning based on image recognition. ",
"neighbors": [
63,
294,
747
],
"mask": "Test"
},
{
"node_id": 154,
"label": 2,
"text": "Title: Data-driven Modeling and Synthesis of Acoustical Instruments \nAbstract: We present a framework for the analysis and synthesis of acoustical instruments based on data-driven probabilistic inference modeling. Audio time series and boundary conditions of a played instrument are recorded and the non-linear mapping from the control data into the audio space is inferred using the general inference framework of Cluster-Weighted Modeling. The resulting model is used for real-time synthesis of audio sequences from new input data.",
"neighbors": [
74,
392
],
"mask": "Train"
},
{
"node_id": 155,
"label": 3,
"text": "Title: Inference in Model-Based Cluster Analysis \nAbstract: Technical Report no. 285 Department of Statistics University of Washington. March 10, 1995 ",
"neighbors": [
12,
84,
117,
513
],
"mask": "Train"
},
{
"node_id": 156,
"label": 5,
"text": "Title: Structural Regression Trees \nAbstract: In many real-world domains the task of machine learning algorithms is to learn a theory predicting numerical values. In particular several standard test domains used in Inductive Logic Programming (ILP) are concerned with predicting numerical values from examples and relational and mostly non-determinate background knowledge. However, so far no ILP algorithm except one can predict numbers and cope with non-determinate background knowledge. (The only exception is a covering algorithm called FORS.) In this paper we present Structural Regression Trees (SRT), a new algorithm which can be applied to the above class of problems by integrating the statistical method of regression trees into ILP. SRT constructs a tree containing a literal (an atomic formula or its negation) or a conjunction of literals in each node, and assigns a numerical value to each leaf. SRT provides more comprehensible results than purely statistical methods, and can be applied to a class of problems most other ILP systems cannot handle. Experiments in several real-world domains demonstrate that the approach is competitive with existing methods, indicating that the advantages are not at the expense of predictive accuracy. ",
"neighbors": [
236,
314,
431,
521
],
"mask": "Train"
},
{
"node_id": 157,
"label": 6,
"text": "Title: A Practical Bayesian Framework for Backprop Networks \nAbstract: A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible: (1) objective comparisons between solutions using alternative network architectures; (2) objective stopping rules for network pruning or growing procedures; (3) objective choice of magnitude and type of weight decay terms or additive regularisers (for penalising large weights, etc.); (4) a measure of the effective number of well-determined parameters in a model; (5) quantified estimates of the error bars on network parameters and on network output; (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian `evidence' automatically embodies `Occam's razor,' penalising over-flexible and over-complex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalisation ability and the Bayesian evidence is obtained. This paper makes use of the Bayesian framework for regularisation and model comparison described in the companion paper `Bayesian interpolation' (MacKay, 1991a). This framework is due to Gull and Skilling (Gull, 1989a). ",
"neighbors": [
78,
181,
214,
371,
393,
560,
716,
740,
766,
897,
916,
979,
1017,
1038,
1075,
1289,
1340,
1375,
1452,
1550,
1637,
1718,
2019,
2021,
2095,
2230,
2417,
2540,
2680
],
"mask": "Train"
},
{
"node_id": 158,
"label": 5,
"text": "Title: Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor \nAbstract: Simultaneous multithreading is a technique that permits multiple independent threads to issue multiple instructions each cycle. In previous work we demonstrated the performance potential of simultaneous multithreading, based on a somewhat idealized model. In this paper we show that the throughput gains from simultaneous multithreading can be achieved without extensive changes to a conventional wide-issue superscalar, either in hardware structures or sizes. We present an architecture for simultaneous multithreading that achieves three goals: (1) it minimizes the architectural impact on the conventional superscalar design, (2) it has minimal performance impact on a single thread executing alone, and (3) it achieves significant throughput gains when running multiple threads. Our simultaneous multithreading architecture achieves a throughput of 5.4 instructions per cycle, a 2.5-fold improvement over an unmodified superscalar with similar hardware resources. This speedup is enhanced by an advantage of multithreading previously unexploited in other architectures: the ability to favor for fetch and issue those threads most efficiently using the processor each cycle, thereby providing the best instructions to the processor. ",
"neighbors": [
184,
433,
598,
707
],
"mask": "Validation"
},
{
"node_id": 159,
"label": 6,
"text": "Title: Bias-Driven Revision of Logical Domain Theories \nAbstract: The theory revision problem is the problem of how best to go about revising a deficient domain theory using information contained in examples that expose inaccuracies. In this paper we present our approach to the theory revision problem for propositional domain theories. The approach described here, called PTR, uses probabilities associated with domain theory elements to numerically track the ``ow'' of proof through the theory. This allows us to measure the precise role of a clause or literal in allowing or preventing a (desired or undesired) derivation for a given example. This information is used to efficiently locate and repair awed elements of the theory. PTR is proved to converge to a theory which correctly classifies all examples, and shown experimentally to be fast and accurate even for deep theories.",
"neighbors": [
136,
985,
2066,
2172,
2543,
2674
],
"mask": "Train"
},
{
"node_id": 160,
"label": 2,
"text": "Title: EVALUATION OF GAUSSIAN PROCESSES AND OTHER METHODS FOR NON-LINEAR REGRESSION \nAbstract: The theory revision problem is the problem of how best to go about revising a deficient domain theory using information contained in examples that expose inaccuracies. In this paper we present our approach to the theory revision problem for propositional domain theories. The approach described here, called PTR, uses probabilities associated with domain theory elements to numerically track the ``ow'' of proof through the theory. This allows us to measure the precise role of a clause or literal in allowing or preventing a (desired or undesired) derivation for a given example. This information is used to efficiently locate and repair awed elements of the theory. PTR is proved to converge to a theory which correctly classifies all examples, and shown experimentally to be fast and accurate even for deep theories.",
"neighbors": [
125,
322,
469,
1857,
2540,
2681
],
"mask": "Test"
},
{
"node_id": 161,
"label": 3,
"text": "Title: On Bayesian analysis of mixtures with an unknown number of components Summary \nAbstract: New methodology for fully Bayesian mixture analysis is developed, making use of reversible jump Markov chain Monte Carlo methods, that are capable of jumping between the parameter subspaces corresponding to different numbers of components in the mixture. A sample from the full joint distribution of all unknown variables is thereby generated, and this can be used as a basis for a thorough presentation of many aspects of the posterior distribution. The methodology is applied here to the analysis of univariate normal mixtures, using a hierarchical prior model that offers an approach to dealing with weak prior information while avoiding the mathematical pitfalls of using improper priors in the mixture context.",
"neighbors": [
95,
684,
713,
759,
996,
1147,
2311
],
"mask": "Train"
},
{
"node_id": 162,
"label": 4,
"text": "Title: Analysis of Some Incremental Variants of Policy Iteration: First Steps Toward Understanding Actor-Critic Learning Systems \nAbstract: Northeastern University College of Computer Science Technical Report NU-CCS-93-11 fl We gratefully acknowledge the substantial contributions to this effort provided by Andy Barto, who sparked our original interest in these questions and whose continued encouragement and insightful comments and criticisms have helped us greatly. Recent discussions with Satinder Singh and Vijay Gullapalli have also had a helpful impact on this work. Special thanks also to Rich Sutton, who has influenced our thinking on this subject in numerous ways. This work was supported by Grant IRI-8921275 from the National Science Foundation and by the U. S. Air Force. ",
"neighbors": [
173,
775
],
"mask": "Train"
},
{
"node_id": 163,
"label": 1,
"text": "Title: 4 Implementing Application Specific Routines Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley. \nAbstract: To implement a specific application, you should only have to change the file app.c. Section 2 describes the routines in app.c in detail. If you use additional variables for your specific problem, the easiest method of making them available to other program units is to declare them in sga.h and external.h. However, take care that you do not redeclare existing variables. Two example applications files are included in the SGA-C distribution. The file app1.c performs the simple example problem included with the Pascal version; finding the maximum of x 10 , where x is an integer interpretation of a chromosome. A slightly more complex application is include in app2.c. This application illustrates two features that have been added to SGA-C. The first of these is the ithruj2int function, which converts bits i through j in a chromosome to an integer. The second new feature is the utility pointer that is associated with each population member. The example application interprets each chromosome as a set of concatenated integers in binary form. The lengths of these integer fields is determined by the user-specified value of field size, which is read in by the function app data(). The field size must be less than the smallest of the chromosome length and the length of an unsigned integer. An integer array for storing the interpreted form of each chromosome is dynamically allocated and assigned to the chromosome's utility pointer in app malloc(). The ithruj2int routine (see utility.c) is used to translate each chromosome into its associated vector. The fitness for each chromosome is simply the sum of the squares of these integers. This example application will function for any chromosome length. SGA-C is intended to be a simple program for first-time GA experimentation. It is not intended to be definitive in terms of its efficiency or the grace of its implementation. The authors are interested in the comments, criticisms, and bug reports from SGA-C users, so that the code can be refined for easier use in subsequent versions. Please email your comments to rob@galab2.mh.ua.edu, or write to TCGA: The authors gratefully acknowledge support provided by NASA under Grant NGT-50224 and support provided by the National Science Foundation under Grant CTS-8451610. We also thank Hillol Kargupta for donating his tournament selection implementation. Booker, L. B. (1982). Intelligent behavior as an adaptation to the task environment (Doctoral dissertation, Technical Report No. 243. Ann Arbor: University of Michigan, Logic of Computers Group). Dissertations Abstracts International, 43(2), 469B. (University Microfilms No. 8214966) ",
"neighbors": [
22,
42,
55,
129,
141,
145,
174,
188,
189,
191,
219,
237,
266,
290,
309,
346,
380,
390,
395,
402,
415,
422,
448,
523,
530,
546,
563,
602,
606,
624,
658,
659,
689,
714,
717,
727,
743,
744,
757,
765,
769,
781,
793,
800,
813,
856,
910,
935,
940,
942,
961,
966,
982,
1030,
1060,
1065,
1069,
1070,
1098,
1099,
1106,
1110,
1113,
1114,
1127,
1130,
1136,
1139,
1153,
1159,
1178,
1205,
1206,
1207,
1218,
1219,
1224,
1225,
1253,
1257,
1274,
1277,
1286,
1303,
1305,
1333,
1334,
1362,
1379,
1380,
1404,
1410,
1457,
1467,
1498,
1515,
1530,
1536,
1544,
1558,
1571,
1572,
1573,
1575,
1590,
1594,
1611,
1650,
1675,
1685,
1689,
1691,
1696,
1715,
1717,
1728,
1729,
1734,
1775,
1784,
1792,
1799,
1807,
1831,
1850,
1872,
1890,
1905,
1943,
1971,
2039,
2077,
2116,
2173,
2175,
2177,
2196,
2200,
2202,
2204,
2232,
2248,
2251,
2259,
2265,
2274,
2280,
2286,
2295,
2296,
2298,
2316,
2361,
2363,
2396,
2451,
2518,
2521,
2554,
2557,
2563,
2564,
2600,
2604,
2638,
2659,
2667,
2673
],
"mask": "Test"
},
{
"node_id": 164,
"label": 6,
"text": "Title: Improving Generalization with Active Learning \nAbstract: Active learning differs from passive \"learning from examples\" in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful that learning from examples alone, giving better generalization for a fixed number of training examples. In this paper, we consider the problem of learning a binary concept in the absence of noise (Valiant 1984). We describe a formalism for active concept learning called selective sampling, and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers \"useful.\" We test our implementation, called an SG-network, on three domains, and observe significant improvement in generalization.",
"neighbors": [
517,
740
],
"mask": "Test"
},
{
"node_id": 165,
"label": 5,
"text": "Title: d d The Effects of Predicated Execution on Branch Prediction \nAbstract: This paper analyzes a variety of existing predication models for eliminating branch operations, and the effect that this elimination has on the branch prediction schemes in existing processors, including single issue architectures with simple prediction mechanisms, to the newer multi-issue designs with correspondingly more sophisticated branch predictors. The effect on branch prediction accuracy, branch penalty and basic block size is studied. ",
"neighbors": [
432
],
"mask": "Validation"
},
{
"node_id": 166,
"label": 0,
"text": "Title: Rules and Precedents as Complementary Warrants Complementarity of Rules and Precedents for Classification In a\nAbstract: This paper describes a model of the complementarity of rules and precedents in the classification task. Under this model, precedents assist rule-based reasoning by operationalizing abstract rule antecedents. Conversely, rules assist case-based reasoning through case elaboration, the process of inferring case facts in order to increase the similarity between cases, and term reformulation, the process of replacing a term whose precedents only weakly match a case with terms whose precedents strongly match the case. Fully exploiting this complementarity requires a control strategy characterized by impartiality, the absence of arbitrary ordering restrictions on the use of rules and precedents. An impartial control strategy was implemented in GREBE in the domain of Texas worker's compensation law. In a preliminary evaluation, GREBE's performance was found to be as good or slightly better than the performance of law students on the same task. A case is classified as belonging to a particular category by relating its description to the criteria for category membership. The justifications, or warrants [Toulmin, 1958], that can relate a case to a category, can vary widely in the generality of their antecedents. For example, consider warrants for classifying a case into the legal category \"negligence.\" A rule, such as \"An action is negligent if the actor fails to use reasonable care and the failure is the proximate cause of an injury,\" has very general antecedent terms (e.g., \"breach of reasonable care\"). Conversely, a precedent, such as \"Dr. Jones was negligent because he failed to count sponges during surgery and as a result left a sponge in Smith,\" has very specific antecedent terms (e.g., \"failure to count sponges\"). Both types of warrants have been used by classification systems to relate cases to categories. Classification systems have used precedents to help match the antecedents of rules with cases. Completing this match is difficult when the terms in the antecedent are open-textured, i.e., when there is significant uncertainty whether they match specific facts [Gardner, 1984, McCarty and Sridharan, 1982]. This problem results from the \"generality gap\" separating abstract terms from specific facts [Porter et al., 1990]. Precedents of an open-textured term, i.e., past cases to which the term applied, can be used to bridge this gap. Unlike rule antecedents, the antecedents of precedents are at the same level of generality as cases, so no generality gap exists between precedents and new cases. Precedents therefore reduce the problem of matching specific case facts with open-textured terms to the problem of matching two sets of specific facts. For example, an injured employee's entitlement to worker's compensation depends on whether he was injured during an activity \"in furtherance of employment.\" Determining whether any particular case should be classified as a compensable injury therefore requires matching the specific facts of the case (e.g., John was injured in an automobile accident while driving to his office) to the open-textured term \"activity in furtherance of employment.\" The gap in generality between the case description and the abstract term makes this match problematical. 
However, completing this match may be much easier if there are precedents of the term \"activity in furtherance of employment\" (e.g., Mary's injury was not compensable because it occurred while she was driving to work, which is not an activity in furtherance of employment; Bill's injury was compensable because it occurred while he was driving to a house to deliver a pizza, an activity in furtherance of employment). In this case, John's driving to his office closely matches Mary's driving to work, so ",
"neighbors": [
457,
649,
1125
],
"mask": "Test"
},
{
"node_id": 167,
"label": 4,
"text": "Title: Auto-exploratory Average Reward Reinforcement Learning \nAbstract: We introduce a model-based average reward Reinforcement Learning method called H-learning and compare it with its discounted counterpart, Adaptive Real-Time Dynamic Programming, in a simulated robot scheduling task. We also introduce an extension to H-learning, which automatically explores the unexplored parts of the state space, while always choosing greedy actions with respect to the current value function. We show that this \"Auto-exploratory H-learning\" performs better than the original H-learning under previously studied exploration methods such as random, recency-based, or counter-based exploration. ",
"neighbors": [
552,
554,
559,
1459
],
"mask": "Validation"
},
{
"node_id": 168,
"label": 1,
"text": "Title: Dynamic Control of Genetic Algorithms using Fuzzy Logic Techniques \nAbstract: This paper proposes using fuzzy logic techniques to dynamically control parameter settings of genetic algorithms (GAs). We describe the Dynamic Parametric GA: a GA that uses a fuzzy knowledge-based system to control GA parameters. We then introduce a technique for automatically designing and tuning the fuzzy knowledge-base system using GAs. Results from initial experiments show a performance improvement over a simple static GA. One Dynamic Parametric GA system designed by our automatic method demonstrated improvement on an application not included in the design phase, which may indicate the general applicability of the Dynamic Parametric GA to a wide range of ap plications.",
"neighbors": [
475,
1728,
1756,
2604
],
"mask": "Test"
},
{
"node_id": 169,
"label": 2,
"text": "Title: LEARNING LINEAR, SPARSE, FACTORIAL CODES \nAbstract: In previous work (Olshausen & Field 1996), an algorithm was described for learning linear sparse codes which, when trained on natural images, produces a set of basis functions that are spatially localized, oriented, and bandpass (i.e., wavelet-like). This note shows how the algorithm may be interpreted within a maximum-likelihood framework. Several useful insights emerge from this connection: it makes explicit the relation to statistical independence (i.e., factorial coding), it shows a formal relationship to the algorithm of Bell and Sejnowski (1995), and it suggests how to adapt parameters that were previously fixed. This report describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. This research is sponsored by an Individual National Research Service Award to B.A.O. (NIMH F32-MH11062) and by a grant from the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program) to CBCL. ",
"neighbors": [
212,
570
],
"mask": "Train"
},
{
"node_id": 170,
"label": 3,
"text": "Title: Large Deviation Methods for Approximate Probabilistic Inference, with Rates of Convergence a free parameter. The\nAbstract: We study layered belief networks of binary random variables in which the conditional probabilities Pr[childjparents] depend monotonically on weighted sums of the parents. For these networks, we give efficient algorithms for computing rigorous bounds on the marginal probabilities of evidence at the output layer. Our methods apply generally to the computation of both upper and lower bounds, as well as to generic transfer function parameterizations of the conditional probability tables (such as sigmoid and noisy-OR). We also prove rates of convergence of the accuracy of our bounds as a function of network size. Our results are derived by applying the theory of large deviations to the weighted sums of parents at each node in the network. Bounds on the marginal probabilities are computed from two contributions: one assuming that these weighted sums fall near their mean values, and the other assuming that they do not. This gives rise to an interesting trade-off between probable explanations of the evidence and improbable deviations from the mean. In networks where each child has N parents, the gap between our upper and lower bounds behaves as a sum of two terms, one of order p In addition to providing such rates of convergence for large networks, our methods also yield efficient algorithms for approximate inference in fixed networks. ",
"neighbors": [
4,
250
],
"mask": "Train"
},
{
"node_id": 171,
"label": 6,
"text": "Title: Characterizations of Learnability for Classes of f0; ng-valued Functions \nAbstract: We study layered belief networks of binary random variables in which the conditional probabilities Pr[childjparents] depend monotonically on weighted sums of the parents. For these networks, we give efficient algorithms for computing rigorous bounds on the marginal probabilities of evidence at the output layer. Our methods apply generally to the computation of both upper and lower bounds, as well as to generic transfer function parameterizations of the conditional probability tables (such as sigmoid and noisy-OR). We also prove rates of convergence of the accuracy of our bounds as a function of network size. Our results are derived by applying the theory of large deviations to the weighted sums of parents at each node in the network. Bounds on the marginal probabilities are computed from two contributions: one assuming that these weighted sums fall near their mean values, and the other assuming that they do not. This gives rise to an interesting trade-off between probable explanations of the evidence and improbable deviations from the mean. In networks where each child has N parents, the gap between our upper and lower bounds behaves as a sum of two terms, one of order p In addition to providing such rates of convergence for large networks, our methods also yield efficient algorithms for approximate inference in fixed networks. ",
"neighbors": [
109,
114
],
"mask": "Train"
},
{
"node_id": 172,
"label": 0,
"text": "Title: Efficient Feature Selection in Conceptual Clustering \nAbstract: Feature selection has proven to be a valuable technique in supervised learning for improving predictive accuracy while reducing the number of attributes considered in a task. We investigate the potential for similar benefits in an unsupervised learning task, conceptual clustering. The issues raised in feature selection by the absence of class labels are discussed and an implementation of a sequential feature selection algorithm based on an existing conceptual clustering system is described. Additionally, we present a second implementation which employs a technique for improving the efficiency of the search for an optimal description and compare the performance of both algorithms.",
"neighbors": [
245,
430,
635
],
"mask": "Train"
},
{
"node_id": 173,
"label": 4,
"text": "Title: An Upper Bound on the Loss from Approximate Optimal-Value Functions \nAbstract: Many reinforcement learning (RL) approaches can be formulated from the theory of Markov decision processes and the associated method of dynamic programming (DP). The value of this theoretical understanding, however, is tempered by many practical concerns. One important question is whether DP-based approaches that use function approximation rather than lookup tables, can avoid catastrophic effects on performance. This note presents a result in Bertsekas (1987) which guarantees that small errors in the approximation of a task's optimal value function cannot produce arbitrarily bad performance when actions are selected greedily. We derive an upper bound on performance loss which is slightly tighter than that in Bertsekas (1987), and we show the extension of the bound to Q-learning (Watkins, 1989). These results provide a theoretical justification for a practice that is common in reinforcement learning. ",
"neighbors": [
162,
294,
552,
565,
566,
575,
1378,
2485
],
"mask": "Train"
},
{
"node_id": 174,
"label": 2,
"text": "Title: Symbolic and Subsymbolic Learning for Vision: Some Possibilities \nAbstract: Robust, flexible and sufficiently general vision systems such as those for recognition and description of complex 3-dimensional objects require an adequate armamentarium of representations and learning mechanisms. This paper briefly analyzes the strengths and weaknesses of different learning paradigms such as symbol processing systems, connectionist networks, and statistical and syntactic pattern recognition systems as possible candidates for providing such capabilities and points out several promising directions for integrating multiple such paradigms in a synergistic fashion towards that goal. ",
"neighbors": [
163,
501,
503,
2409
],
"mask": "Test"
},
{
"node_id": 175,
"label": 2,
"text": "Title: SARDNET: A Self-Organizing Feature Map for Sequences \nAbstract: A self-organizing neural network for sequence classification called SARDNET is described and analyzed experimentally. SARDNET extends the Kohonen Feature Map architecture with activation retention and decay in order to create unique distributed response patterns for different sequences. SARDNET yields extremely dense yet descriptive representations of sequential input in very few training iterations. The network has proven successful on mapping arbitrary sequences of binary and real numbers, as well as phonemic representations of English words. Potential applications include isolated spoken word recognition and cognitive science models of sequence processing.",
"neighbors": [
747
],
"mask": "Train"
},
{
"node_id": 176,
"label": 5,
"text": "Title: Knowledge Integration and Learning \nAbstract: LIACC - Technical Report 91-1 Abstract. In this paper we address the problem of acquiring knowledge by integration . Our aim is to construct an integrated knowledge base from several separate sources. The objective of integration is to construct one system that exploits all the knowledge that is available and has good performance. The aim of this paper is to discuss the methodology of knowledge integration and present some concrete results. In our experiments the performance of the integrated theory exceeded the performance of the individual theories by quite a significant amount. Also, the performance did not fluctuate much when the experiments were repeated. These results indicate knowledge integration can complement other existing ML methods. ",
"neighbors": [
379,
756
],
"mask": "Test"
},
{
"node_id": 177,
"label": 6,
"text": "Title: Evaluation and Selection of Biases in Machine Learning \nAbstract: In this introduction, we define the term bias as it is used in machine learning systems. We motivate the importance of automated methods for evaluating and selecting biases using a framework of bias selection as search in bias and meta-bias spaces. Recent research in the field of machine learning bias is summarized. ",
"neighbors": [
430,
635,
885,
911,
965,
1489,
1959,
2192
],
"mask": "Train"
},
{
"node_id": 178,
"label": 5,
"text": "Title: Learning Decision Trees from Decision Rules: \nAbstract: A method and initial results from a comparative study ABSTRACT A standard approach to determining decision trees is to learn them from examples. A disadvantage of this approach is that once a decision tree is learned, it is difficult to modify it to suit different decision making situations. Such problems arise, for example, when an attribute assigned to some node cannot be measured, or there is a significant change in the costs of measuring attributes or in the frequency distribution of events from different decision classes. An attractive approach to resolving this problem is to learn and store knowledge in the form of decision rules, and to generate from them, whenever needed, a decision tree that is most suitable in a given situation. An additional advantage of such an approach is that it facilitates building compact decision trees , which can be much simpler than the logically equivalent conventional decision trees (by compact trees are meant decision trees that may contain branches assigned a set of values , and nodes assigned derived attributes, i.e., attributes that are logical or mathematical functions of the original ones). The paper describes an efficient method, AQDT-1, that takes decision rules generated by an AQ-type learning system (AQ15 or AQ17), and builds from them a decision tree optimizing a given optimality criterion. The method can work in two modes: the standard mode , which produces conventional decision trees, and compact mode, which produces compact decision trees. The preliminary experiments with AQDT-1 have shown that the decision trees generated by it from decision rules (conventional and compact) have outperformed those generated from examples by the well-known C4.5 program both in terms of their simplicity and their predictive accuracy. ",
"neighbors": [
286,
378
],
"mask": "Train"
},
{
"node_id": 179,
"label": 2,
"text": "Title: for Projective Basis Function Networks 2m1 Global Form 2m Local Form With appropriate constant factors,\nAbstract: OGI CSE Technical Report 96-006 Abstract: Smoothing regularizers for radial basis functions have been studied extensively, but no general smoothing regularizers for projective basis functions (PBFs), such as the widely-used sigmoidal PBFs, have heretofore been proposed. We derive new classes of algebraically-simple m th -order smoothing regularizers for networks of projective basis functions f (W; x) = P N fi fl + u 0 ; with general transfer functions g[]. These simple algebraic forms R(W; m) enable the direct enforcement of smoothness without the need for costly Monte Carlo integrations of S(W; m). The regularizers are tested on illustrative sample problems and compared to quadratic weight decay. The new regularizers are shown to yield better generalization errors than ",
"neighbors": [
331,
608,
611
],
"mask": "Train"
},
{
"node_id": 180,
"label": 2,
"text": "Title: REDUCED MEMORY REPRESENTATIONS FOR MUSIC \nAbstract: We address the problem of musical variation (identification of different musical sequences as variations) and its implications for mental representations of music. According to reductionist theories, listeners judge the structural importance of musical events while forming mental representations. These judgments may result from the production of reduced memory representations that retain only the musical gist. In a study of improvised music performance, pianists produced variations on melodies. Analyses of the musical events retained across variations provided support for the reductionist account of structural importance. A neural network trained to produce reduced memory representations for the same melodies represented structurally important events more efficiently than others. Agreement among the musicians' improvisations, the network model, and music-theoretic predictions suggest that perceived constancy across musical variation is a natural result of a reductionist mechanism for producing memory representations. ",
"neighbors": [
143,
350
],
"mask": "Test"
},
{
"node_id": 181,
"label": 6,
"text": "Title: Ensemble Learning and Evidence Maximization \nAbstract: Ensemble learning by variational free energy minimization is a tool introduced to neural networks by Hinton and van Camp in which learning is described in terms of the optimization of an ensemble of parameter vectors. The optimized ensemble is an approximation to the posterior probability distribution of the parameters. This tool has now been applied to a variety of statistical inference problems. In this paper I study a linear regression model with both parameters and hyper-parameters. I demonstrate that the evidence approximation for the optimization of regularization constants can be derived in detail from a free energy minimization viewpoint. ",
"neighbors": [
76,
157,
518,
662,
766
],
"mask": "Validation"
},
{
"node_id": 182,
"label": 3,
"text": "Title: Adaptation for Self Regenerative MCMC SUMMARY \nAbstract: The self regenerative MCMC is a tool for constructing a Markov chain with a given stationary distribution by constructing an auxiliary chain with some other stationary distribution . Elements of the auxiliary chain are picked a suitable random number of times so that the resulting chain has the stationary distribution , Sahu and Zhigljavsky (1998). In this article we provide a generic adaptation scheme for the above algorithm. The adaptive scheme is to use the knowledge of the stationary distribution gathered so far and then to update during the course of the simulation. This method is easy to implement and often leads to considerable improvement. We obtain theoretical results for the adaptive scheme. Our proposed methodology is illustrated with a number of realistic examples in Bayesian computation and its performance is compared with other available MCMC techniques. In one of our applications we develop a non-linear dynamics model for modeling predator-prey relationships in the wild. ",
"neighbors": [
468,
491
],
"mask": "Train"
},
{
"node_id": 183,
"label": 0,
"text": "Title: Conceptual Analogy \nAbstract: Conceptual analogy (CA) is an approach that integrates conceptualization, i.e., memory organization based on prior experiences and analogical reasoning (Borner 1994a). It was implemented prototypically and tested to support the design process in building engineering (Borner and Janetzko 1995, Borner 1995). There are a number of features that distinguish CA from standard approaches to CBR and AR. First of all, CA automatically extracts the knowledge needed to support design tasks (i.e., complex case representations, the relevance of object features and relations, and proper adaptations) from attribute-value representations of prior layouts. Secondly, it effectively determines the similarity of complex case representations in terms of adaptability. Thirdly, implemented and integrated into a highly interactive and adaptive system architecture it allows for incremental knowledge acquisition and user support. This paper surveys the basic assumptions and the psychological results which influenced the development of CA. It sketches the knowledge representation formalisms employed and characterizes the sub-processes needed to integrate memory organization and analogical reasoning. ",
"neighbors": [
66,
454,
539,
541
],
"mask": "Validation"
},
{
"node_id": 184,
"label": 5,
"text": "Title: Multipath Execution: Opportunities and Limits \nAbstract: Even sophisticated branch-prediction techniques necessarily suffer some mispredictions, and even relatively small mispredict rates hurt performance substantially in current-generation processors. In this paper, we investigate schemes for improving performance in the face of imperfect branch predictors by having the processor simultaneously execute code from both the taken and not-taken outcomes of a branch. This paper presents data regarding the limits of multipath execution, considers fetch-bandwidth needs for multipath execution, and discusses various dynamic confidence-prediction schemes that gauge the likelihood of branch mispredictions. Our evaluations consider executing along several (28) paths at once. Using 4 paths and a relatively simple confidence predictor, multipath execution garners speedups of up to 30% compared to the single-path case, with an average speedup of 14.4% for the SPECint suite. While associated increases in instruction-fetch-bandwidth requirements are not too surprising, a less expected result is the significance of having a separate return-address stack for each forked path. Overall, our results indicate that multipath execution offers significant improvements over single-path performance, and could be especially useful when combined with multithreading so that hardware costs can be amortized over both approaches. ",
"neighbors": [
158,
428,
432,
433
],
"mask": "Train"
},
{
"node_id": 185,
"label": 3,
"text": "Title: Robustness Analysis of Bayesian Networks with Global Neighborhoods \nAbstract: This paper presents algorithms for robustness analysis of Bayesian networks with global neighborhoods. Robust Bayesian inference is the calculation of bounds on posterior values given perturbations in a probabilistic model. We present algorithms for robust inference (including expected utility, expected value and variance bounds) with global perturbations that can be modeled by *-contaminated, constant density ratio, constant density bounded and total variation classes of distributions. c fl1996 Carnegie Mellon University",
"neighbors": [
324,
332,
389
],
"mask": "Train"
},
{
"node_id": 186,
"label": 4,
"text": "Title: Adaptive state space quantisation: adding and removing neurons \nAbstract: This paper describes a self-learning control system for a mobile robot. Based on local sensor data, a robot is taught to avoid collisions with obstacles. The only feedback to the control system is a binary-valued external reinforcement signal, which indicates whether or not a collision has occured. A reinforcement learning scheme is used to find a correct mapping from input (sensor) space to output (steering signal) space. An adaptive quantisation scheme is introduced, through which the discrete division of input space is built up from scratch by the system itself. ",
"neighbors": [
294,
566,
588,
747
],
"mask": "Train"
},
{
"node_id": 187,
"label": 2,
"text": "Title: Evaluation and Ordering of Rules Extracted from Feedforward Networks \nAbstract: Rules extracted from trained feedforward networks can be used for explanation, validation, and cross-referencing of network output decisions. This paper introduces a rule evaluation and ordering mechanism that orders rules extracted from feedforward networks based on three performance measures. Detailed experiments using three rule extraction techniques as applied to the Wisconsin breast cancer database, illustrate the power of the proposed methods. Moreover, a method of integrating the output decisions of both the extracted rule-based system and the corresponding trained network is proposed. The integrated system provides further improvements. ",
"neighbors": [
462
],
"mask": "Train"
},
{
"node_id": 188,
"label": 1,
"text": "Title: Coevolving High-Level Representations \nAbstract: Rules extracted from trained feedforward networks can be used for explanation, validation, and cross-referencing of network output decisions. This paper introduces a rule evaluation and ordering mechanism that orders rules extracted from feedforward networks based on three performance measures. Detailed experiments using three rule extraction techniques as applied to the Wisconsin breast cancer database, illustrate the power of the proposed methods. Moreover, a method of integrating the output decisions of both the extracted rule-based system and the corresponding trained network is proposed. The integrated system provides further improvements. ",
"neighbors": [
42,
120,
129,
139,
141,
144,
163,
189,
262,
380,
415,
523,
717,
721,
755,
757
],
"mask": "Validation"
},
{
"node_id": 189,
"label": 1,
"text": "Title: An Evolutionary Algorithm that Constructs Recurrent Neural Networks \nAbstract: Standard methods for inducing both the structure and weight values of recurrent neural networks fit an assumed class of architectures to every task. This simplification is necessary because the interactions between network structure and function are not well understood. Evolutionary computation, which includes genetic algorithms and evolutionary programming, is a population-based search method that has shown promise in such complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. This algorithms empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods. ",
"neighbors": [
42,
163,
188,
395,
2102,
2664
],
"mask": "Train"
},
{
"node_id": 190,
"label": 2,
"text": "Title: Spline Smoothing For Bivariate Data With Applications To Association Between Hormones \nAbstract: Standard methods for inducing both the structure and weight values of recurrent neural networks fit an assumed class of architectures to every task. This simplification is necessary because the interactions between network structure and function are not well understood. Evolutionary computation, which includes genetic algorithms and evolutionary programming, is a population-based search method that has shown promise in such complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. This algorithms empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods. ",
"neighbors": [
356,
519,
2223
],
"mask": "Train"
},
{
"node_id": 191,
"label": 1,
"text": "Title: USING MARKER-BASED GENETIC ENCODING OF NEURAL NETWORKS TO EVOLVE FINITE-STATE BEHAVIOUR \nAbstract: A new mechanism for genetic encoding of neural networks is proposed, which is loosely based on the marker structure of biological DNA. The mechanism allows all aspects of the network structure, including the number of nodes and their connectivity, to be evolved through genetic algorithms. The effectiveness of the encoding scheme is demonstrated in an object recognition task that requires artificial creatures (whose behaviour is driven by a neural network) to develop high-level finite-state exploration and discrimination strategies. The task requires solving the sensory-motor grounding problem, i.e. developing a functional understanding of the effects that a creature's movement has on its sensory input. ",
"neighbors": [
22,
163,
294,
395,
448
],
"mask": "Train"
},
{
"node_id": 192,
"label": 2,
"text": "Title: Smoothing Spline ANOVA with Component-Wise Bayesian \"Confidence Intervals\" To Appear, J. Computational and Graphical Statistics \nAbstract: We study a multivariate smoothing spline estimate of a function of several variables, based on an ANOVA decomposition as sums of main effect functions (of one variable), two-factor interaction functions (of two variables), etc. We derive the Bayesian \"confidence intervals\" for the components of this decomposition and demonstrate that, even with multiple smoothing parameters, they can be efficiently computed using the publicly available code RKPACK, which was originally designed just to compute the estimates. We carry out a small Monte Carlo study to see how closely the actual properties of these component-wise confidence intervals match their nominal confidence levels. Lastly, we analyze some lake acidity data as a function of calcium concentration, latitude, and longitude, using both polynomial and thin plate spline main effects in the same model. ",
"neighbors": [
10,
193,
280,
420,
439,
510,
519,
705
],
"mask": "Train"
},
{
"node_id": 193,
"label": 2,
"text": "Title: Soft Classification, a.k.a. Risk Estimation, via Penalized Log Likelihood and Smoothing Spline Analysis of Variance \nAbstract: We study a multivariate smoothing spline estimate of a function of several variables, based on an ANOVA decomposition as sums of main effect functions (of one variable), two-factor interaction functions (of two variables), etc. We derive the Bayesian \"confidence intervals\" for the components of this decomposition and demonstrate that, even with multiple smoothing parameters, they can be efficiently computed using the publicly available code RKPACK, which was originally designed just to compute the estimates. We carry out a small Monte Carlo study to see how closely the actual properties of these component-wise confidence intervals match their nominal confidence levels. Lastly, we analyze some lake acidity data as a function of calcium concentration, latitude, and longitude, using both polynomial and thin plate spline main effects in the same model. ",
"neighbors": [
10,
74,
192,
238,
280,
420,
519,
2549
],
"mask": "Validation"
},
{
"node_id": 194,
"label": 2,
"text": "Title: Improving the Quality of Automatic DNA Sequence Assembly using Fluorescent Trace-Data Classifications \nAbstract: Virtually all large-scale sequencing projects use automatic sequence-assembly programs to aid in the determination of DNA sequences. The computer-generated assemblies require substantial handediting to transform them into submissions for GenBank. As the size of sequencing projects increases, it becomes essential to improve the quality of the automated assemblies so that this time-consuming handediting may be reduced. Current ABI sequencing technology uses base calls made from fluorescently-labeled DNA fragments run on gels. We present a new representation for the fluorescent trace data associated with individual base calls. This representation can be used before, during, and after fragment assembly to improve the quality of assemblies. We demonstrate one such use end-trimming of suboptimal data that results in a significant improvement in the quality of subsequent assemblies. ",
"neighbors": [
673
],
"mask": "Train"
},
{
"node_id": 195,
"label": 5,
"text": "Title: d d Techniques for Extracting Instruction Level Parallelism on MIMD Architectures \nAbstract: Extensive research has been done on extracting parallelism from single instruction stream processors. This paper presents some results of our investigation into ways to modify MIMD architectures to allow them to extract the instruction level parallelism achieved by current superscalar and VLIW machines. A new architecture is proposed which utilizes the advantages of a multiple instruction stream design while addressing some of the limitations that have prevented MIMD architectures from performing ILP operation. A new code scheduling mechanism is described to support this new architecture by partitioning instructions across multiple processing elements in order to exploit this level of parallelism. ",
"neighbors": [
196,
707,
735
],
"mask": "Train"
},
{
"node_id": 196,
"label": 5,
"text": "Title: d d MISC: A Multiple Instruction Stream Computer \nAbstract: This paper describes a single chip Multiple Instruction Stream Computer (MISC) capable of extracting instruction level parallelism from a broad spectrum of programs. The MISC architecture uses multiple asynchronous processing elements to separate a program into streams that can be executed in parallel, and integrates a conflict-free message passing system into the lowest level of the processor design to facilitate low latency intra-MISC communication. This approach allows for increased machine parallelism with minimal code expansion, and provides an alternative approach to single instruction stream multi-issue machines such as SuperScalar and VLIW. ",
"neighbors": [
195,
216,
707
],
"mask": "Train"
},
{
"node_id": 197,
"label": 4,
"text": "Title: Optimal Navigation in a Probibalistic World \nAbstract: In this paper, we define and examine two versions of the bridge problem. The first variant of the bridge problem is a determistic model where the agent knows a superset of the transitions and a priori probabilities that those transitions are intact. In the second variant, transitions can break or be fixed with some probability at each time step. These problems are applicable to planning in uncertain domains as well as packet routing in a computer network. We show how an agent can act optimally in these models by reduction to Markov decision processes. We describe methods of solving them but note that these methods are intractable for reasonably sized problems. Finally, we suggest neuro-dynamic programming as a method of value function approximation for these types of models.",
"neighbors": [
3,
295,
633,
749
],
"mask": "Test"
},
{
"node_id": 198,
"label": 2,
"text": "Title: EEG Signal Classification with Different Signal Representations for a large number of hidden units. \nAbstract: If several mental states can be reliably distinguished by recognizing patterns in EEG, then a paralyzed person could communicate to a device like a wheelchair by composing sequencesof these mental states. In this article, we report on a study comparing four representations of EEG signals and their classification by a two-layer neural network with sigmoid activation functions. The neural network is implemented on a CNAPS server (128 processor, SIMD architecture) by Adaptive Solutions, Inc., gaining a 100-fold decrease in training time over a Sun ",
"neighbors": [
655,
747
],
"mask": "Train"
},
{
"node_id": 199,
"label": 6,
"text": "Title: On Learning Conjunctions with Malicious Noise \nAbstract: We show how to learn monomials in the presence of malicious noise, when the underlined distribution is a product distribution. We show that our results apply not only to product distributions but to a wide class of distributions. ",
"neighbors": [
591,
640
],
"mask": "Test"
},
{
"node_id": 200,
"label": 2,
"text": "Title: Sample Complexity for Learning Recurrent Perceptron Mappings \nAbstract: Recurrent perceptron classifiers generalize the classical perceptron model. They take into account those correlations and dependences among input coordinates which arise from linear digital filtering. This paper provides tight bounds on sample complexity associated to the fitting of such models to experimental data. ",
"neighbors": [
536,
1464,
1891
],
"mask": "Train"
},
{
"node_id": 201,
"label": 2,
"text": "Title: Neural net architectures for temporal sequence processing \nAbstract: I present a general taxonomy of neural net architectures for processing time-varying patterns. This taxonomy subsumes many existing architectures in the literature, and points to several promising architectures that have yet to be examined. Any architecture that processes time-varying patterns requires two conceptually distinct components: a short-term memory that holds on to relevant past events and an associator that uses the short-term memory to classify or predict. My taxonomy is based on a characterization of short-term memory models along the dimensions of form, content, and adaptability. Experiments on predicting future values of a financial time series (US dollar-Swiss franc exchange rates) are presented using several alternative memory models. The results of these experiments serve as a baseline against which more sophisticated architectures can be compared. Neural networks have proven to be a promising alternative to traditional techniques for nonlinear temporal prediction tasks (e.g., Curtiss, Brandemuehl, & Kreider, 1992; Lapedes & Farber, 1987; Weigend, Huberman, & Rumelhart, 1992). However, temporal prediction is a particularly challenging problem because conventional neural net architectures and algorithms are not well suited for patterns that vary over time. The prototypical use of neural nets is in structural pattern recognition. In such a task, a collection of features|visual, semantic, or otherwise|is presented to a network and the network must categorize the input feature pattern as belonging to one or more classes. For example, a network might be trained to classify animal species based on a set of attributes describing living creatures such as \"has tail\", \"lives in water\", or \"is carnivorous\"; or a network could be trained to recognize visual patterns over a two-dimensional pixel array as a letter in fA; B; . . . ; Zg. In such tasks, the network is presented with all relevant information simultaneously. In contrast, temporal pattern recognition involves processing of patterns that evolve over time. The appropriate response at a particular point in time depends not only on the current input, but potentially all previous inputs. This is illustrated in Figure 1, which shows the basic framework for a temporal prediction problem. I assume that time is quantized into discrete steps, a sensible assumption because many time series of interest are intrinsically discrete, and continuous series can be sampled at a fixed interval. The input at time t is denoted x(t). For univariate series, this input ",
"neighbors": [
143,
350,
427,
1718,
1990
],
"mask": "Test"
},
{
"node_id": 202,
"label": 2,
"text": "Title: Dyslexic and Category-Specific Aphasic Impairments in a Self-Organizing Feature Map Model of the Lexicon \nAbstract: DISLEX is an artificial neural network model of the mental lexicon. It was built to test com-putationally whether the lexicon could consist of separate feature maps for the different lexical modalities and the lexical semantics, connected with ordered pathways. In the model, the orthographic, phonological, and semantic feature maps and the associations between them are formed in an unsupervised process, based on cooccurrence of the lexical symbol and its meaning. After the model is organized, various damage to the lexical system can be simulated, resulting in dyslexic and category-specific aphasic impairments similar to those observed in human patients. ",
"neighbors": [
72,
204,
427,
747,
771
],
"mask": "Train"
},
{
"node_id": 203,
"label": 2,
"text": "Title: Theory of Synaptic Plasticity in Visual Cortex \nAbstract: ",
"neighbors": [
359,
747,
2499
],
"mask": "Train"
},
{
"node_id": 204,
"label": 2,
"text": "Title: Natural Language Processing with Subsymbolic Neural Networks \nAbstract: ",
"neighbors": [
202,
274,
597,
741,
747,
1645,
1811,
2410,
2650
],
"mask": "Train"
},
{
"node_id": 205,
"label": 2,
"text": "Title: Beyond the Cognitive Map: Contributions to a Computational Neuroscience Theory of Rodent Navigation for the\nAbstract: ",
"neighbors": [
427,
600,
745,
747
],
"mask": "Validation"
},
{
"node_id": 206,
"label": 2,
"text": "Title: NEURAL NETS AS SYSTEMS MODELS AND CONTROLLERS suitability of \"neural nets\" as models for dynamical\nAbstract: This paper briefly surveys some recent results relevant ",
"neighbors": [
536,
1028,
1042,
1488,
1490,
1891
],
"mask": "Train"
},
{
"node_id": 207,
"label": 2,
"text": "Title: LEARNING BY ERROR-DRIVEN DECOMPOSITION \nAbstract: In this paper we describe a new selforganizing decomposition technique for learning high-dimensional mappings. Problem decomposition is performed in an error-driven manner, such that the resulting subtasks (patches) are equally well approximated. Our method combines an unsupervised learning scheme (Feature Maps [Koh84]) with a nonlinear approximator (Backpropagation [RHW86]). The resulting learning system is more stable and effective in changing environments than plain backpropagation and much more powerful than extended feature maps as proposed by [RS88, RMS89]. Extensions of our method give rise to active exploration strategies for autonomous agents facing unknown environments. The appropriateness of our general purpose method will be demonstrated with an ex ample from mathematical function approximation.",
"neighbors": [
688,
747,
1536,
1676
],
"mask": "Test"
},
{
"node_id": 208,
"label": 6,
"text": "Title: Feature Subset Selection as Search with Probabilistic Estimates \nAbstract: Irrelevant features and weakly relevant features may reduce the comprehensibility and accuracy of concepts induced by supervised learning algorithms. We formulate the search for a feature subset as an abstract search problem with probabilistic estimates. Searching a space using an evaluation function that is a random variable requires trading off accuracy of estimates for increased state exploration. We show how recent feature subset selection algorithms in the machine learning literature fit into this search problem as simple hill climbing approaches, and conduct a small experiment using a best-first search technique. ",
"neighbors": [
88,
430,
635,
1270,
1569,
2343
],
"mask": "Train"
},
{
"node_id": 209,
"label": 1,
"text": "Title: 17 Massively Parallel Genetic Programming \nAbstract: As the field of Genetic Programming (GP) matures and its breadth of application increases, the need for parallel implementations becomes absolutely necessary. The transputer-based system presented in the chapter by Koza and Andre ([11]) is one of the rare such parallel implementations. Until today, no implementation has been proposed for parallel GP using a SIMD architecture, except for a data-parallel approach ([20]), although others have exploited workstation farms and pipelined supercomputers. One reason is certainly the apparent difficulty of dealing with the parallel evaluation of different S-expressions when only a single instruction can be executed at the same time on every processor. The aim of this chapter is to present such an implementation of parallel GP on a SIMD system, where each processor can efficiently evaluate a different S-expression. We have implemented this approach on a MasPar MP-2 computer, and will present some timing results. To the extent that SIMD machines, like the MasPar are available to offer cost-effective cycles for scientific experimentation, this is a useful approach. The idea of simulating a MIMD machine using a SIMD architecture is not new ([8, 15]). One of the original ideas for the Connection Machine ([8]) was that it could simulate other parallel architectures. Indeed, in the extreme, each processor on a SIMD architecture can simulate a universal Turing machine (TM). With different turing machine specifications stored in each local memory, each processor would simply have its own tape, tape head, state table and state pointer, and the simulation would be performed by repeating the basic TM operations simultaneously. Of course, such a simulation would be very inefficient, and difficult to program, but would have the advantage of being really MIMD, where no SIMD processor would be in idle state, until its simulated machine halts. Now let us consider an alternative idea, that each SIMD processor would simulate an individual stored program computer using a simple instruction set. For each step of the simulation, the SIMD system would sequentially execute each possible instruction on the subset of processors whose next instruction match it. For a typical assembly language, even with a reduced instruction set, most processors would be idle most of the time. However, if the set of instructions implemented on the virtual processor is very small, this approach can be fruitful. In the case of Genetic Programming, the \"instruction set\" is composed of the specified set of functions designed for the task. We will show below that with a precompilation step, simply adding a push, a conditional, and unconditional branching and a stop instruction, we can get a very effective MIMD simulation running. This chapter reports such an implementation of GP on a MasPar MP-2 parallel computer. The configuration of our system is composed of 4K processor elements ",
"neighbors": [
415,
2334,
2704
],
"mask": "Test"
},
{
"node_id": 210,
"label": 4,
"text": "Title: A Unified Analysis of Value-Function-Based Reinforcement-Learning Algorithms \nAbstract: Reinforcement learning is the problem of generating optimal behavior in a sequential decision-making environment given the opportunity of interacting with it. Many algorithms for solving reinforcement-learning problems work by computing improved estimates of the optimal value function. We extend prior analyses of reinforcement-learning algorithms and present a powerful new theorem that can provide a unified analysis of value-function-based reinforcement-learning algorithms. The usefulness of the theorem lies in how it allows the asynchronous convergence of a complex reinforcement-learning algorithm to be proven by verifying that a simpler synchronous algorithm converges. We illustrate the application of the theorem by analyzing the convergence of Q-learning, model-based reinforcement learning, Q-learning with multi-state updates, Q-learning for Markov games, and risk-sensitive reinforcement learning. ",
"neighbors": [
63,
148,
295,
552,
644,
738,
1459
],
"mask": "Train"
},
{
"node_id": 211,
"label": 3,
"text": "Title: Using Path Diagrams as a Structural Equation Modelling Tool \nAbstract: Reinforcement learning is the problem of generating optimal behavior in a sequential decision-making environment given the opportunity of interacting with it. Many algorithms for solving reinforcement-learning problems work by computing improved estimates of the optimal value function. We extend prior analyses of reinforcement-learning algorithms and present a powerful new theorem that can provide a unified analysis of value-function-based reinforcement-learning algorithms. The usefulness of the theorem lies in how it allows the asynchronous convergence of a complex reinforcement-learning algorithm to be proven by verifying that a simpler synchronous algorithm converges. We illustrate the application of the theorem by analyzing the convergence of Q-learning, model-based reinforcement learning, Q-learning with multi-state updates, Q-learning for Markov games, and risk-sensitive reinforcement learning. ",
"neighbors": [
325,
645,
1527,
2076
],
"mask": "Train"
},
{
"node_id": 212,
"label": 2,
"text": "Title: Analyzing Hyperspectral Data with Independent Component Analysis \nAbstract: Hyperspectral image sensors provide images with a large number of contiguous spectral channels per pixel and enable information about different materials within a pixel to be obtained. The problem of spectrally unmixing materials may be viewed as a specific case of the blind source separation problem where data consists of mixed signals (in this case minerals) and the goal is to determine the contribution of each mineral to the mix without prior knowledge of the minerals in the mix. The technique of Independent Component Analysis (ICA) assumes that the spectral components are close to statistically independent and provides an unsupervised method for blind source separation. We introduce contextual ICA in the context of hyperspectral data analysis and apply the method to mineral data from synthetically mixed minerals and real image signatures. ",
"neighbors": [
169,
570,
576
],
"mask": "Train"
},
{
"node_id": 213,
"label": 4,
"text": "Title: Incremental methods for computing bounds in partially observable Markov decision processes \nAbstract: Partially observable Markov decision processes (POMDPs) allow one to model complex dynamic decision or control problems that include both action outcome uncertainty and imperfect observabil-ity. The control problem is formulated as a dynamic optimization problem with a value function combining costs or rewards from multiple steps. In this paper we propose, analyse and test various incremental methods for computing bounds on the value function for control problems with infinite discounted horizon criteria. The methods described and tested include novel incremental versions of grid-based linear interpolation method and simple lower bound method with Sondik's updates. Both of these can work with arbitrary points of the belief space and can be enhanced by various heuristic point selection strategies. Also introduced is a new method for computing an initial upper bound the fast informed bound method. This method is able to improve significantly on the standard and commonly used upper bound computed by the MDP-based method. The quality of resulting bounds are tested on a maze navigation problem with 20 states, 6 actions and 8 observations. ",
"neighbors": [
490,
492,
734
],
"mask": "Validation"
},
{
"node_id": 214,
"label": 2,
"text": "Title: Bayesian Non-linear Modelling for the Prediction Competition \nAbstract: The 1993 energy prediction competition involved the prediction of a series of building energy loads from a series of environmental input variables. Non-linear regression using `neural networks' is a popular technique for such modeling tasks. Since it is not obvious how large a time-window of inputs is appropriate, or what preprocessing of inputs is best, this can be viewed as a regression problem in which there are many possible input variables, some of which may actually be irrelevant to the prediction of the output variable. Because a finite data set will show random correlations between the irrelevant inputs and the output, any conventional neural network (even with reg-ularisation or `weight decay') will not set the coefficients for these junk inputs to zero. Thus the irrelevant variables will hurt the model's performance. The Automatic Relevance Determination (ARD) model puts a prior over the regression parameters which embodies the concept of relevance. This is done in a simple and `soft' way by introducing multiple regularisation constants, one associated with each input. Using Bayesian methods, the regularisation constants for junk inputs are automatically inferred to be large, preventing those inputs from causing significant overfitting. ",
"neighbors": [
78,
157,
469,
587
],
"mask": "Train"
},
{
"node_id": 215,
"label": 0,
"text": "Title: Integration of Case-Based Reasoning and Neural Networks Approaches for Classification \nAbstract: Several different approaches have been used to describe concepts for supervised learning tasks. In this paper we describe two approaches which are: prototype-based incremental neural networks and case-based reasoning approaches. We show then how we can improve a prototype-based neural network model by storing some specific instances in a CBR memory system. This leads us to propose a co-processing hybrid model for classification. 1 ",
"neighbors": [
66,
478,
696,
2061,
2380
],
"mask": "Train"
},
{
"node_id": 216,
"label": 5,
"text": "Title: d d Code Scheduling for Multiple Instruction Stream Architectures \nAbstract: Extensive research has been done on extracting parallelism from single instruction stream processors. This paper presents our investigation into ways to modify MIMD architectures to allow them to extract the instruction level parallelism achieved by current superscalar and VLIW machines. A new architecture is proposed which utilizes the advantages of a multiple instruction stream design while addressing some of the limitations that have prevented MIMD architectures from performing ILP operation. A new code scheduling mechanism is described to support this new architecture by partitioning instructions across multiple processing elements in order to exploit this level of parallelism. ",
"neighbors": [
196,
735
],
"mask": "Test"
},
{
"node_id": 217,
"label": 2,
"text": "Title: A Scalable Performance Prediction Method for Parallel Neural Network Simulations \nAbstract: A performance prediction method is presented for indicating the performance range of MIMD parallel processor systems for neural network simulations. The total execution time of a parallel application is modeled as the sum of its calculation and communication times. The method is scalable because based on the times measured on one processor and one communication link, the performance, speedup, and efficiency can be predicted for a larger processor system. It is validated quantitatively by applying it to two popular neural networks, backpropagation and the Kohonen self-organizing feature map, decomposed on a GCel-512, a 512 transputer system. Agreement of the model with the measurements is within 9%.",
"neighbors": [
685,
747,
2355
],
"mask": "Train"
},
{
"node_id": 218,
"label": 3,
"text": "Title: Learning Classification Trees \nAbstract: Algorithms for learning classification trees have had successes in artificial intelligence and statistics over many years. This paper outlines how a tree learning algorithm can be derived using Bayesian statistics. This introduces Bayesian techniques for splitting, smoothing, and tree averaging. The splitting rule is similar to Quinlan's information gain, while smoothing and averaging replace pruning. Comparative experiments with reimplementations of a minimum encoding approach, Quinlan's C4 (Quinlan et al., 1987) and Breiman et al.'s CART (Breiman et al., 1984) show the full Bayesian algorithm can produce Publication: This paper is a final draft submitted for publication to the Statistics and Computing journal; a version with some minor changes appeared in Volume 2, 1992, pages 63-73. more accurate predictions than versions of these other approaches, though pay a computational price.",
"neighbors": [
378,
478,
1290
],
"mask": "Train"
},
{
"node_id": 219,
"label": 1,
"text": "Title: Issues in Evolutionary Robotics \nAbstract: A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993. ",
"neighbors": [
38,
163,
402,
563,
712,
757,
846,
1325,
1404,
1689,
1738,
2563
],
"mask": "Train"
},
{
"node_id": 220,
"label": 4,
"text": "Title: Learning sorting and decision trees with POMDPs \nAbstract: pomdps are general models of sequential decisions in which both actions and observations can be probabilistic. Many problems of interest can be formulated as pomdps, yet the use of pomdps has been limited by the lack of effective algorithms. Recently this has started to change and a number of problems such as robot navigation and planning are beginning to be formulated and solved as pomdps. The advantage of the pomdp approach is its clean semantics and its ability to produce principled solutions that integrate physical and information gathering actions. In this paper we pursue this approach in the context of two learning tasks: learning to sort a vector of numbers and learning decision trees from data. Both problems are formulated as pomdps and solved by a general pomdp algorithm. The main lessons and results are that 1) the use of suitable heuristics and representations allows for the solution of sorting and classification pomdps of non-trivial sizes, 2) the quality of the resulting solutions are competitive with the best algorithms, and 3) problematic aspects in decision tree learning such as test and mis-classification costs, noisy tests, and missing values are naturally accommodated.",
"neighbors": [
45,
295,
490,
552
],
"mask": "Train"
},
{
"node_id": 221,
"label": 0,
"text": "Title: Abstract \nAbstract: Given an arbitrary learning situation, it is difficult to determine the most appropriate learning strategy. The goal of this research is to provide a general representation and processing framework for introspective reasoning for strategy selection. The learning framework for an introspective system is to perform some reasoning task. As it does, the system also records a trace of the reasoning itself, along with the results of such reasoning. If a reasoning failure occurs, the system retrieves and applies an introspective explanation of the failure in order to understand the error and repair the knowledge base. A knowledge structure called a Meta-Explanation Pattern is used to both explain how conclusions are derived and why such conclusions fail. If reasoning is represented in an explicit, declarative manner, the system can examine its own reasoning, analyze its reasoning failures, identify what it needs to learn, and select appropriate learning strategies in order to learn the required knowledge without overreli ance on the programmer.",
"neighbors": [
629
],
"mask": "Test"
},
{
"node_id": 222,
"label": 0,
"text": "Title: Abstract \nAbstract: We describe an ongoing project to develop an adaptive training system (ATS) that dynamically models a students learning processes and can provide specialized tutoring adapted to a students knowledge state and learning style. The student modeling component of the ATS, ML-Modeler, uses machine learning (ML) techniques to emulate the students novice-to-expert transition. ML-Modeler infers which learning methods the student has used to reach the current knowledge state by comparing the students solution trace to an expert solution and generating plausible hypotheses about what misconceptions and errors the student has made. A case-based approach is used to generate hypotheses through incorrectly applying analogy, overgeneralization, and overspecialization. The student and expert models use a network-based representation that includes abstract concepts and relationships as well as strategies for problem solving. Fuzzy methods are used to represent the uncertainty in the student model. This paper describes the design of the ATS and ML-Modeler, and gives a detailed example of how the system would model and tutor the student in a typical session. The domain we use for this example is high-school level chemistry. ",
"neighbors": [
581,
643
],
"mask": "Validation"
},
{
"node_id": 223,
"label": 5,
"text": "Title: Automated model selection \nAbstract: Many algorithms have parameters that should be set by the user. For most machine learning algorithms parameter setting is a non-trivial task that influence knowledge model returned by the algorithm. Parameter values are usually set approximately according to some characteristics of the target problem, obtained in different ways. The usual way is to use background knowledge about the target problem (if any) and perform some testing experiments. The paper presents an approach to automated model selection based on local optimization that uses an empirical evaluation of the constructed concept description to guide the search. The approach was tested by using the inductive concept learning system Magnus ",
"neighbors": [
430,
686
],
"mask": "Train"
},
{
"node_id": 224,
"label": 5,
"text": "Title: on Inductive Logic Programming (ILP-95) Inducing Logic Programs without Explicit Negative Examples \nAbstract: This paper presents a method for learning logic programs without explicit negative examples by exploiting an assumption of output completeness. A mode declaration is supplied for the target predicate and each training input is assumed to be accompanied by all of its legal outputs. Any other outputs generated by an incomplete program implicitly represent negative examples; however, large numbers of ground negative examples never need to be generated. This method has been incorporated into two ILP systems, Chillin and IFoil, both of which use intensional background knowledge. Tests on two natural language acquisition tasks, case-role mapping and past-tense learning, illustrate the advantages of the approach. ",
"neighbors": [
597,
1601,
1819
],
"mask": "Train"
},
{
"node_id": 225,
"label": 0,
"text": "Title: on Inductive Logic Programming (ILP-95) Inducing Logic Programs without Explicit Negative Examples \nAbstract: Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same exibility as a conventional linear search, but at greatly reduced computational cost.",
"neighbors": [
88,
686,
2428
],
"mask": "Validation"
},
{
"node_id": 226,
"label": 2,
"text": "Title: A Neuro-Fuzzy Approach to Agglomerative Clustering \nAbstract: In this paper, we introduce a new agglomerative clustering algorithm in which each pattern cluster is represented by a collection of fuzzy hyperboxes. Initially, a number of such hyperboxes are calculated to represent the pattern samples. Then, the algorithm applies multi-resolution techniques to progressively \"combine\" these hyperboxes in a hierarchial manner. Such an agglomerative scheme has been found to yield encouraging results in real-world clustering problems. ",
"neighbors": [
617
],
"mask": "Validation"
},
{
"node_id": 227,
"label": 6,
"text": "Title: Induction of Oblique Decision Trees \nAbstract: This paper introduces a randomized technique for partitioning examples using oblique hyperplanes. Standard decision tree techniques, such as ID3 and its descendants, partition a set of points with axis-parallel hyper-planes. Our method, by contrast, attempts to find hyperplanes at any orientation. The purpose of this more general technique is to find smaller but equally accurate decision trees than those created by other methods. We have tested our algorithm on both real and simulated data, and found that in some cases it produces surprisingly small trees without losing predictive accuracy. Small trees allow us, in turn, to obtain simple qualitative descriptions of each problem domain.",
"neighbors": [
142,
296,
378,
438,
478,
638,
823,
1318,
1547,
2319
],
"mask": "Train"
},
{
"node_id": 228,
"label": 1,
"text": "Title: Cost-Sensitive Classification: Empirical Evaluation of a Hybrid Genetic Decision Tree Induction Algorithm \nAbstract: This paper introduces ICET, a new algorithm for costsensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for costsensitive classification EG2, CS-ID3, and IDX and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five real-world medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICETs search in bias space and discovers a way to improve the search.",
"neighbors": [
52,
119,
259,
323,
900,
1692,
2465,
2617
],
"mask": "Validation"
},
{
"node_id": 229,
"label": 2,
"text": "Title: Understanding Musical Sound with Forward Models and Physical Models \nAbstract: This paper introduces ICET, a new algorithm for costsensitive classification. ICET uses a genetic algorithm to evolve a population of biases for a decision tree induction algorithm. The fitness function of the genetic algorithm is the average cost of classification when using the decision tree, including both the costs of tests (features, measurements) and the costs of classification errors. ICET is compared here with three other algorithms for costsensitive classification EG2, CS-ID3, and IDX and also with C4.5, which classifies without regard to cost. The five algorithms are evaluated empirically on five real-world medical datasets. Three sets of experiments are performed. The first set examines the baseline performance of the five algorithms on the five datasets and establishes that ICET performs significantly better than its competitors. The second set tests the robustness of ICET under a variety of conditions and shows that ICET maintains its advantage. The third set looks at ICETs search in bias space and discovers a way to improve the search.",
"neighbors": [
477,
747
],
"mask": "Train"
},
{
"node_id": 230,
"label": 2,
"text": "Title: Mathematical Programming in Neural Networks \nAbstract: This paper highlights the role of mathematical programming, particularly linear programming, in training neural networks. A neural network description is given in terms of separating planes in the input space that suggests the use of linear programming for determining these planes. A more standard description in terms of a mean square error in the output space is also given, which leads to the use of unconstrained minimization techniques for training a neural network. The linear programming approach is demonstrated by a brief description of a system for breast cancer diagnosis that has been in use for the last four years at a major medical facility.",
"neighbors": [
142,
391,
406,
427,
520,
1283,
1284
],
"mask": "Validation"
},
{
"node_id": 231,
"label": 0,
"text": "Title: Understanding Creativity: A Case-Based Approach \nAbstract: Dissatisfaction with existing standard case-based reasoning (CBR) systems has prompted us to investigate how we can make these systems more creative and, more broadly, what would it mean for them to be more creative. This paper discusses three research goals: understanding creative processes better, investigating the role of cases and CBR in creative problem solving, and understanding the framework that supports this more interesting kind of case-based reasoning. In addition, it discusses methodological issues in the study of creativity and, in particular, the use of CBR as a research paradigm for exploring creativity.",
"neighbors": [
30,
285,
486
],
"mask": "Train"
},
{
"node_id": 232,
"label": 2,
"text": "Title: Stochastic Decomposition of DNA Sequences Using Hidden Markov Models \nAbstract: This work presents an application of a machine learning for characterizing an important property of natural DNA sequences compositional inhomogeneity. Compositional segments often correspond to meaningful biological units. Taking into account such inhomogeneity is a prerequisite of successful recognition of functional features in DNA sequences, especially, protein-coding genes. Here we present a technique for DNA segmentation using hidden Markov models. A DNA sequence is represented by a chain of homogeneous segments, each described by one of a few statistically discriminated hidden states, whose contents form a first-order Markov chain. The technique is used to describe and compare chromosomes I and IV of the completely sequenced Saccharomyces cerevisiae (yeast) genome. Our results indicate the existence of a few well separated states, which gives support to the isochore theory. We also explore the model's likelihood landscape and analyze the dynamics of the optimization process, thus addressing the problem of reliability of the obtained optima and efficiency of the algorithms. ",
"neighbors": [
14,
268,
613
],
"mask": "Train"
},
{
"node_id": 233,
"label": 2,
"text": "Title: A `SELF-REFERENTIAL' WEIGHT MATRIX \nAbstract: Weight modifications in traditional neural nets are computed by hard-wired algorithms. Without exception, all previous weight change algorithms have many specific limitations. Is it (in principle) possible to overcome limitations of hard-wired algorithms by allowing neural nets to run and improve their own weight change algorithms? This paper constructively demonstrates that the answer (in principle) is `yes'. I derive an initial gradient-based sequence learning algorithm for a `self-referential' recurrent network that can `speak' about its own weight matrix in terms of activations. It uses some of its input and output units for observing its own errors and for explicitly analyzing and modifying its own weight matrix, including those parts of the weight matrix responsible for analyzing and modifying the weight matrix. The result is the first `introspective' neural net with explicit potential control over all of its own adaptive parameters. A disadvantage of the algorithm is its high computational complexity per time step which is independent of the sequence length and equals O(n conn logn conn ), where n conn is the number of connections. Another disadvantage is the high number of local minima of the unusually complex error surface. The purpose of this paper, however, is not to come up with the most efficient `introspective' or `self-referential' weight change algorithm, but to show that such algorithms are possible at all. ",
"neighbors": [
595,
1990
],
"mask": "Train"
},
{
"node_id": 234,
"label": 2,
"text": "Title: Multiassociative Memory \nAbstract: This paper discusses the problem of how to implement many-to-many, or multi-associative, mappings within connectionist models. Traditional symbolic approaches wield explicit representation of all alternatives via stored links, or implicitly through enumerative algorithms. Classical pattern association models ignore the issue of generating multiple outputs for a single input pattern, and while recent research on recurrent networks is promising, the field has not clearly focused upon multi-associativity as a goal. In this paper, we define multiassociative memory MM, and several possible variants, and discuss its utility in general cognitive modeling. We extend sequential cascaded networks (Pollack 1987, 1990a) to fit the task, and perform several ini tial experiments which demonstrate the feasibility of the concept. This paper appears in The Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society. August 7-10, 1991. ",
"neighbors": [
15,
747
],
"mask": "Train"
},
{
"node_id": 235,
"label": 2,
"text": "Title: Development of triadic neural circuits for visual image stabilization under eye movements \nAbstract: Human visual systems maintain a stable internal representation of a scene even though the image on the retina is constantly changing because of eye movements. Such stabilization can theoretically be effected by dynamic shifts in the receptive field (RF) of neurons in the visual system. This paper examines how a neural circuit can learn to generate such shifts. The shifts are controlled by eye position signals and compensate for the movement in the retinal image caused by eye movements. The development of a neural shifter circuit (Olshausen, Anderson, & Van Essen, 1992) is modeled using triadic connections. These connections are gated by signals that indicate the direction of gaze (eye position signals). In simulations, a neural model is exposed to sequences of stimuli paired with appropriate eye position signals. The initially ",
"neighbors": [
747
],
"mask": "Validation"
},
{
"node_id": 236,
"label": 0,
"text": "Title: Machine Learning Methods for International Conflict Databases: A Case Study in Predicting Mediation Outcome \nAbstract: This paper tries to identify rules and factors that are predictive for the outcome of international conflict management attempts. We use C4.5, an advanced Machine Learning algorithm, for generating decision trees and prediction rules from cases in the CONFMAN database. The results show that simple patterns and rules are often not only more understandable, but also more reliable than complex rules. Simple decision trees are able to improve the chances of correctly predicting the outcome of a conflict management attempt. This suggests that mediation is more repetitive than conflicts per se, where such results have not been achieved so far. ",
"neighbors": [
156,
430,
1270,
1271,
1617
],
"mask": "Validation"
},
{
"node_id": 237,
"label": 1,
"text": "Title: A Sequential Niche Technique for Multimodal Function Optimization \nAbstract: c fl UWCC COMMA Technical Report No. 93001, February 1993 x No part of this article may be reproduced for commercial purposes. Abstract A technique is described which allows unimodal function optimization methods to be extended to efficiently locate all optima of multimodal problems. We describe an algorithm based on a traditional genetic algorithm (GA). This involves iterating the GA, but uses knowledge gained during one iteration to avoid re-searching, on subsequent iterations, regions of problem space where solutions have already been found. This is achieved by applying a fitness derating function to the raw fitness function, so that fitness values are depressed in the regions of the problem space where solutions have already been found. Consequently, the likelihood of discovering a new solution on each iteration is dramatically increased. The technique may be used with various styles of GA, or with other optimization methods, such as simulated annealing. The effectiveness of the algorithm is demonstrated on a number of multimodal test functions. The technique is at least as fast as fitness sharing methods. It provides a speedup of between 1 and 10p on a problem with p optima, depending on the value of p and the convergence time complexity. ",
"neighbors": [
163,
329,
1060
],
"mask": "Test"
},
{
"node_id": 238,
"label": 2,
"text": "Title: Learning from Examples, Agent Teams and the Concept of Reflection \nAbstract: In International Journal of Pattern Recognition and AI, 10(3):251-272, 1996 Also available as GMD report #766 ",
"neighbors": [
46,
193,
301,
489,
1815
],
"mask": "Validation"
},
{
"node_id": 239,
"label": 4,
"text": "Title: Robust Value Function Approximation by Working Backwards Computing an accurate value function is the key\nAbstract: In this paper, we examine the intuition that TD() is meant to operate by approximating asynchronous value iteration. We note that on the important class of discrete acyclic stochastic tasks, value iteration is inefficient compared with the DAG-SP algorithm, which essentially performs only one sweep instead of many by working backwards from the goal. The question we address in this paper is whether there is an analogous algorithm that can be used in large stochastic state spaces requiring function approximation. We present such an algorithm, analyze it, and give comparative results to TD on several domains. the state). Using VI to solve MDPs belonging to either of these special classes can be quite inefficient, since VI performs backups over the entire space, whereas the only backups useful for improving V fl are those on the \"frontier\" between already-correct and not-yet-correct V fl values. In fact, there are classical algorithms for both problem classes which compute V fl more efficiently by explicitly working backwards: for the deterministic class, Dijkstra's shortest-path algorithm; and for the acyclic class, Directed-Acyclic-Graph-Shortest-Paths (DAG-SP) [6]. 1 DAG-SP first topologically sorts the MDP, producing a linear ordering of the states in which every state x precedes all states reachable from x. Then, it runs through that list in reverse, performing one backup per state. Worst-case bounds for VI, Dijkstra, and DAG-SP in deterministic domains with X states and A actions/state are 1 Although [6] presents DAG-SP only for deterministic acyclic problems, it applies straightforwardly to the ",
"neighbors": [
82,
552,
565,
1378
],
"mask": "Validation"
},
{
"node_id": 240,
"label": 1,
"text": "Title: A Transformation System for Interactive Reformulation of Design Optimization Strategies \nAbstract: Automatic design optimization is highly sensitive to problem formulation. The choice of objective function, constraints and design parameters can dramatically impact the computational cost of optimization and the quality of the resulting design. The best formulation varies from one application to another. A design engineer will usually not know the best formulation in advance. In order to address this problem, we have developed a system that supports interactive formulation, testing and reformulation of design optimization strategies. Our system includes an executable, data-flow language for representing optimization strategies. The language allows an engineer to define multiple stages of optimization, each using different approximations of the objective and constraints or different abstractions of the design space. We have also developed a set of transformations that reformulate strategies represented in our language. The transformations can approximate objective and constraint functions, abstract or re-parameterize a search space, or divide an optimization into multiple stages. The system is applicable to design problems in which the artifact is governed by algebraic and ordinary differential equations. We have tested the system on problems of racing yacht and jet engine nozzle design. We report experimental results demonstrating that our reformulation techniques can significantly improve the performance of automatic design optimization. Our research demonstrates the viability of a reformulation methodology that combines symbolic program transformation with numerical experimentation. It is an important first step in a research programme aimed at automating the entire strategy formulation process.",
"neighbors": [
61,
2308,
2652
],
"mask": "Train"
},
{
"node_id": 241,
"label": 2,
"text": "Title: Segmentation and Classification of Combined Optical and Radar Imagery \nAbstract: The classification performance of a neural network for combined six-band Landsat-TM and one-band ERS-1/SAR PRI imagery from the same scene is carried out. Different combinations of the data | either raw, segmented or filtered |, using the available ground truth polygons, training and test sets are created. The training sets are used for learning while the test sets are used for verification of the neural network. The different combinations are evaluated here. ",
"neighbors": [
763
],
"mask": "Train"
},
{
"node_id": 242,
"label": 6,
"text": "Title: Learning Markov chains with variable memory length from noisy output \nAbstract: The problem of modeling complicated data sequences, such as DNA or speech, often arises in practice. Most of the algorithms select a hypothesis from within a model class assuming that the observed sequence is the direct output of the underlying generation process. In this paper we consider the case when the output passes through a memoryless noisy channel before observation. In particular, we show that in the class of Markov chains with variable memory length, learning is affected by factors, which, despite being super-polynomial, are still small in some practical cases. Markov models with variable memory length, or probabilistic finite suffix automata, were introduced in learning theory by Ron, Singer and Tishby who also described a polynomial time learning algorithm [11, 12]. We present a modification of the algorithm which uses a noise-corrupted sample and has knowledge of the noise structure. The same algorithm is still viable if the noise is not known exactly but a good estimation is available. Finally, some experimental results are presented for removing noise from corrupted English text, and to measure how the performance of the learning algorithm is affected by the size of the noisy sample and the noise rate. ",
"neighbors": [
14,
574,
1006
],
"mask": "Train"
},
{
"node_id": 243,
"label": 1,
"text": "Title: Distribution Category: Users Guide to the PGAPack Parallel Genetic Algorithm Library \nAbstract: The problem of modeling complicated data sequences, such as DNA or speech, often arises in practice. Most of the algorithms select a hypothesis from within a model class assuming that the observed sequence is the direct output of the underlying generation process. In this paper we consider the case when the output passes through a memoryless noisy channel before observation. In particular, we show that in the class of Markov chains with variable memory length, learning is affected by factors, which, despite being super-polynomial, are still small in some practical cases. Markov models with variable memory length, or probabilistic finite suffix automata, were introduced in learning theory by Ron, Singer and Tishby who also described a polynomial time learning algorithm [11, 12]. We present a modification of the algorithm which uses a noise-corrupted sample and has knowledge of the noise structure. The same algorithm is still viable if the noise is not known exactly but a good estimation is available. Finally, some experimental results are presented for removing noise from corrupted English text, and to measure how the performance of the learning algorithm is affected by the size of the noisy sample and the noise rate. ",
"neighbors": [
53,
357,
728
],
"mask": "Validation"
},
{
"node_id": 244,
"label": 2,
"text": "Title: Building Intelligent Agents for Web-Based Tasks: A Theory-Refinement Approach \nAbstract: We present and evaluate an infrastructure with which to rapidly and easily build intelligent software agents for Web-based tasks. Our design is centered around two basic functions: ScoreThis-Link and ScoreThisPage. If given highly accurate such functions, standard heuristic search would lead to efficient retrieval of useful information. Our approach allows users to tailor our system's behavior by providing approximate advice about the above functions. This advice is mapped into neural network implementations of the two functions. Subsequent reinforcements from the Web (e.g., dead links) and any ratings of retrieved pages that the user wishes to provide are, respectively, used to refine the link- and page-scoring functions. Hence, our agent architecture provides an appealing middle ground between nonadaptive \"agent\" programming languages and systems that solely learn user preferences from the user's ratings of pages. We present a case study where we provide some simple advice and specialize our general-purpose system into a \"home-page finder\". An empirical study demonstrates that our approach leads to a more effective home-page finder than that of a leading commercial Web search engine. ",
"neighbors": [
136,
565,
750
],
"mask": "Validation"
},
{
"node_id": 245,
"label": 0,
"text": "Title: ICML-96 Workshop \"Learning in context-sensitive domains\" Bari, Italy. Dynamically Adjusting Concepts to Accommodate Changing Contexts \nAbstract: In concept learning, objects in a domain are grouped together based on similarity as determined by the attributes used to describe them. Existing concept learners require that this set of attributes be known in advance and presented in entirety before learning begins. Additionally, most systems do not possess mechanisms for altering the attribute set after concepts have been learned. Consequently, a veridical attribute set relevant to the task for which the concepts are to be used must be supplied at the onset of learning, and in turn, the usefulness of the concepts is limited to the task for which the attributes were originally selected. In order to efficiently accommodate changing contexts, a concept learner must be able to alter the set of descriptors without discarding its prior knowledge of the domain. We introduce the notion of attribute-incrementation, the dynamic modification of the attribute set used to describe instances in a problem domain. We have implemented the capability in a concept learning system that has been evaluated along several dimensions using an existing concept formation system for com parison.",
"neighbors": [
172,
1636
],
"mask": "Train"
},
{
"node_id": 246,
"label": 3,
"text": "Title: Bayesian Mixture Modeling by Monte Carlo Simulation \nAbstract: It is shown that Bayesian inference from data modeled by a mixture distribution can feasibly be performed via Monte Carlo simulation. This method exhibits the true Bayesian predictive distribution, implicitly integrating over the entire underlying parameter space. An infinite number of mixture components can be accommodated without difficulty, using a prior distribution for mixing proportions that selects a reasonable subset of components to explain any finite training set. The need to decide on a \"correct\" number of components is thereby avoided. The feasibility of the method is shown empirically for a simple classification task. ",
"neighbors": [
560
],
"mask": "Test"
},
{
"node_id": 247,
"label": 4,
"text": "Title: Machine Learning, Efficient Reinforcement Learning through Symbiotic Evolution \nAbstract: This article presents a new reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which evolves a population of neurons through genetic algorithms to form a neural network capable of performing a task. Symbiotic evolution promotes both cooperation and specialization, which results in a fast, efficient genetic search and discourages convergence to suboptimal solutions. In the inverted pendulum problem, SANE formed effective networks 9 to 16 times faster than the Adaptive Heuristic Critic and 2 times faster than Q-learning and the GENITOR neuro-evolution approach without loss of generalization. Such efficient learning, combined with few domain assumptions, make SANE a promising approach to a broad range of reinforcement learning problems, including many real-world applications. ",
"neighbors": [
500,
563,
982,
1117,
1261,
1973,
2257,
2446
],
"mask": "Train"
},
{
"node_id": 248,
"label": 3,
"text": "Title: Probabilistic evaluation of sequential plans from causal models with hidden variables \nAbstract: The paper concerns the probabilistic evaluation of plans in the presence of unmeasured variables, each plan consisting of several concurrent or sequential actions. We establish a graphical criterion for recognizing when the effects of a given plan can be predicted from passive observations on measured variables only. When the criterion is satisfied, a closed-form expression is provided for the probability that the plan will achieve a specified goal.",
"neighbors": [
105,
398,
419,
1326,
2088
],
"mask": "Train"
},
{
"node_id": 249,
"label": 5,
"text": "Title: Control Flow Prediction For Dynamic ILP Processors \nAbstract: We introduce a technique to enhance the ability of dynamic ILP processors to exploit (speculatively executed) parallelism. Existing branch prediction mechanisms used to establish a dynamic window from which ILP can be extracted are limited in their abilities to: (i) create a large, accurate dynamic window, (ii) initiate a large number of instructions into this window in every cycle, and (iii) traverse multiple branches of the control flow graph per prediction. We introduce control flow prediction which uses information in the control flow graph of a program to overcome these limitations. We discuss how information present in the control flow graph can be represented using multiblocks, and conveyed to the hardware using Control Flow Tables and Control Flow Prediction Buffers. We evaluate the potential of control flow prediction on an abstract machine and on a dynamic ILP processing model. Our results indicate that control flow prediction is a powerful and effective assist to the hardware in making more informed run time decisions about program control flow. ",
"neighbors": [
86,
652,
2649
],
"mask": "Train"
},
{
"node_id": 250,
"label": 3,
"text": "Title: Mean Field Theory for Sigmoid Belief Networks \nAbstract: We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition|the classification of handwritten digits.",
"neighbors": [
31,
33,
76,
107,
108,
170,
304,
427,
498,
499,
577,
584,
639,
694,
708,
736
],
"mask": "Train"
},
{
"node_id": 251,
"label": 6,
"text": "Title: A Statistical Approach to Solving the EBL Utility Problem \nAbstract: Many \"learning from experience\" systems use information extracted from problem solving experiences to modify a performance element PE, forming a new element PE 0 that can solve these and similar problems more efficiently. However, as transformations that improve performance on one set of problems can degrade performance on other sets, the new PE 0 is not always better than the original PE; this depends on the distribution of problems. We therefore seek the performance element whose expected performance, over this distribution, is optimal. Unfortunately, the actual distribution, which is needed to determine which element is optimal, is usually not known. Moreover, the task of finding the optimal element, even knowing the distribution, is intractable for most interesting spaces of elements. This paper presents a method, palo, that side-steps these problems by using a set of samples to estimate the unknown distribution, and by using a set of transformations to hill-climb to a local optimum. This process is based on a mathematically rigorous form of utility analysis: in particular, it uses statistical techniques to determine whether the result of a proposed transformation will be better than the original system. We also present an efficient way of implementing this learning system in the context of a general class of performance elements, and include empirical evidence that this approach can work effectively. fl Much of this work was performed at the University of Toronto, where it was supported by the Institute for Robotics and Intelligent Systems and by an operating grant from the National Science and Engineering Research Council of Canada. We also gratefully acknowledge receiving many helpful comments from William Cohen, Dave Mitchell, Dale Schuurmans and the anonymous referees. ",
"neighbors": [
6,
88,
482,
865,
932,
1505,
1877,
2560
],
"mask": "Train"
},
{
"node_id": 252,
"label": 4,
"text": "Title: A Modular Q-Learning Architecture for Manipulator Task Decomposition `Data storage in the cerebellar model ar\nAbstract: Compositional Q-Learning (CQ-L) (Singh 1992) is a modular approach to learning to perform composite tasks made up of several elemental tasks by reinforcement learning. Skills acquired while performing elemental tasks are also applied to solve composite tasks. Individual skills compete for the right to act and only winning skills are included in the decomposition of the composite task. We extend the original CQ-L concept in two ways: (1) a more general reward function, and (2) the agent can have more than one actuator. We use the CQ-L architecture to acquire skills for performing composite tasks with a simulated two-linked manipulator having large state and action spaces. The manipulator is a non-linear dynamical system and we require its end-effector to be at specific positions in the workspace. Fast function approximation in each of the Q-modules is achieved through the use of an array of Cerebellar Model Articulation Controller (CMAC) (Albus Our research interests involve the scaling up of machine learning methods, especially reinforcement learning, for autonomous robot control. We are interested in function approximators suitable for reinforcement learning in problems with large state spaces, such as the Cerebellar Model Articulation Controller (CMAC) (Albus 1975) which permit fast, online learning and good local generalization. In addition, we are interested in task decomposition by reinforcement learning and the use of hierarchical and modular function approximator architectures. We are examining the effectiveness of a modified Hierarchical Mixtures of Experts (HME) (Jordan & Jacobs 1993) approach for reinforcement learning since the original HME was developed mainly for supervised learning and batch learning tasks. The incorporation of domain knowledge into reinforcement learning agents is an important way of extending their capabilities. Default policies can be specified, and domain knowledge can also be used to restrict the size of the state-action space, leading to faster learning. We are investigating the use of Q-Learning (Watkins 1989) in planning tasks, using a classifier system (Holland 1986) to encode the necessary condition-action rules. Jordan, M. & Jacobs, R. (1993), Hierarchical mixtures of experts and the EM algorithm, Technical Report 9301, MIT Computational Cognitive Science. ",
"neighbors": [
60,
74,
294,
562,
688
],
"mask": "Train"
},
{
"node_id": 253,
"label": 2,
"text": "Title: Hyperplane Dynamics as a Means to Understanding Back-Propagation Learning and Network Plasticity \nAbstract: The processing performed by a feed-forward neural network is often interpreted through use of decision hyperplanes at each layer. The adaptation process, however, is normally explained using the picture of gradient descent of an error landscape. In this paper the dynamics of the decision hyperplanes is used as the model of the adaptation process. A electro-mechanical analogy is drawn where the dynamics of hyperplanes is determined by interaction forces between hyperplanes and the particles which represent the patterns. Relaxation of the system is determined by increasing hyperplane inertia (mass). This picture is used to clarify the dynamics of learning, and to go some way to explaining learning deadlocks and escaping from certain local minima. Furthermore network plasticity is introduced as a dynamic property of the system, and its reduction as a necessary consequence of information storage. Hyper-plane inertia is used to explain and avoid destructive relearning in trained networks. ",
"neighbors": [
15,
1815,
2670
],
"mask": "Train"
},
{
"node_id": 254,
"label": 2,
"text": "Title: Scaling-up RAAMs \nAbstract: Modifications to Recursive Auto-Associative Memory are presented, which allow it to store deeper and more complex data structures than previously reported. These modifications include adding extra layers to the compressor and reconstructor networks, employing integer rather than real-valued representations, pre-conditioning the weights and pre-setting the representations to be compatible with them. The resulting system is tested on a data set of syntactic trees extracted from the Penn Treebank.",
"neighbors": [
15,
1176
],
"mask": "Train"
},
{
"node_id": 255,
"label": 6,
"text": "Title: An Efficient Boosting Algorithm for Combining Preferences \nAbstract: The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for a restricted case. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms. ",
"neighbors": [
70,
421,
569,
767
],
"mask": "Validation"
},
{
"node_id": 256,
"label": 0,
"text": "Title: Using Decision Trees to Improve Case-Based Learning \nAbstract: This paper shows that decision trees can be used to improve the performance of case-based learning (CBL) systems. We introduce a performance task for machine learning systems called semi-flexible prediction that lies between the classification task performed by decision tree algorithms and the flexible prediction task performed by conceptual clustering systems. In semi-flexible prediction, learning should improve prediction of a specific set of features known a priori rather than a single known feature (as in classification) or an arbitrary set of features (as in conceptual clustering). We describe one such task from natural language processing and present experiments that compare solutions to the problem using decision trees, CBL, and a hybrid approach that combines the two. In the hybrid approach, decision trees are used to specify the features to be included in k-nearest neighbor case retrieval. Results from the experiments show that the hybrid approach outperforms both the decision tree and case-based approaches as well as two case-based systems that incorporate expert knowledge into their case retrieval algorithms. Results clearly indicate that decision trees can be used to improve the performance of CBL systems and do so without reliance on potentially expensive expert knowledge.",
"neighbors": [
430,
634,
635,
928,
983,
2225,
2369
],
"mask": "Test"
},
{
"node_id": 257,
"label": 2,
"text": "Title: Factor Analysis Using Delta-Rule Wake-Sleep Learning \nAbstract: Technical Report No. 9607, Department of Statistics, University of Toronto We describe a linear network that models correlations between real-valued visible variables using one or more real-valued hidden variables a factor analysis model. This model can be seen as a linear version of the Helmholtz machine, and its parameters can be learned using the wake-sleep method, in which learning of the primary generative model is assisted by a recognition model, whose role is to fill in the values of hidden variables based on the values of visible variables. The generative and recognition models are jointly learned in wake and sleep phases, using just the delta rule. This learning procedure is comparable in simplicity to Oja's version of Hebbian learning, which produces a somewhat different representation of correlations in terms of principal components. We argue that the simplicity of wake-sleep learning makes factor analysis a plau sible alternative to Hebbian learning as a model of activity-dependent cortical plasticity.",
"neighbors": [
36,
480,
667
],
"mask": "Test"
},
{
"node_id": 258,
"label": 2,
"text": "Title: Using Dirichlet Mixture Priors to Derive Hidden Markov Models for Protein Families \nAbstract: A Bayesian method for estimating the amino acid distributions in the states of a hidden Markov model (HMM) for a protein family or the columns of a multiple alignment of that family is introduced. This method uses Dirichlet mixture densities as priors over amino acid distributions. These mixture densities are determined from examination of previously constructed HMMs or multiple alignments. It is shown that this Bayesian method can improve the quality of HMMs produced from small training sets. Specific experiments on the EF-hand motif are reported, for which these priors are shown to produce HMMs with higher likelihood on unseen data, and fewer false positives and false negatives in a database search task. ",
"neighbors": [
0,
7,
8,
14,
268,
400,
435,
544,
751,
1031,
1111
],
"mask": "Train"
},
{
"node_id": 259,
"label": 0,
"text": "Title: How to Get a Free Lunch: A Simple Cost Model for Machine Learning Applications \nAbstract: This paper proposes a simple cost model for machine learning applications based on the notion of net present value. The model extends and unifies the models used in (Pazzani et al., 1994) and (Masand & Piatetsky-Shapiro, 1996). It attempts to answer the question \"Should a given machine learning system now in the prototype stage be fielded?\" The model's inputs are the system's confusion matrix, the cash flow matrix for the application, the cost per decision, the one-time cost of deploying the system, and the rate of return on investment. Like Provost and Fawcett's (1997) ROC convex hull method, the present model can be used for decision-making even when its input variables are not known exactly. Despite its simplicity, it has a number of non-trivial consequences. For example, under it the \"no free lunch\" theorems of learning theory no longer apply. ",
"neighbors": [
228,
320,
582
],
"mask": "Train"
},
{
"node_id": 260,
"label": 3,
"text": "Title: ASPECTS OF GRAPHICAL MODELS CONNECTED WITH CAUSALITY \nAbstract: This paper demonstrates the use of graphs as a mathematical tool for expressing independenices, and as a formal language for communicating and processing causal information in statistical analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the Markovian account of causation and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data, the effects of external interventions and to specify conditions under which randomized experiments are not necessary. Finally, the paper offers a graphical interpretation for Rubin's model of causal effects, and demonstrates its equivalence to the manipulative account of causation. We exemplify the tradeoffs between the two approaches by deriving nonparametric bounds on treatment effects under conditions of imperfect compliance. ",
"neighbors": [
419,
619,
1324,
1527,
2069,
2166,
2524,
2559
],
"mask": "Train"
},
{
"node_id": 261,
"label": 2,
"text": "Title: Soft Vector Quantization and the EM Algorithm Running Title: Soft Vector Quantization and EM Section:\nAbstract: This paper demonstrates the use of graphs as a mathematical tool for expressing independenices, and as a formal language for communicating and processing causal information in statistical analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the Markovian account of causation and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data, the effects of external interventions and to specify conditions under which randomized experiments are not necessary. Finally, the paper offers a graphical interpretation for Rubin's model of causal effects, and demonstrates its equivalence to the manipulative account of causation. We exemplify the tradeoffs between the two approaches by deriving nonparametric bounds on treatment effects under conditions of imperfect compliance. ",
"neighbors": [
345,
626
],
"mask": "Validation"
},
{
"node_id": 262,
"label": 0,
"text": "Title: EVOLVING REPRESENTATIONS OF DESIGN CASES AND THEIR USE IN CREATIVE DESIGN \nAbstract: In case-based design, the adaptation of a design case to new design requirements plays an important role. If it is sufficient to adapt a predefined set of design parameters, the task is easily automated. If, however, more far-reaching, creative changes are required, current systems provide only limited success. This paper describes an approach to creative design adaptation based on the notion of creativity as 'goal oriented shift of focus of a search process'. An evolving representation is used to restructure the search space so that designs similar to the example case lie in the focus of the search. This focus is than used as a starting point to create new designs. ",
"neighbors": [
188,
266,
793,
1980
],
"mask": "Train"
},
{
"node_id": 263,
"label": 2,
"text": "Title: Non-linear Models for Time Series Using Mixtures of Experts \nAbstract: We consider a novel non-linear model for time series analysis. The study of this model emphasizes both theoretical aspects as well as practical applicability. The architecture of the model is demonstrated to be sufficiently rich, in the sense of approximating unknown functional forms, yet it retains some of the simple and intuitive characteristics of linear models. A comparison to some more established non-linear models will be emphasized, and theoretical issues are backed by prediction results for benchmark time series, as well as computer generated data sets. Efficient estimation algorithms are seen to be applicable, made possible by the mixture based structure of the model. Large sample properties of the estimators are discussed as well, in both well specified as well as misspecified settings. We also demonstrate how inference pertaining to the data structure may be made from the parameterization of the model, resulting in a better, more intuitive, understanding of the structure and performance of the model.",
"neighbors": [
74,
668,
2421
],
"mask": "Validation"
},
{
"node_id": 264,
"label": 6,
"text": "Title: On Learning More Concepts \nAbstract: The coverage of a learning algorithm is the number of concepts that can be learned by that algorithm from samples of a given size. This paper asks whether good learning algorithms can be designed by maximizing their coverage. The paper extends a previous upper bound on the coverage of any Boolean concept learning algorithm and describes two algorithms|Multi-Balls and Large-Ball|whose coverage approaches this upper bound. Experimental measurement of the coverage of the ID3 and FRINGE algorithms shows that their coverage is far below this bound. Further analysis of Large-Ball shows that although it learns many concepts, these do not seem to be very interesting concepts. Hence, coverage maximization alone does not appear to yield practically-useful learning algorithms. The paper concludes with a definition of coverage within a bias, which suggests a way that coverage maximization could be applied to strengthen weak preference biases.",
"neighbors": [
635,
638
],
"mask": "Train"
},
{
"node_id": 265,
"label": 1,
"text": "Title: Analyzing GAs Using Markov Models with Semantically Ordered and Lumped States \nAbstract: At the previous FOGA workshop, we presented some initial results on using Markov models to analyze the transient behavior of genetic algorithms (GAs) being used as function optimizers (GAFOs). In that paper, the states of the Markov model were ordered via a simple and mathematically convenient lexicographic ordering used initially by Nix and Vose. In this paper, we explore alternative orderings of states based on interesting semantic properties such as average fitness, degree of homogeneity, average attractive force, etc. We also explore lumping techniques for reducing the size of the state space. Analysis of these reordered and lumped Markov models provides new insights into the transient behavior of GAs in general and GAFOs in particular.",
"neighbors": [
100,
758
],
"mask": "Train"
},
{
"node_id": 266,
"label": 0,
"text": "Title: EMERGENT BEHAVIOUR IN CO-EVOLUTIONARY DESIGN \nAbstract: An important aspect of creative design is the concept of emergence. Though emergence is important, its mechanism is either not well understood or it is limited to the domain of shapes. This deficiency can be compensated by considering definitions of emergent behaviour from the Artificial Life (ALife) research community. With these new insights, it is proposed that a computational technique, called evolving representations of design genes, can be extended to emergent behaviour. We demonstrate emergent be-haviour in a co-evolutionary model of design. This co-evolutionary approach to design allows a solution space (structure space) to evolve in response to a problem space (be-haviour space). Since the behaviour space is now an active participant, behaviour may emerge with new structures at the end of the design process. This paper hypothesizes that emergent behaviour can be identified using the same technique. The floor plan example of (Gero & Schnier 1995) is extended to demonstrate how behaviour can emerge in a co-evolutionary design process. ",
"neighbors": [
163,
262
],
"mask": "Validation"
},
{
"node_id": 267,
"label": 6,
"text": "Title: On Learning from Noisy and Incomplete Examples \nAbstract: We investigate learnability in the PAC model when the data used for learning, attributes and labels, is either corrupted or incomplete. In order to prove our main results, we define a new complexity measure on statistical query (SQ) learning algorithms. The view of an SQ algorithm is the maximum over all queries in the algorithm, of the number of input bits on which the query depends. We show that a restricted view SQ algorithm for a class is a general sufficient condition for learnability in both the models of attribute noise and covered (or missing) attributes. We further show that since the algorithms in question are statistical, they can also simultaneously tolerate classification noise. Classes for which these results hold, and can therefore be learned with simultaneous attribute noise and classification noise, include k-DNF, k-term-DNF by DNF representations, conjunctions with few relevant variables, and over the uniform distribution, decision lists. These noise models are the first PAC models in which all training data, attributes and labels, may be corrupted by a random process. Previous researchers had shown that the class of k-DNF is learnable with attribute noise if the attribute noise rate is known exactly. We show that all of our attribute noise learnabil-ity results, either with or without classification noise, also hold when the exact noise rate is not Appeared in Proceedings of the Eighth Annual ACM Conference on Computational Learning Theory. ACM Press, July 1995. known, provided that the learner instead has a polynomially good approximation of the noise rate. In addition, we show that the results also hold when there is not one single noise rate, but a distinct noise rate for each attribute. Our results for learning with random covering do not require the learner to be told even an approximation of the covering rate and in addition hold in the setting with distinct covering rates for each attribute. Finally, we give lower bounds on the number of examples required for learning in the presence of attribute noise or covering.",
"neighbors": [
20,
334,
459,
732
],
"mask": "Train"
},
{
"node_id": 268,
"label": 2,
"text": "Title: Finding Genes in DNA with a Hidden Markov Model \nAbstract: This study describes a new Hidden Markov Model (HMM) system for segmenting uncharacterized genomic DNA sequences into exons, introns, and intergenic regions. Separate HMM modules were designed and trained for specific regions of DNA: exons, introns, intergenic regions, and splice sites. The models were then tied together to form a biologically feasible topology. The integrated HMM was trained further on a set of eukaryotic DNA sequences, and tested by using it to segment a separate set of sequences. The resulting HMM system, which is called VEIL (Viterbi Exon-Intron Locator), obtains an overall accuracy on test data of 92% of total bases correctly labelled, with a correlation coefficient of 0.73. Using the more stringent test of exact exon prediction, VEIL correctly located both ends of 53% of the coding exons, and 49% of the exons it predicts are exactly correct. These results compare favorably to the best previous results for gene structure prediction, and demonstrate the benefits of using HMMs for this problem.",
"neighbors": [
14,
232,
258,
613,
616,
2046,
2571
],
"mask": "Train"
},
{
"node_id": 269,
"label": 2,
"text": "Title: Discovering Structure in Multiple Learning Tasks: The TC Algorithm \nAbstract: Recently, there has been an increased interest in lifelong machine learning methods, that transfer knowledge across multiple learning tasks. Such methods have repeatedly been found to outperform conventional, single-task learning algorithms when the learning tasks are appropriately related. To increase robustness of such approaches, methods are desirable that can reason about the relatedness of individual learning tasks, in order to avoid the danger arising from tasks that are unrelated and thus potentially misleading. This paper describes the task-clustering (TC) algorithm. TC clusters learning tasks into classes of mutually related tasks. When facing a new learning task, TC first determines the most related task cluster, then exploits information selectively from this task cluster only. An empirical study carried out in a mobile robot domain shows that TC outperforms its non-selective counterpart in situations where only a small number of tasks is relevant.",
"neighbors": [
24
],
"mask": "Validation"
},
{
"node_id": 270,
"label": 3,
"text": "Title: Generalized Update: Belief Change in Dynamic Settings \nAbstract: Belief revision and belief update have been proposed as two types of belief change serving different purposes. Belief revision is intended to capture changes of an agent's belief state reflecting new information about a static world. Belief update is intended to capture changes of belief in response to a changing world. We argue that both belief revision and belief update are too restrictive; routine belief change involves elements of both. We present a model for generalized update that allows updates in response to external changes to inform the agent about its prior beliefs. This model of update combines aspects of revision and update, providing a more realistic characterization of belief change. We show that, under certain assumptions, the original update postulates are satisfied. We also demonstrate that plain revision and plain update are special cases of our model, in a way that formally verifies the intuition that revision is suitable for static belief change.",
"neighbors": [
276,
339,
342,
467,
495,
573
],
"mask": "Train"
},
{
"node_id": 271,
"label": 2,
"text": "Title: Bayesian Regression Filters and the Issue of Priors \nAbstract: We propose a Bayesian framework for regression problems, which covers areas which are usually dealt with by function approximation. An online learning algorithm is derived which solves regression problems with a Kalman filter. Its solution always improves with increasing model complexity, without the risk of over-fitting. In the infinite dimension limit it approaches the true Bayesian posterior. The issues of prior selection and over-fitting are also discussed, showing that some of the commonly held beliefs are misleading. The practical implementation is summarised. Simulations using 13 popular publicly available data sets are used to demonstrate the method and highlight important issues concerning the choice of priors.",
"neighbors": [
718,
2056
],
"mask": "Train"
},
{
"node_id": 272,
"label": 2,
"text": "Title: A Performance Analysis of CNS-1 on Sparse Connectionist Networks \nAbstract: This report deals with the efficient mapping of sparse neural networks on CNS-1. We develop parallel vector code for an idealized sparse network and determine its performance under three memory systems. We use the code to evaluate the memory systems (one of which will be implemented in the prototype), and to pinpoint bottlenecks in the current CNS-1 design. ",
"neighbors": [
516,
914,
1551
],
"mask": "Train"
},
{
"node_id": 273,
"label": 1,
"text": "Title: Two is better than one: A diploid genotype for neural networks \nAbstract: In nature the genotype of many organisms exhibits diploidy, i.e., it includes two copies of every gene. In this paper we describe the results of simulations comparing the behavior of haploid and diploid populations of ecological neural networks living in both fixed and changing environments. We show that diploid genotypes create more variability in fitness in the population than haploid genotypes and buffer better environmental change; as a consequence, if one wants to obtain good results for both average and peak fitness in a single population one should choose a diploid population with an appropriate mutation rate. Some results of our simulations parallel biological findings.",
"neighbors": [
38,
372
],
"mask": "Test"
},
{
"node_id": 274,
"label": 4,
"text": "Title: Some Experiments with a Hybrid Model for Learning Sequential Decision Making \nAbstract: In nature the genotype of many organisms exhibits diploidy, i.e., it includes two copies of every gene. In this paper we describe the results of simulations comparing the behavior of haploid and diploid populations of ecological neural networks living in both fixed and changing environments. We show that diploid genotypes create more variability in fitness in the population than haploid genotypes and buffer better environmental change; as a consequence, if one wants to obtain good results for both average and peak fitness in a single population one should choose a diploid population with an appropriate mutation rate. Some results of our simulations parallel biological findings.",
"neighbors": [
204,
478,
566
],
"mask": "Test"
},
{
"node_id": 275,
"label": 3,
"text": "Title: Belief Revision: A Critique \nAbstract: We examine carefully the rationale underlying the approaches to belief change taken in the literature, and highlight what we view as methodological problems. We argue that to study belief change carefully, we must be quite explicit about the \"ontology\" or scenario underlying the belief change process. This is something that has been missing in previous work, with its focus on postulates. Our analysis shows that we must pay particular attention to two issues that have often been taken for granted: The first is how we model the agent's epistemic state. (Do we use a set of beliefs, or a richer structure, such as an ordering on worlds? And if we use a set of beliefs, in what language are these beliefs are expressed?) We show that even postulates that have been called \"beyond controversy\" are unreasonable when the agent's beliefs include beliefs about her own epistemic state as well as the external world. The second is the status of observations. (Are observations known to be true, or just believed? In the latter case, how firm is the belief?) Issues regarding the status of observations arise particularly when we consider iterated belief revision, and we must confront the possibility of revising by ' and then by :'. fl Some of this work was done while both authors were at the IBM Almaden Research Center. The first author was also at Stanford while much of the work was done. IBM and Stanford's support are gratefully acknowledged. This work was also supported in part by NSF under grants IRI-95-03109 and IRI-96-25901, by the Air Force Office of Scientific Research under grant F49620-96-1-0323, and by an IBM Graduate Fellowship to the first author. A preliminary version of this paper appeared in L. C. Aiello, J. Doyle, and S. C. Shapiro (Eds.) Principles of knowledge representation and reasoning : proc. Fifth International Conference (KR '96), pp. 421-431, 1996. ",
"neighbors": [
464
],
"mask": "Test"
},
{
"node_id": 276,
"label": 3,
"text": "Title: A Qualitative Markov Assumption and Its Implications for Belief Change \nAbstract: The study of belief change has been an active area in philosophy and AI. In recent years, two special cases of belief change, belief revision and belief update, have been studied in detail. Roughly speaking, revision treats a surprising observation as a sign that previous beliefs were wrong, while update treats a surprising observation as an indication that the world has changed. In general, we would expect that an agent making an observation may both want to revise some earlier beliefs and assume that some change has occurred in the world. We define a novel approach to belief change that allows us to do this, by applying ideas from probability theory in a qualitative settings. The key idea is to use a qualitative Markov assumption, which says that state transitions are independent. We show that a recent approach to modeling qualitative uncertainty using plausibility measures allows us to make such a qualitative Markov assumption in a relatively straightforward way, and show how the Markov assumption can be used to provide an attractive belief-change model.",
"neighbors": [
270,
342,
467,
1993,
2115,
2546
],
"mask": "Train"
},
{
"node_id": 277,
"label": 4,
"text": "Title: Applying Online Search Techniques to Continuous-State Reinforcement Learning key to the success of the local\nAbstract: In this paper, we describe methods for efficiently computing better solutions to control problems in continuous state spaces. We provide algorithms that exploit online search to boost the power of very approximate value functions discovered by traditional reinforcement learning techniques. We examine local searches, where the agent performs a finite-depth lookahead search, and global searches, where the agent performs a search for a trajectory all the way from the current state to a goal state. The key to the success of the global methods lies in using aggressive state-space search techniques such as uniform-cost search and A fl , tamed into a tractable form by exploiting neighborhood relations and trajectory constraints that arise from continuous-space dynamic control. ",
"neighbors": [
294,
483,
523,
552,
567
],
"mask": "Test"
},
{
"node_id": 278,
"label": 3,
"text": "Title: Generalized Queries on Probabilistic Context-Free Grammars on Pattern Analysis and Machine Intelligence \nAbstract: In this paper, we describe methods for efficiently computing better solutions to control problems in continuous state spaces. We provide algorithms that exploit online search to boost the power of very approximate value functions discovered by traditional reinforcement learning techniques. We examine local searches, where the agent performs a finite-depth lookahead search, and global searches, where the agent performs a search for a trajectory all the way from the current state to a goal state. The key to the success of the global methods lies in using aggressive state-space search techniques such as uniform-cost search and A fl , tamed into a tractable form by exploiting neighborhood relations and trajectory constraints that arise from continuous-space dynamic control. ",
"neighbors": [
324,
326,
1898
],
"mask": "Train"
},
{
"node_id": 279,
"label": 3,
"text": "Title: Qualitative Probabilities for Default Reasoning, Belief Revision, and Causal Modeling \nAbstract: This paper presents recent developments toward a formalism that combines useful properties of both logic and probabilities. Like logic, the formalism admits qualitative sentences and provides symbolic machinery for deriving deductively closed beliefs and, like probability, it permits us to express if-then rules with different levels of firmness and to retract beliefs in response to changing observations. Rules are interpreted as order-of-magnitude approximations of conditional probabilities which impose constraints over the rankings of worlds. Inferences are supported by a unique priority ordering on rules which is syntactically derived from the knowledge base. This ordering accounts for rule interactions, respects specificity considerations and facilitates the construction of coherent states of beliefs. Practical algorithms are developed and analyzed for testing consistency, computing rule ordering, and answering queries. Imprecise observations are incorporated using qualitative versions of Jef-frey's Rule and Bayesian updating, with the result that coherent belief revision is embodied naturally and tractably. Finally, causal rules are interpreted as imposing Markovian conditions that further constrain world rankings to reflect the modularity of causal organizations. These constraints are shown to facilitate reasoning about causal projections, explanations, actions and change. ",
"neighbors": [
464
],
"mask": "Train"
},
{
"node_id": 280,
"label": 3,
"text": "Title: USING SMOOTHING SPLINE ANOVA TO EXAMINE THE RELATION OF RISK FACTORS TO THE INCIDENCE AND\nAbstract: This paper presents recent developments toward a formalism that combines useful properties of both logic and probabilities. Like logic, the formalism admits qualitative sentences and provides symbolic machinery for deriving deductively closed beliefs and, like probability, it permits us to express if-then rules with different levels of firmness and to retract beliefs in response to changing observations. Rules are interpreted as order-of-magnitude approximations of conditional probabilities which impose constraints over the rankings of worlds. Inferences are supported by a unique priority ordering on rules which is syntactically derived from the knowledge base. This ordering accounts for rule interactions, respects specificity considerations and facilitates the construction of coherent states of beliefs. Practical algorithms are developed and analyzed for testing consistency, computing rule ordering, and answering queries. Imprecise observations are incorporated using qualitative versions of Jef-frey's Rule and Bayesian updating, with the result that coherent belief revision is embodied naturally and tractably. Finally, causal rules are interpreted as imposing Markovian conditions that further constrain world rankings to reflect the modularity of causal organizations. These constraints are shown to facilitate reasoning about causal projections, explanations, actions and change. ",
"neighbors": [
10,
192,
193,
519,
2549
],
"mask": "Test"
},
{
"node_id": 281,
"label": 4,
"text": "Title: Clay: Integrating Motor Schemas and Reinforcement Learning \nAbstract: Clay is an evolutionary architecture for autonomous robots that integrates motor schema-based control and reinforcement learning. Robots utilizing Clay benefit from the real-time performance of motor schemas in continuous and dynamic environments while taking advantage of adaptive reinforcement learning. Clay coordinates assemblages (groups of motor schemas) using embedded reinforcement learning modules. The coordination modules activate specific assemblages based on the presently perceived situation. Learning occurs as the robot selects assemblages and samples a reinforcement signal over time. Experiments in a robot soccer simulation illustrate the performance and utility of the system.",
"neighbors": [
460,
858
],
"mask": "Test"
},
{
"node_id": 282,
"label": 2,
"text": "Title: Cortical Synchronization and Perceptual Framing \nAbstract: Clay is an evolutionary architecture for autonomous robots that integrates motor schema-based control and reinforcement learning. Robots utilizing Clay benefit from the real-time performance of motor schemas in continuous and dynamic environments while taking advantage of adaptive reinforcement learning. Clay coordinates assemblages (groups of motor schemas) using embedded reinforcement learning modules. The coordination modules activate specific assemblages based on the presently perceived situation. Learning occurs as the robot selects assemblages and samples a reinforcement signal over time. Experiments in a robot soccer simulation illustrate the performance and utility of the system.",
"neighbors": [
589,
592
],
"mask": "Test"
},
{
"node_id": 283,
"label": 2,
"text": "Title: A Local Learning Algorithm for Dynamic Feedforward and Recurrent Networks \nAbstract: Most known learning algorithms for dynamic neural networks in non-stationary environments need global computations to perform credit assignment. These algorithms either are not local in time or not local in space. Those algorithms which are local in both time and space usually can not deal sensibly with `hidden units'. In contrast, as far as we can judge by now, learning rules in biological systems with many `hidden units' are local in both space and time. In this paper we propose a parallel on-line learning algorithm which performs local computations only, yet still is designed to deal with hidden units and with units whose past activations are `hidden in time'. The approach is inspired by Holland's idea of the bucket brigade for classifier systems, which is transformed to run on a neural network with fixed topology. The result is a feedforward or recurrent `neural' dissipative system which is consuming `weight-substance' and permanently trying to distribute this substance onto its connections in an appropriate way. Simple experiments demonstrating the feasability of the algorithm are reported.",
"neighbors": [
523,
565,
747,
2093
],
"mask": "Train"
},
{
"node_id": 284,
"label": 0,
"text": "Title: Role of Ontology in Creative Understanding \nAbstract: In Proceedings of the 18th Annual Cognitive Science Conference, San Diego, CA, July 1996 This paper can also be found at the Georgia Tech WWW site: http://www.cc.gatech.edu/cogsci/ Abstract Successful creative understanding requires that a reasoner be able to manipulate known concepts in order to understand novel ones. A major problem arises, however, when one considers exactly how these manipulations are to be bounded. If a bound is imposed which is too loose, the reasoner is likely to create bizarre understandings rather than useful creative ones. On the other hand, if the bound is too tight, the reasoner will not have the flexibility needed to deal with a wide range of creative understanding experiences. Our approach is to make use of a principled ontology as one source of reasonable bounding. This allows our creative understanding theory to have good explanatory power about the process while allowing the computer implementation of the theory (the ISAAC system) to be flexible without being bizarre in the task domain of reading science fiction short stories. ",
"neighbors": [
64,
486,
583
],
"mask": "Validation"
},
{
"node_id": 285,
"label": 0,
"text": "Title: Explaining Serendipitous Recognition in Design \nAbstract: Creative designers often see solutions to pending design problems in the everyday objects surrounding them. This can often lead to innovation and insight, sometimes revealing new functions and purposes for common design pieces in the process. We are interested in modeling serendipitous recognition of solutions to pending problems in the context of creative mechanical design. This paper characterizes this ability, analyzing observations we have made of it, and placing it in the context of other forms of recognition. We propose a computational model to capture and explore serendipitous recognition which is based on ideas from reconstructive dynamic memory and situation assessment in case-based reasoning. ",
"neighbors": [
30,
231,
486,
1148
],
"mask": "Validation"
},
{
"node_id": 286,
"label": 5,
"text": "Title: The Estimation of Probabilities in Attribute Selection Measures for Decision Tree Induction \nAbstract: In this paper we analyze two well-known measures for attribute selection in decision tree induction, informativity and gini index. In particular, we are interested in the influence of different methods for estimating probabilities on these two measures. The results of experiments show that different measures, which are obtained by different probability estimation methods, determine the preferential order of attributes in a given node. Therefore, they determine the structure of a constructed decision tree. This feature can be very beneficial, especially in real-world applications where several different trees are often required. ",
"neighbors": [
178,
378,
1963,
2195
],
"mask": "Train"
},
{
"node_id": 287,
"label": 6,
"text": "Title: Learning Switching Concepts \nAbstract: We consider learning in situations where the function used to classify examples may switch back and forth between a small number of different concepts during the course of learning. We examine several models for such situations: oblivious models in which switches are made independent of the selection of examples, and more adversarial models in which a single adversary controls both the concept switches and example selection. We show relationships between the more benign models and the p-concepts of Kearns and Schapire, and present polynomial-time algorithms for learning switches between two k-DNF formulas. For the most adversarial model, we present a model of success patterned after the popular competitive analysis used in studying on-line algorithms. We describe a randomized query algorithm for such adversarial switches between two monotone disjunctions that is \"1-competitive\" in that the total number of mistakes plus queries is with high probability bounded by the number of switches plus some fixed polynomial in n (the number of variables). We also use notions described here to provide sufficient conditions under which learning a p-concept class \"with a decision rule\" implies being able to learn the class \"with a model of probability.\" ",
"neighbors": [
549,
591,
640
],
"mask": "Train"
},
{
"node_id": 288,
"label": 0,
"text": "Title: A Formal Analysis of Case Base Retrieval \nAbstract: Case based systems typically retrieve cases from the case base by applying similarity measures. The measures are usually constructed in an ad hoc manner. This report presents a toolbox for the systematic construction of similarity measures. In addition to paving the way to a design methodology for similarity measures, this systematic approach facilitates the identification of opportunities for parallelisation in case base retrieval.",
"neighbors": [
66,
75,
1377,
2037,
2157,
2380
],
"mask": "Validation"
},
{
"node_id": 289,
"label": 0,
"text": "Title: A theory of questions and question asking \nAbstract: ",
"neighbors": [
64,
612,
1278,
1498,
1534,
1535,
1537,
2568
],
"mask": "Train"
},
{
"node_id": 290,
"label": 1,
"text": "Title: 20 Data Structures and Genetic Programming two techniques for reducing run time. \nAbstract: In real world applications, software engineers recognise the use of memory must be organised via data structures and that software using the data must be independant of the data structures' implementation details. They achieve this by using abstract data structures, such as records, files and buffers. We demonstrate that genetic programming can automatically implement simple abstract data structures, considering in detail the task of evolving a list. We show general and reasonably efficient implementations can be automatically generated from simple primitives. A model for maintaining evolved code is demonstrated using the list problem. Much published work on genetic programming (GP) evolves functions without side-effects to learn patterns in test data. In contrast human written programs often make extensive and explicit use of memory. Indeed memory in some form is required for a programming system to be Turing Complete, i.e. for it to be possible to write any (computable) program in that system. However inclusion of memory can make the interactions between parts of programs much more complex and so make it harder to produce programs. Despite this it has been shown GP can automatically create programs which explicitly use memory [Teller 1994]. In both normal and genetic programming considerable benefits have been found in adopting a structured approach. For example [Koza 1994] shows the introduction of evolvable code modules (automatically defined functions, ADFs) can greatly help GP to reach a solution. We suggest that a corresponding structured approach to use of data will similarly have significant advantage to GP. Earlier work has demonstrated that genetic programming can automatically generate simple abstract data structures, namely stacks and queues [Langdon 1995a]. That is, GP can evolve programs that organise memory (accessed via simple read and write primitives) into data structures which can be used by external software without it needing to know how they are implemented. This chapter shows it is possible to evolve a list data structure from basic primitives. [Aho, Hopcroft and Ullman 1987] suggest three different ways to implement a list but these experiments show GP can evolve its own implementation. This requires all the list components to agree on one implementation as they co-evolve together. Section 20.3 describes the GP architecture, including use of Pareto multiple component fitness scoring (20.3.4) and measures aimed at speeding the GP search (20.3.5). The evolved solutions are described in Section 20.4. Section 20.5 presents a candidate model for maintaining evolved software. This is followed by a discussion of what we have learned (20.6) and conclusions that can be drawn (20.7). ",
"neighbors": [
163,
1839,
1911,
2220
],
"mask": "Validation"
},
{
"node_id": 291,
"label": 2,
"text": "Title: NETWORKS WITH REAL WEIGHTS: ANALOG COMPUTATIONAL COMPLEXITY In contrast to classical computational models, the models\nAbstract: Report SYCON-92-05 ABSTRACT We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of \"neurons\". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though being more powerful than Turing Machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in the previous work [17].) Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower bound constraints on what they can compute. This relationship is perhaps surprising since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve polynomially NP-hard problems, as the equality \"p = np \" in our model implies the almost complete collapse of the standard polynomial hierarchy. ",
"neighbors": [
487
],
"mask": "Test"
},
{
"node_id": 292,
"label": 3,
"text": "Title: An Approach to Diagnosing Total Variation Convergence of MCMC Algorithms \nAbstract: We introduce a convergence diagnostic procedure for MCMC which operates by estimating total variation distances for the distribution of the algorithm after certain numbers of iterations. The method has advantages over many existing methods in terms of applicability, utility, computational expense and interpretability. It can be used to assess convergence of both marginal and joint posterior densities, and we show how it can be applied to the two most commonly used MCMC samplers; the Gibbs Sampler and the Metropolis Hastings algorithm. Illustrative examples highlight the utility and interpretability of the proposed diagnostic, but also highlight some of its limitations. ",
"neighbors": [
759,
904
],
"mask": "Test"
},
{
"node_id": 293,
"label": 2,
"text": "Title: Independent Component Analysis of Electroencephalographic Data \nAbstract: Because of the distance between the skull and brain and their different resistivities, electroencephalographic (EEG) data collected from any point on the human scalp includes activity generated within a large brain area. This spatial smearing of EEG data by volume conduction does not involve significant time delays, however, suggesting that the Independent Component Analysis (ICA) algorithm of Bell and Sejnowski [1] is suitable for performing blind source separation on EEG data. The ICA algorithm separates the problem of source identification from that of source localization. First results of applying the ICA algorithm to EEG and event-related potential (ERP) data collected during a sustained auditory detection task show: (1) ICA training is insensitive to different random seeds. (2) ICA may be used to segregate obvious artifactual EEG components (line and muscle noise, eye movements) from other sources. (3) ICA is capable of isolating overlapping EEG phenomena, including alpha and theta bursts and spatially-separable ERP components, to separate ICA channels. (4) Nonstationarities in EEG and behavioral state can be tracked using ICA via changes in the amount of residual correlation between ICA-filtered output channels. ",
"neighbors": [
387,
576
],
"mask": "Validation"
},
{
"node_id": 294,
"label": 4,
"text": "Title: References elements that can solve difficult learning control problems. on Simulation of Adaptive Behavior, pages\nAbstract: Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universitat Munchen, Institut fu Informatik. Servan-Schreiber, D., Cleermans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department. ",
"neighbors": [
21,
83,
85,
103,
128,
153,
173,
186,
191,
252,
277,
465,
466,
477,
483,
502,
554,
564,
565,
566,
588,
633,
636,
638,
665,
699,
807,
1353,
1438,
1672,
2430,
2476
],
"mask": "Train"
},
{
"node_id": 295,
"label": 4,
"text": "Title: A Neuro-Dynamic Programming Approach to Retailer Inventory Management 1 \nAbstract: Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63(2):81-97. Schmidhuber, J. (1990b). Towards compositional learning with dynamic neural networks. Technical Report FKI-129-90, Technische Universitat Munchen, Institut fu Informatik. Servan-Schreiber, D., Cleermans, A., and McClelland, J. (1988). Encoding sequential structure in simple recurrent networks. Technical Report CMU-CS-88-183, Carnegie Mellon University, Computer Science Department. ",
"neighbors": [
82,
197,
210,
220,
471,
565,
1012,
1316,
1782,
2078
],
"mask": "Train"
},
{
"node_id": 296,
"label": 6,
"text": "Title: Lookahead and Pathology in Decision Tree Induction \nAbstract: The standard approach to decision tree induction is a top-down, greedy algorithm that makes locally optimal, irrevocable decisions at each node of a tree. In this paper, we empirically study an alternative approach, in which the algorithms use one-level lookahead to decide what test to use at a node. We systematically compare, using a very large number of real and artificial data sets, the quality of decision trees induced by the greedy approach to that of trees induced using lookahead. The main observations from our experiments are: (i) the greedy approach consistently produced trees that were just as accurate as trees produced with the much more expensive lookahead step; and (ii) we observed many instances of pathology, i.e., lookahead produced trees that were both larger and less accurate than trees produced without it.",
"neighbors": [
227,
438,
638,
692,
1236
],
"mask": "Train"
},
{
"node_id": 297,
"label": 2,
"text": "Title: Automatic Feature Extraction in Machine Learning \nAbstract: This thesis presents a machine learning model capable of extracting discrete classes out of continuous valued input features. This is done using a neurally inspired novel competitive classifier (CC) which feeds the discrete classifications forward to a supervised machine learning model. The supervised learning model uses the discrete classifications and perhaps other information available to solve a problem. The supervised learner then generates feedback to guide the CC into potentially more useful classifications of the continuous valued input features. Two supervised learning models are combined with the CC creating ASOCS-AFE and ID3-AFE. Both models are simulated and the results are analyzed. Based on these results, several areas of future research are proposed. ",
"neighbors": [
441,
809,
1321
],
"mask": "Test"
},
{
"node_id": 298,
"label": 4,
"text": "Title: How to Dynamically Merge Markov Decision Processes \nAbstract: We are frequently called upon to perform multiple tasks that compete for our attention and resource. Often we know the optimal solution to each task in isolation; in this paper, we describe how this knowledge can be exploited to efficiently find good solutions for doing the tasks in parallel. We formulate this problem as that of dynamically merging multiple Markov decision processes (MDPs) into a composite MDP, and present a new theoretically-sound dynamic programming algorithm for finding an optimal policy for the composite MDP. We analyze various aspects of our algorithm and Every day, we are faced with the problem of doing multiple tasks in parallel, each of which competes for our attention and resource. If we are running a job shop, we must decide which machines to allocate to which jobs, and in what order, so that no jobs miss their deadlines. If we are a mail delivery robot, we must find the intended recipients of the mail while simultaneously avoiding fixed obstacles (such as walls) and mobile obstacles (such as people), and still manage to keep ourselves sufficiently charged up. Frequently we know how to perform each task in isolation; this paper considers how we can take the information we have about the individual tasks and combine it to efficiently find an optimal solution for doing the entire set of tasks in parallel. More importantly, we describe a theoretically-sound algorithm for doing this merging dynamically; new tasks (such as a new job arrival at a job shop) can be assimilated online into the solution being found for the ongoing set of simultaneous tasks. illustrate its use on a simple merging problem.",
"neighbors": [
410,
552
],
"mask": "Train"
},
{
"node_id": 299,
"label": 6,
"text": "Title: On the Approximability of Numerical Taxonomy (Fitting Distances by Tree Metrics) \nAbstract: We consider the problem of fitting an n fi n distance matrix D by a tree metric T . Let \" be the distance to the closest tree metric, that is, \" = min T fk T; D k 1 g. First we present an O(n 2 ) algorithm for finding an additive tree T such that k T; D k 1 3\", giving the first algorithm for this problem with a performance guarantee. Second we show that it is N P-hard to find a tree T such that k T; D k 1 < 9 ",
"neighbors": [
596,
746,
1827,
2083,
2110
],
"mask": "Train"
},
{
"node_id": 300,
"label": 0,
"text": "Title: Storing and Indexing Plan Derivations through Explanation-based Analysis of Retrieval Failures \nAbstract: Case-Based Planning (CBP) provides a way of scaling up domain-independent planning to solve large problems in complex domains. It replaces the detailed and lengthy search for a solution with the retrieval and adaptation of previous planning experiences. In general, CBP has been demonstrated to improve performance over generative (from-scratch) planning. However, the performance improvements it provides are dependent on adequate judgements as to problem similarity. In particular, although CBP may substantially reduce planning effort overall, it is subject to a mis-retrieval problem. The success of CBP depends on these retrieval errors being relatively rare. This paper describes the design and implementation of a replay framework for the case-based planner dersnlp+ebl. der-snlp+ebl extends current CBP methodology by incorporating explanation-based learning techniques that allow it to explain and learn from the retrieval failures it encounters. These techniques are used to refine judgements about case similarity in response to feedback when a wrong decision has been made. The same failure analysis is used in building the case library, through the addition of repairing cases. Large problems are split and stored as single goal subproblems. Multi-goal problems are stored only when these smaller cases fail to be merged into a full solution. An empirical evaluation of this approach demonstrates the advantage of learning from experienced retrieval failure.",
"neighbors": [
593,
1621
],
"mask": "Train"
},
{
"node_id": 301,
"label": 2,
"text": "Title: Data Exploration with Reflective Adaptive Models \nAbstract: Case-Based Planning (CBP) provides a way of scaling up domain-independent planning to solve large problems in complex domains. It replaces the detailed and lengthy search for a solution with the retrieval and adaptation of previous planning experiences. In general, CBP has been demonstrated to improve performance over generative (from-scratch) planning. However, the performance improvements it provides are dependent on adequate judgements as to problem similarity. In particular, although CBP may substantially reduce planning effort overall, it is subject to a mis-retrieval problem. The success of CBP depends on these retrieval errors being relatively rare. This paper describes the design and implementation of a replay framework for the case-based planner dersnlp+ebl. der-snlp+ebl extends current CBP methodology by incorporating explanation-based learning techniques that allow it to explain and learn from the retrieval failures it encounters. These techniques are used to refine judgements about case similarity in response to feedback when a wrong decision has been made. The same failure analysis is used in building the case library, through the addition of repairing cases. Large problems are split and stored as single goal subproblems. Multi-goal problems are stored only when these smaller cases fail to be merged into a full solution. An empirical evaluation of this approach demonstrates the advantage of learning from experienced retrieval failure.",
"neighbors": [
46,
238,
489
],
"mask": "Validation"
},
{
"node_id": 302,
"label": 5,
"text": "Title: Confidence Estimation for Speculation Control \nAbstract: Modern processors improve instruction level parallelism by speculation. The outcome of data and control decisions is predicted, and the operations are speculatively executed and only committed if the original predictions were correct. There are a number of other ways that processor resources could be used, such as threading or eager execution. As the use of speculation increases, we believe more processors will need some form of speculation control to balance the benefits of speculation against other possible activities. Confidence estimation is one technique that can be exploited by architects for speculation control. In this paper, we introduce performance metrics to compare confidence estimation mechanisms, and argue that these metrics are appropriate for speculation control. We compare a number of confidence estimation mechanisms, focusing on mechanisms that have a small implementation cost and gain benefit by exploiting characteristics of branch predictors, such as clustering of mispredicted branches. We compare the performance of the different confidence estimation methods using detailed pipeline simulations. Using these simulations, we show how to improve some confidence estimators, providing better insight for future investigations comparing and applying confidence estimators. ",
"neighbors": [
428,
432,
598
],
"mask": "Train"
},
{
"node_id": 303,
"label": 0,
"text": "Title: Relating Relational Learning Algorithms \nAbstract: Relational learning algorithms are of special interest to members of the machine learning community; they offer practical methods for extending the representations used in algorithms that solve supervised learning tasks. Five approaches are currently being explored to address issues involved with using relational representations. This paper surveys algorithms embodying these approaches, summarizes their empirical evaluations, highlights their commonalities, and suggests potential directions for future research. ",
"neighbors": [
426,
478,
1174,
2091
],
"mask": "Train"
},
{
"node_id": 304,
"label": 2,
"text": "Title: Boltzmann Machine learning using mean field theory and linear response correction \nAbstract: We present a new approximate learning algorithm for Boltzmann Machines, using a systematic expansion of the Gibbs free energy to second order in the weights. The linear response correction to the correlations is given by the Hessian of the Gibbs free energy. The computational complexity of the algorithm is cubic in the number of neurons. We compare the performance of the exact BM learning algorithm with first order (Weiss) mean field theory and second order (TAP) mean field theory. The learning task consists of a fully connected Ising spin glass model on 10 neurons. We conclude that 1) the method works well for paramagnetic problems 2) the TAP correction gives a significant improvement over the Weiss mean field theory, both for paramagnetic and spin glass problems and 3) that the inclusion of diagonal weights improves the Weiss approximation for paramagnetic problems, but not for spin glass problems.",
"neighbors": [
108,
250,
427,
1461,
1912
],
"mask": "Validation"
},
{
"node_id": 305,
"label": 4,
"text": "Title: Solving Combinatorial Optimization Tasks by Reinforcement Learning: A General Methodology Applied to Resource-Constrained Scheduling \nAbstract: This paper introduces a methodology for solving combinatorial optimization problems through the application of reinforcement learning methods. The approach can be applied in cases where several similar instances of a combinatorial optimization problem must be solved. The key idea is to analyze a set of \"training\" problem instances and learn a search control policy for solving new problem instances. The search control policy has the twin goals of finding high-quality solutions and finding them quickly. Results of applying this methodology to a NASA scheduling problem show that the learned search control policy is much more effective than the best known non-learning search procedure|a method based on simulated annealing.",
"neighbors": [
82,
410,
552,
565
],
"mask": "Train"
},
{
"node_id": 306,
"label": 4,
"text": "Title: Learning Curve Bounds for Markov Decision Processes with Undiscounted Rewards \nAbstract: Markov decision processes (MDPs) with undis-counted rewards represent an important class of problems in decision and control. The goal of learning in these MDPs is to find a policy that yields the maximum expected return per unit time. In large state spaces, computing these averages directly is not feasible; instead, the agent must estimate them by stochastic exploration of the state space. In this case, longer exploration times enable more accurate estimates and more informed decision-making. The learning curve for an MDP measures how the agent's performance depends on the allowed exploration time, T . In this paper we analyze these learning curves for a simple control problem with undiscounted rewards. In particular, methods from statistical mechanics are used to calculate lower bounds on the agent's performance in the thermodynamic limit T ! 1, N ! 1, ff = T =N (finite), where T is the number of time steps allotted per policy evaluation and N is the size of the state space. In this limit, we provide a lower bound on the return of policies that appear optimal based on imperfect statistics.",
"neighbors": [
57,
552,
554,
565,
967,
1376
],
"mask": "Train"
},
{
"node_id": 307,
"label": 5,
"text": "Title: A Comparison of Full and Partial Predicated Execution Support for ILP Processors \nAbstract: One can effectively utilize predicated execution to improve branch handling in instruction-level parallel processors. Although the potential benefits of predicated execution are high, the tradeoffs involved in the design of an instruction set to support predicated execution can be difficult. On one end of the design spectrum, architectural support for full predicated execution requires increasing the number of source operands for all instructions. Full predicate support provides for the most flexibility and the largest potential performance improvements. On the other end, partial predicated execution support, such as conditional moves, requires very little change to existing architectures. This paper presents a preliminary study to qualitatively and quantitatively address the benefit of full and partial predicated execution support. With our current compiler technology, we show that the compiler can use both partial and full predication to achieve speedup in large control-intensive programs. Some details of the code generation techniques are shown to provide insight into the benefit of going from partial to full predication. Preliminary experimental results are very encouraging: partial predication provides an average of 33% performance improvement for an 8-issue processor with no predicate support while full predication provides an additional 30% improvement. ",
"neighbors": [
598,
735
],
"mask": "Test"
},
{
"node_id": 308,
"label": 6,
"text": "Title: The Power of Self-Directed Learning \nAbstract: This paper studies self-directed learning, a variant of the on-line learning model in which the learner selects the presentation order for the instances. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, k-term DNF formulas, and orthogonal rectangles in f0; 1; ; n1g d . These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then prove that the model of self-directed learning is more powerful than all other commonly used on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis dimension. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes. fl Supported in part by a GE Foundation Junior Faculty Grant and NSF Grant CCR-9110108. Part of this research was conducted while the author was at the M.I.T. Laboratory for Computer Science and supported by NSF grant DCR-8607494 and a grant from the Siemens Corporation. Net address: sg@cs.wustl.edu. ",
"neighbors": [
9,
1456,
2028
],
"mask": "Train"
},
{
"node_id": 309,
"label": 1,
"text": "Title: The Power of Self-Directed Learning \nAbstract: A lower-bound result on the power of Abstract This paper presents a lower-bound result on the computational power of a genetic algorithm in the context of combinatorial optimization. We describe a new genetic algorithm, the merged genetic algorithm, and prove that for the class of monotonic functions, the algorithm finds the optimal solution, and does so with an exponential convergence rate. The analysis pertains to the ideal behavior of the algorithm where the main task reduces to showing convergence of probability distributions over the search space of combinatorial structures to the optimal one. We take exponential convergence to be indicative of efficient solvability for the sample-bounded algorithm, although a sampling theory is needed to better relate the limit behavior to actual behavior. The paper concludes with a discussion of some immediate problems that lie ahead. a genetic algorithm",
"neighbors": [
163
],
"mask": "Train"
},
{
"node_id": 310,
"label": 2,
"text": "Title: Forecasting electricity demand using nonlinear mixture of experts \nAbstract: In this paper we study a forecasting model based on mixture of experts for predicting the French electric daily consumption energy. We split the task into two parts. Using mixture of experts, a first model predicts the electricity demand from the exogenous variables (such as temperature and degree of cloud cover) and can be viewed as a nonlinear regression model of mixture of Gaussians. Using a single neural network, a second model predicts the evolution of the residual error of the first one, and can be viewed as an nonlinear autoregression model. We analyze the splitting of the input space generated by the mixture of experts model, and compare the performance to models presently used. ",
"neighbors": [
74,
668,
747,
2513
],
"mask": "Train"
},
{
"node_id": 311,
"label": 2,
"text": "Title: June 1994 T o app ear in Neural Computation A Coun terexample to T emp\nAbstract: Sutton's TD( ) metho d aims to provide a represen tation of the cost function in an absorbing Mark ov chain with transition costs. A simple example is given where the represen tation obtained dep ends on . For = 1 the represen tation is optimal with resp ect to a least squares error criterion, but as decreases towards 0 the represen tation becomes progressiv ely worse and, in some cases, very poor. The example suggests a need to understand better the circumstances under which TD(0) and Q-learning obtain satisfactory neural net work-based compact represen tations of the cost function. A variation of TD(0) is also prop osed, which performs b etter on the example. ",
"neighbors": [
406,
552
],
"mask": "Train"
},
{
"node_id": 312,
"label": 3,
"text": "Title: Chain graphs for learning \nAbstract: ",
"neighbors": [
427,
577,
645,
772
],
"mask": "Train"
},
{
"node_id": 313,
"label": 0,
"text": "Title: The Case for Graph-Structured Representations \nAbstract: Case-based reasoning involves reasoning from cases: specific pieces of experience, the reasoner's or another's, that can be used to solve problems. We use the term \"graph-structured\" for representations that (1) are capable of expressing the relations between any two objects in a case, (2) allow the set of relations used to vary from case to case, and (3) allow the set of possible relations to be expanded as necessary to describe new cases. Such representations can be implemented as, for example, semantic networks or lists of concrete propositions in some logic. We believe that graph-structured representations offer significant advantages, and thus we are investigating ways to implement such representations efficiently. We make a \"case-based argument\" using examples from two systems, chiron and caper, to show how a graph-structured representation supports two different kinds of case-based planning in two different domains. We discuss the costs associated with graph-structured representations and describe an approach to reducing those costs, imple mented in caper.",
"neighbors": [
534,
801,
1354,
1377,
1642
],
"mask": "Validation"
},
{
"node_id": 314,
"label": 5,
"text": "Title: Employing Linear Regression in Regression Tree Leaves \nAbstract: The advantage of using linear regression in the leaves of a regression tree is analysed in the paper. It is carried out how this modification affects the construction, pruning and interpretation of a regression tree. The modification is tested on artificial and real-life domains. The results show that the modification is beneficial as it leads to smaller classification errors of induced regression trees. Keywords: machine learning, TDIDT, regression, linear regression, Bayesian approach. ",
"neighbors": [
156,
509,
1073,
1244,
1596,
1684,
1726
],
"mask": "Train"
},
{
"node_id": 315,
"label": 2,
"text": "Title: Cortical Functionality Emergence: Self-Organization of Complex Structures: From Individual to Collective Dynamics, \nAbstract: General Theory & Quantitative Results Abstract: The human genotype represents at most ten billion binary informations, whereas the human brain contains more than a million times a billion synapses. So a differentiated brain structure is essentially due to self-organization. Such self-organization is relevant for areas ranging from medicine to the design of intelligent complex systems. Many brain structures emerge as collective phenomenon of a microscopic neurosynaptic dynamics: a stochastic dynamics mimics the neuronal action potentials, while the synaptic dynamics is modeled by a local coupling dynamics of type Hebb-rule, that is, a synaptic efficiency increases after coincident spiking of pre- and postsynaptic neuron. The microscopic dynamics is transformed to a collective dynamics reminiscent of hydrodynamics. The theory models empirical findings quantitatively: Topology preserving neuronal maps were assumed by Descartes in 1664; their self-organization was suggested by Weiss in 1928; their empirical observation was reported by Marshall in 1941; it is shown that they are neurosynaptically stable due to ubiquitous infinitesimal short range electrical or chemical leakage. In the visual cortex, neuronal stimulus orientation preference emerges; empirically measured orientation patterns are determined by the Poisson equation of electrostatics; this Poisson equation orientation pattern emergence is derived here. Complex cognitive abilities emerge when the basic local synaptic changes are regulated by valuation, emergent valuation, attention, attention focus or combination of subnetworks. Altogether a general theory is presented for the emergence of functionality from synaptic growth in neuro-biological systems. The theory provides a transformation to a collective dynamics and is used for quantitative modeling of empirical data. ",
"neighbors": [
747
],
"mask": "Train"
},
{
"node_id": 316,
"label": 5,
"text": "Title: Cortical Functionality Emergence: Self-Organization of Complex Structures: From Individual to Collective Dynamics, \nAbstract: A Methodology for Evaluating Theory Revision Systems: Results Abstract Theory revision systems are learning systems that have a goal of making small changes to an original theory to account for new data. A measure for the distance between two theories is proposed. This measure corresponds to the minimum number of edit operations at the literal level required to transform one theory into another. By computing the distance between an original theory and a revised theory, the claim that a theory revision system makes few revisions to a theory may be quantitatively evaluated. We present data using both accuracy and the distance metric on Audrey II, with Audrey II fl",
"neighbors": [
344
],
"mask": "Validation"
},
{
"node_id": 317,
"label": 6,
"text": "Title: A dataset decomposition approach to data mining and machine discovery \nAbstract: We present a novel data mining approach based on decomposition. In order to analyze a given dataset, the method decomposes it to a hierarchy of smaller and less complex datasets that can be analyzed independently. The method is experimentally evaluated on a real-world housing loans allocation dataset, showing that the decomposition can (1) discover meaningful intermediate concepts, (2) decompose a relatively complex dataset to datasets that are easy to analyze and comprehend, and (3) derive a classifier of high classification accuracy. We also show that human interaction has a positive effect on both the comprehensibility and classification accuracy. ",
"neighbors": [
417,
508,
2326
],
"mask": "Train"
},
{
"node_id": 318,
"label": 0,
"text": "Title: Generalizing from Case Studies: A Case Study \nAbstract: Most empirical evaluations of machine learning algorithms are case studies evaluations of multiple algorithms on multiple databases. Authors of case studies implicitly or explicitly hypothesize that the pattern of their results, which often suggests that one algorithm performs significantly better than others, is not limited to the small number of databases investigated, but instead holds for some general class of learning problems. However, these hypotheses are rarely supported with additional evidence, which leaves them suspect. This paper describes an empirical method for generalizing results from case studies and an example application. This method yields rules describing when some algorithms significantly outperform others on some dependent measures. Advantages for generalizing from case studies and limitations of this particular approach are also described.",
"neighbors": [
426,
445,
686,
991,
1173,
1644,
2310,
2333,
2427
],
"mask": "Train"
},
{
"node_id": 319,
"label": 6,
"text": "Title: Error Reduction through Learning Multiple Descriptions \nAbstract: Learning multiple descriptions for each class in the data has been shown to reduce generalization error but the amount of error reduction varies greatly from domain to domain. This paper presents a novel empirical analysis that helps to understand this variation. Our hypothesis is that the amount of error reduction is linked to the \"degree to which the descriptions for a class make errors in a correlated manner.\" We present a precise and novel definition for this notion and use twenty-nine data sets to show that the amount of observed error reduction is negatively correlated with the degree to which the descriptions make errors in a correlated manner. We empirically show that it is possible to learn descriptions that make less correlated errors in domains in which many ties in the search evaluation measure (e.g. information gain) are experienced during learning. The paper also presents results that help to understand when and why multiple descriptions are a help (irrelevant attributes) and when they are not as much help (large amounts of class noise). ",
"neighbors": [
29,
1273
],
"mask": "Train"
},
{
"node_id": 320,
"label": 2,
"text": "Title: Appears in Working Notes, Integrating Multiple Learned Models for Improving and Scaling Machine Learning Algorithms\nAbstract: This paper presents the Plannett system, which combines artificial neural networks to achieve expert- level accuracy on the difficult scientific task of recognizing volcanos in radar images of the surface of the planet Venus. Plannett uses ANNs that vary along two dimensions: the set of input features used to train and the number of hidden units. The ANNs are combined simply by averaging their output activations. When Plannett is used as the classification module of a three-stage image analysis system called JAR- tool, the end-to-end accuracy (sensitivity and specificity) is as good as that of a human planetary geologist on a four-image test suite. JARtool-Plannett also achieves the best algorithmic accuracy on these images to date. ",
"neighbors": [
259
],
"mask": "Train"
},
{
"node_id": 321,
"label": 4,
"text": "Title: Planning with Closed-Loop Macro Actions \nAbstract: Planning and learning at multiple levels of temporal abstraction is a key problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov decision processes and reinforcement learning. Conventional model-based reinforcement learning uses primitive actions that last one time step and that can be modeled independently of the learning agent. These can be generalized to macro actions, multi-step actions specified by an arbitrary policy and a way of completing. Macro actions generalize the classical notion of a macro operator in that they are closed loop, uncertain, and of variable duration. Macro actions are needed to represent common-sense higher-level actions such as going to lunch, grasping an object, or traveling to a distant city. This paper generalizes prior work on temporally abstract models (Sutton 1995) and extends it from the prediction setting to include actions, control, and planning. We define a semantics of models of macro actions that guarantees the validity of planning using such models. This paper present new results in the theory of planning with macro actions and illustrates its potential advantages in a gridworld task. ",
"neighbors": [
566,
1192,
1954,
2179,
2305
],
"mask": "Test"
},
{
"node_id": 322,
"label": 6,
"text": "Title: Statistical Tests for Comparing Supervised Classification Learning Algorithms \nAbstract: This paper reviews five statistical tests for determining whether one learning algorithm outperforms another on a particular learning task. These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (type 1 error). Two widely-used statistical tests are shown to have high probability of Type I error in certain situations and should never be used. These tests are (a) a test for the difference of two proportions and (b) a paired-differences t test based on taking several random train/test splits. A third test, a paired-differences t test based on 10-fold cross-validation, exhibits somewhat elevated probability of Type I error. A fourth test, McNemar's test, is shown to have low Type I error. The fifth test is a new test, 5x2cv, based on 5 iterations of 2-fold cross-validation. Experiments show that this test also has good Type I error. The paper also measures the power (ability to detect algorithm differences when they do exist) of these tests. The 5x2cv test is shown to be slightly more powerful than McNemar's test. The choice of the best test is determined by the computational cost of running the learning algorithm. For algorithms that can be executed only once, McNemar's test is the only test with acceptable Type I error. For algorithms that can be executed ten times, the 5x2cv test is recommended, because it is slightly more powerful and because it directly measures variation due to the choice of training set. ",
"neighbors": [
15,
160,
967,
1027,
1644,
2508
],
"mask": "Train"
},
{
"node_id": 323,
"label": 6,
"text": "Title: Learning Active Classifiers \nAbstract: Many classification algorithms are \"passive\", in that they assign a class-label to each instance based only on the description given, even if that description is incomplete. In contrast, an active classifier can | at some cost | obtain the values of missing attributes, before deciding upon a class label. The expected utility of using an active classifier depends on both the cost required to obtain the additional attribute values and the penalty incurred if it outputs the wrong classification. This paper considers the problem of learning near-optimal active classifiers, using a variant of the probably-approximately-correct (PAC) model. After defining the framework | which is perhaps the main contribution of this paper | we describe a situation where this task can be achieved efficiently, but then show that the task is often intractable. ",
"neighbors": [
140,
228,
1322,
2467,
2560
],
"mask": "Test"
},
{
"node_id": 324,
"label": 3,
"text": "Title: BUCKET ELIMINATION: A UNIFYING FRAMEWORK FOR PROBABILISTIC INFERENCE \nAbstract: Probabilistic inference algorithms for belief updating, finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility are reformulated within the bucket elimination framework. This emphasizes the principles common to many of the algorithms appearing in the probabilistic inference literature and clarifies the relationship of such algorithms to nonserial dynamic programming algorithms. A general method for combining conditioning and bucket elimination is also presented. For all the algorithms, bounds on complexity are given as a function of the problem's structure. ",
"neighbors": [
62,
185,
278,
326,
327,
332,
389
],
"mask": "Test"
},
{
"node_id": 325,
"label": 3,
"text": "Title: Bayesian Model Selection in Social Research (with Discussion by \nAbstract: 1 This article will be published in Sociological Methodology 1995, edited by Peter V. Marsden, Cambridge, Mass.: Blackwells. Adrian E. Raftery is Professor of Statistics and Sociology, Department of Sociology, DK-40, University of Washington, Seattle, WA 98195. This research was supported by NIH grant no. 5R01HD26330. I would like to thank Robert Hauser, Michael Hout, Steven Lewis, Scott Long, Diane Lye, Peter Marsden, Bruce Western, Yu Xie and two anonymous reviewers for detailed comments on an earlier version. I am also grateful to Clem Brooks, Sir David Cox, Tom DiPrete, John Goldthorpe, David Grusky, Jennifer Hoeting, Robert Kass, David Madigan, Michael Sobel and Chris Volinsky for helpful discussions and correspondence. ",
"neighbors": [
211,
1201
],
"mask": "Test"
},
{
"node_id": 326,
"label": 3,
"text": "Title: Topological Parameters for time-space tradeoff \nAbstract: In this paper we propose a family of algorithms combining tree-clustering with conditioning that trade space for time. Such algorithms are useful for reasoning in probabilistic and deterministic networks as well as for accomplishing optimization tasks. By analyzing the problem structure it will be possible to select from a spectrum the algorithm that best meets a given time-space specifica tion.",
"neighbors": [
278,
324,
327
],
"mask": "Train"
},
{
"node_id": 327,
"label": 3,
"text": "Title: Global Conditioning for Probabilistic Inference in Belief Networks \nAbstract: In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt and others 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully.",
"neighbors": [
324,
326,
945,
1532,
1899
],
"mask": "Test"
},
{
"node_id": 328,
"label": 2,
"text": "Title: Associative memory using action potential timing \nAbstract: The dynamics and collective properties of feedback networks with spiking neurons are investigated. Special emphasis is given to the potential computational role of subthreshold oscillations. It is shown that model systems with integrate-and-fire neurons can function as associative memories on two distinct levels. On the first level, binary patterns are represented by the spike activity | \"to fire or not to fire.\" On the second level, analog patterns are encoded in the relative firing times between individual spikes or between spikes and an underlying subthreshold oscillation. Both coding schemes may coexist within the same network. The results suggest that cortical neurons may perform a broad spectrum of associative computations far beyond the scope of the traditional firing-rate picture. ",
"neighbors": [
747,
2619
],
"mask": "Validation"
},
{
"node_id": 329,
"label": 1,
"text": "Title: Simple Subpopulation Schemes \nAbstract: This paper considers a new method for maintaining diversity by creating subpopulations in a standard generational evolutionary algorithm. Unlike other methods, it replaces the concept of distance between individuals with tag bits that identify the subpopulation to which an individual belongs. Two variations of this method are presented, illustrating the feasibility of this approach. ",
"neighbors": [
237
],
"mask": "Test"
},
{
"node_id": 330,
"label": 2,
"text": "Title: Local Feature Analysis: A general statistical theory for object representation \nAbstract: This paper considers a new method for maintaining diversity by creating subpopulations in a standard generational evolutionary algorithm. Unlike other methods, it replaces the concept of distance between individuals with tag bits that identify the subpopulation to which an individual belongs. Two variations of this method are presented, illustrating the feasibility of this approach. ",
"neighbors": [
354,
359,
576,
731,
747
],
"mask": "Train"
},
{
"node_id": 331,
"label": 2,
"text": "Title: From Data Distributions to Regularization in Invariant Learning \nAbstract: Ideally pattern recognition machines provide constant output when the inputs are transformed under a group G of desired invariances. These invariances can be achieved by enhancing the training data to include examples of inputs transformed by elements of G, while leaving the corresponding targets unchanged. Alternatively the cost function for training can include a regularization term that penalizes changes in the output when the input is transformed under the group. This paper relates the two approaches, showing precisely the sense in which the regularized cost function approximates the result of adding transformed (or distorted) examples to the training data. The cost function for the enhanced training set is equivalent to the sum of the original cost function plus a regularizer. For unbiased models, the regularizer reduces to the intuitively obvious choice - a term that penalizes changes in the output when the inputs are transformed under the group. For infinitesimal transformations, the coefficient of the regularization term reduces to the variance of the distortions introduced into the training data. This correspondence provides a simple bridge between the two approaches. ",
"neighbors": [
101,
179,
774
],
"mask": "Validation"
},
{
"node_id": 332,
"label": 3,
"text": "Title: Exploiting Causal Independence in Bayesian Network Inference \nAbstract: A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as or, sum or max, on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.",
"neighbors": [
62,
185,
324,
389,
1064
],
"mask": "Train"
},
{
"node_id": 333,
"label": 4,
"text": "Title: A Comparison of Action Selection Learning Methods \nAbstract: Our goal is to develop a hybrid cognitive model of how humans acquire skills on complex cognitive tasks. We are pursuing this goal by designing hybrid computational architectures for the NRL Navigation task, which requires competent senso-rimotor coordination. In this paper, we empirically compare two methods for control knowledge acquisition (reinforcement learning and a novel variant of action models), as well as a hybrid of these methods, with human learning on this task. Our results indicate that the performance of our action models approach more closely approximates the rate of human learning on the task than does reinforcement learning or the hybrid. We also experimentally explore the impact of background knowledge on system performance. By adding knowledge used by the action models system to the benchmark reinforcement learner, we elevate its performance above that of the action models system. ",
"neighbors": [
463,
477,
565,
566
],
"mask": "Train"
},
{
"node_id": 334,
"label": 6,
"text": "Title: Improved Noise-Tolerant Learning and Generalized Statistical Queries \nAbstract: The statistical query learning model can be viewed as a tool for creating (or demonstrating the existence of) noise-tolerant learning algorithms in the PAC model. The complexity of a statistical query algorithm, in conjunction with the complexity of simulating SQ algorithms in the PAC model with noise, determine the complexity of the noise-tolerant PAC algorithms produced. Although roughly optimal upper bounds have been shown for the complexity of statistical query learning, the corresponding noise-tolerant PAC algorithms are not optimal due to inefficient simulations. In this paper we provide both improved simulations and a new variant of the statistical query model in order to overcome these inefficiencies. We improve the time complexity of the classification noise simulation of statistical query algorithms. Our new simulation has a roughly optimal dependence on the noise rate. We also derive a simpler proof that statistical queries can be simulated in the presence of classification noise. This proof makes fewer assumptions on the queries themselves and therefore allows one to simulate more general types of queries. We also define a new variant of the statistical query model based on relative error, and we show that this variant is more natural and strictly more powerful than the standard additive error model. We demonstrate efficient PAC simulations for algorithms in this new model and give general upper bounds on both learning with relative error statistical queries and PAC simulation. We show that any statistical query algorithm can be simulated in the PAC model with malicious errors in such a way that the resultant PAC algorithm has a roughly optimal tolerable malicious error rate and sample complexity. Finally, we generalize the types of queries allowed in the statistical query model. We discuss the advantages of allowing these generalized queries and show that our results on improved simulations also hold for these queries. This paper is available from the Center for Research in Computing Technology, Division of Applied Sciences, Harvard University as technical report TR-17-94. ",
"neighbors": [
20,
267
],
"mask": "Test"
},
{
"node_id": 335,
"label": 5,
"text": "Title: Incremental Reduced Error Pruning \nAbstract: This paper outlines some problems that may occur with Reduced Error Pruning in relational learning algorithms, most notably efficiency. Thereafter a new method, Incremental Reduced Error Pruning, is proposed that attempts to address all of these problems. Experiments show that in many noisy domains this method is much more efficient than alternative algorithms, along with a slight gain in accuracy. However, the experiments show as well that the use of the algorithm cannot be recommended for domains which require a very specific concept description.",
"neighbors": [
344,
378,
426,
585
],
"mask": "Train"
},
{
"node_id": 336,
"label": 2,
"text": "Title: PREENS Tutorial How to use tools and NN simulations \nAbstract: This report contains a description about how to use PREENS: its tools, convis and its neural network simulation programs. It does so by using several sample sessions. For more technical details, I refer to the convis technical description. ",
"neighbors": [
747
],
"mask": "Train"
},
{
"node_id": 337,
"label": 2,
"text": "Title: Meter as Mechanism: A Neural Network that Learns Metrical Patterns \nAbstract: One kind of prosodic structure that apparently underlies both music and language is meter. Yet detailed measurements of both music and speech show that the nested periodicities that define metrical structure are noisy in some sense. What kind of system could produce or perceive such variable metrical timing? And what would it take to store particular metrical patterns in the long-term memory of the system? We have developed a network of coupled oscillators that both produces and perceives metrical patterns of pulses. In addition, beginning with an initial state with no biases, it learns to prefer 3-beat patterns (like waltzes) over 2-beat patterns. Models of this general class could learn to entrain to musical patterns. And given a way to process speech to extract appropriate pulses, the model should be applicable to metrical structure in speech as well. Is language metrical? Meter refers both to particular sorts of patterns in time and to an abstract description of such patterns, potentially a cognitive representation of them. In both cases there are two or more hierarchical levels at which equally spaced events occur, and the periods characterizing these levels are integral multiples of each other (usually 2 or 3). The hierarchy is implied in standard Western musical notation, where the different levels are indicated by kinds of notes (quarter notes, half notes, etc.) and by bars separating measures. For example, in a basic waltz-time meter, there are individual beats, all with the same spacing, grouped into sets of three with every third one receiving a stronger accent. In such a meter, there is a hierarchy consisting of both a faster periodic cycle (at the beat level) and a slower one (the measure level) that is 1/3 as fast with its onset (or zero phase angle) coinciding with the zero phase angle of every third beat. Metrical systems like this seem to underlie most forms of music around the world and are often said to underlie human speech as well (Jones, 1932; Martin, 1972). However, an awkward difficulty is that the definition employs the notion of an integer since data on both music and speech show clearly that the perfect temporal ratios predicted by such a definition are not observed in performance. In music performance, various kinds of systematic temporal deviations in the timing specified by musical notation are known to ",
"neighbors": [
77,
143,
346,
363
],
"mask": "Validation"
},
{
"node_id": 338,
"label": 2,
"text": "Title: Knowledge Integration and Rule Extraction in Neural Networks Ph.D. Proposal \nAbstract: One kind of prosodic structure that apparently underlies both music and language is meter. Yet detailed measurements of both music and speech show that the nested periodicities that define metrical structure are noisy in some sense. What kind of system could produce or perceive such variable metrical timing? And what would it take to store particular metrical patterns in the long-term memory of the system? We have developed a network of coupled oscillators that both produces and perceives metrical patterns of pulses. In addition, beginning with an initial state with no biases, it learns to prefer 3-beat patterns (like waltzes) over 2-beat patterns. Models of this general class could learn to entrain to musical patterns. And given a way to process speech to extract appropriate pulses, the model should be applicable to metrical structure in speech as well. Is language metrical? Meter refers both to particular sorts of patterns in time and to an abstract description of such patterns, potentially a cognitive representation of them. In both cases there are two or more hierarchical levels at which equally spaced events occur, and the periods characterizing these levels are integral multiples of each other (usually 2 or 3). The hierarchy is implied in standard Western musical notation, where the different levels are indicated by kinds of notes (quarter notes, half notes, etc.) and by bars separating measures. For example, in a basic waltz-time meter, there are individual beats, all with the same spacing, grouped into sets of three with every third one receiving a stronger accent. In such a meter, there is a hierarchy consisting of both a faster periodic cycle (at the beat level) and a slower one (the measure level) that is 1/3 as fast with its onset (or zero phase angle) coinciding with the zero phase angle of every third beat. Metrical systems like this seem to underlie most forms of music around the world and are often said to underlie human speech as well (Jones, 1932; Martin, 1972). However, an awkward difficulty is that the definition employs the notion of an integer since data on both music and speech show clearly that the perfect temporal ratios predicted by such a definition are not observed in performance. In music performance, various kinds of systematic temporal deviations in the timing specified by musical notation are known to ",
"neighbors": [
627
],
"mask": "Validation"
},
{
"node_id": 339,
"label": 3,
"text": "Title: Abduction as Belief Revision \nAbstract: We propose a model of abduction based on the revision of the epistemic state of an agent. Explanations must be sufficient to induce belief in the sentence to be explained (for instance, some observation), or ensure its consistency with other beliefs, in a manner that adequately accounts for factual and hypothetical sentences. Our model will generate explanations that nonmonotonically predict an observation, thus generalizing most current accounts, which require some deductive relationship between explanation and observation. It also provides a natural preference ordering on explanations, defined in terms of normality or plausibility. To illustrate the generality of our approach, we reconstruct two of the key paradigms for model-based diagnosis, abductive and consistency-based diagnosis, within our framework. This reconstruction provides an alternative semantics for both and extends these systems to accommodate our predictive explanations and semantic preferences on explanations. It also illustrates how more general information can be incorporated in a principled manner. fl Some parts of this paper appeared in preliminary form as Abduction as Belief Revision: A Model of Preferred Explanations, Proc. of Eleventh National Conf. on Artificial Intelligence (AAAI-93), Washington, DC, pp.642-648 (1993). ",
"neighbors": [
270,
342,
495,
1549,
1602
],
"mask": "Train"
},
{
"node_id": 340,
"label": 2,
"text": "Title: Best Probability of Activation and Performance Comparisons for Several Designs of Sparse Distributed Memory \nAbstract: Report R95:09 ISRN : SICS-R-95/09-SE ISSN : 0283-3638 Abstract The optimal probability of activation and the corresponding performance is studied for three designs of Sparse Distributed Memory, namely, Kanerva's original design, Jaeckel's selected-coordinates design and Karlsson's modifi - cation of Jaeckel's design. We will assume that the hard locations (in Karlsson's case, the masks), the storage addresses and the stored data are randomly chosen, and we will consider different levels of random noise in the reading address. ",
"neighbors": [
341,
529,
709
],
"mask": "Train"
},
{
"node_id": 341,
"label": 2,
"text": "Title: Some Comments on the Information Stored in Sparse Distributed Memory \nAbstract: Report R95:11 ISRN : SICS-R--95/11-SE ISSN : 0283-3638 Abstract We consider a sparse distributed memory with randomly chosen hard locations, in which an unknown number T of random data vectors have been stored. A method is given to estimate T from the content of the memory with high accuracy. In fact, our estimate is unbiased, the coefficient of variation being roughly inversely proportional to p MU , where M is the number of hard locations in the memory and U the length of data, so the accuracy can be made arbitrarily high by making the memory big enough. A consequence of this is that the good reading methods in [5] and [6] can be used without any need for the special extra location introduced there. ",
"neighbors": [
340,
529,
709
],
"mask": "Train"
},
{
"node_id": 342,
"label": 3,
"text": "Title: Rank-based systems: A simple approach to belief revision, belief update, and reasoning about evidence and actions. \nAbstract: We describe a ranked-model semantics for if-then rules admitting exceptions, which provides a coherent framework for many facets of evidential and causal reasoning. Rule priorities are automatically extracted form the knowledge base to facilitate the construction and retraction of plausible beliefs. To represent causation, the formalism incorporates the principle of Markov shielding which imposes a stratified set of independence constraints on rankings of interpretations. We show how this formalism resolves some classical problems associated with specificity, prediction and abduction, and how it offers a natural way of unifying belief revision, belief update, and reasoning about actions.",
"neighbors": [
270,
276,
339,
467,
495,
729,
776,
1800,
1945,
1993,
2016,
2546
],
"mask": "Train"
},
{
"node_id": 343,
"label": 1,
"text": "Title: A Promising genetic Algorithm Approach to Job-Shop Scheduling, Rescheduling, and Open-Shop Scheduling Problems \nAbstract: We describe a ranked-model semantics for if-then rules admitting exceptions, which provides a coherent framework for many facets of evidential and causal reasoning. Rule priorities are automatically extracted form the knowledge base to facilitate the construction and retraction of plausible beliefs. To represent causation, the formalism incorporates the principle of Markov shielding which imposes a stratified set of independence constraints on rankings of interpretations. We show how this formalism resolves some classical problems associated with specificity, prediction and abduction, and how it offers a natural way of unifying belief revision, belief update, and reasoning about actions.",
"neighbors": [
530,
1098,
1274,
1303,
1523,
1571,
1577
],
"mask": "Test"
},
{
"node_id": 344,
"label": 5,
"text": "Title: Quinlan, 1990 J.R. Quinlan. Learning logical definitions from relations. Machine Learning, First-order theory revision. In\nAbstract: We describe a ranked-model semantics for if-then rules admitting exceptions, which provides a coherent framework for many facets of evidential and causal reasoning. Rule priorities are automatically extracted form the knowledge base to facilitate the construction and retraction of plausible beliefs. To represent causation, the formalism incorporates the principle of Markov shielding which imposes a stratified set of independence constraints on rankings of interpretations. We show how this formalism resolves some classical problems associated with specificity, prediction and abduction, and how it offers a natural way of unifying belief revision, belief update, and reasoning about actions.",
"neighbors": [
1,
316,
335,
348,
521,
675,
963,
1007,
1244,
1260,
1267,
1275,
1312,
1442,
1445,
1622,
1627,
1671,
1881,
2032,
2171,
2215,
2229,
2290,
2291,
2339,
2426,
2441,
2589,
2609,
2617
],
"mask": "Train"
},
{
"node_id": 345,
"label": 3,
"text": "Title: On Convergence Properties of the EM Algorithm for Gaussian Mixtures \nAbstract: We build up the mathematical connection between the \"Expectation-Maximization\" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P , and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00000-00-A-0000. The authors were also supported by the HK RGC Earmarked Grant CUHK250/94E, by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-1-0777 from the Office of Naval Research. Michael I. Jordan is an NSF Presidential Young Investigator. ",
"neighbors": [
74,
76,
117,
261,
622,
1924,
2266,
2389,
2421
],
"mask": "Train"
},
{
"node_id": 346,
"label": 2,
"text": "Title: PERCEPTION OF TIME AS PHASE: TOWARD AN ADAPTIVE-OSCILLATOR MODEL OF RHYTHMIC PATTERN PROCESSING 1 \nAbstract: We build up the mathematical connection between the \"Expectation-Maximization\" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P , and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00000-00-A-0000. The authors were also supported by the HK RGC Earmarked Grant CUHK250/94E, by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-1-0777 from the Office of Naval Research. Michael I. Jordan is an NSF Presidential Young Investigator. ",
"neighbors": [
132,
143,
163,
337,
363
],
"mask": "Train"
},
{
"node_id": 347,
"label": 3,
"text": "Title: A Reference Bayesian Test for Nested Hypotheses And its Relationship to the Schwarz Criterion \nAbstract: We build up the mathematical connection between the \"Expectation-Maximization\" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P , and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of P and provide new results analyzing the effect that P has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00000-00-A-0000. The authors were also supported by the HK RGC Earmarked Grant CUHK250/94E, by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, and by grant N00014-90-1-0777 from the Office of Naval Research. Michael I. Jordan is an NSF Presidential Young Investigator. ",
"neighbors": [
84,
452,
713,
998,
999
],
"mask": "Train"
},
{
"node_id": 348,
"label": 5,
"text": "Title: First Order Regression: Applications in Real-World Domains \nAbstract: A first order regression algorithm capable of handling real-valued (continuous) variables is introduced and some of its applications are presented. Regressional learning assumes real-valued class and discrete or real-valued variables. The algorithm combines regressional learning with standard ILP concepts, such as first order concept description and background knowledge. A clause is generated by successively refining the initial clause by adding literals of the form A = v for the discrete attributes, A v and A v for the real-valued attributes, and background knowledge literals to the clause body. The algorithm employs a covering approach (beam search), a heuristic impurity function, and stopping criteria based on local improvement, minimum number of examples, maximum clause length, minimum local improvement, minimum description length, allowed error, and variable depth. An outline of the algorithm and the results of the system's application in some artificial and real-world domains are presented. The real-world domains comprise: modelling of the water behavior in a surge tank, modelling of the workpiece roughness in a steel grinding process and modelling of the operator's behavior during the process of electrical discharge machining. Special emphasis is given to the evaluation of obtained models by domain experts and their comments on the aspects of practical use of the induced knowledge. The results obtained during the knowledge acquisition process show several important guidelines for knowledge acquisition, concerning mainly the process of interaction with domain experts, exposing primarily the importance of comprehensibility of the induced knowledge.",
"neighbors": [
344,
638,
1244,
1596
],
"mask": "Test"
},
{
"node_id": 349,
"label": 2,
"text": "Title: Measuring Organization and Asymmetry in Bihemispheric Topographic Maps \nAbstract: We address the problem of measuring the degree of hemispheric organization and asymmetry of organization in a computational model of a bihemispheric cerebral cortex. A theoretical framework for such measures is developed and used to produce algorithms for measuring the degree of organization, symmetry, and lateralization in topographic map formation. The performance of the resulting measures is tested for several topographic maps obtained by self-organization of an initially random network, and the results are compared with subjective assessments made by humans. It is found that the closest agreement with the human assessments is obtained by using organization measures based on sigmoid-type error averaging. Measures are developed which correct for large constant displacements as well as curving of the hemispheric topographic maps. ",
"neighbors": [
747
],
"mask": "Validation"
},
{
"node_id": 350,
"label": 2,
"text": "Title: Induction of Multiscale Temporal Structure \nAbstract: Learning structure in temporally-extended sequences is a difficult computational problem because only a fraction of the relevant information is available at any instant. Although variants of back propagation can in principle be used to find structure in sequences, in practice they are not sufficiently powerful to discover arbitrary contingencies, especially those spanning long temporal intervals or involving high order statistics. For example, in designing a connectionist network for music composition, we have encountered the problem that the net is able to learn musical structure that occurs locally in time|e.g., relations among notes within a musical phrase|but not structure that occurs over longer time periods|e.g., relations among phrases. To address this problem, we require a means of constructing a reduced description of the sequence that makes global aspects more explicit or more readily detectable. I propose to achieve this using hidden units that operate with different time constants. Simulation experiments indicate that slower time-scale hidden units are able to pick up global structure, structure that simply can not be learned by standard Many patterns in the world are intrinsically temporal, e.g., speech, music, the unfolding of events. Recurrent neural net architectures have been devised to accommodate time-varying sequences. For example, the architecture shown in Figure 1 can map a sequence of inputs to a sequence of outputs. Learning structure in temporally-extended sequences is a difficult computational problem because the input pattern may not contain all the task-relevant information at any instant. Thus, back propagation.",
"neighbors": [
111,
180,
201,
351,
427,
664,
730,
770
],
"mask": "Test"
},
{
"node_id": 351,
"label": 2,
"text": "Title: Sequence Learning with Incremental Higher-Order Neural Networks \nAbstract: An incremental, higher-order, non-recurrent neural-network combines two properties found to be useful for sequence learning in neural-networks: higher-order connections and the incremental introduction of new units. The incremental, higher-order neural-network adds higher orders when needed by adding new units that dynamically modify connection weights. The new units modify the weights at the next time-step with information from the previous step. Since a theoretically unlimited number of units can be added to the network, information from the arbitrarily distant past can be brought to bear on each prediction. Temporal tasks can thereby be learned without the use of feedback, in contrast to recurrent neural-networks. Because there are no recurrent connections, training is simple and fast. Experiments have demonstrated speedups of two orders of magnitude over recurrent networks.",
"neighbors": [
350,
730
],
"mask": "Train"
},
{
"node_id": 352,
"label": 3,
"text": "Title: Convergence controls for MCMC algorithms, with applications to hidden Markov chains \nAbstract: In complex models like hidden Markov chains, the convergence of the MCMC algorithms used to approximate the posterior distribution and the Bayes estimates of the parameters of interest must be controlled in a robust manner. We propose in this paper a series of on-line controls, which rely on classical non-parametric tests, to evaluate independence from the start-up distribution, stability of the Markov chain, and asymptotic normality. These tests lead to graphical control spreadsheets which are presented in the set-up of normal mixture hidden Markov chains to compare the full Gibbs sampler with an aggregated Gibbs sampler based on the forward-backward formulae. ",
"neighbors": [
41,
904,
1372
],
"mask": "Train"
},
{
"node_id": 353,
"label": 2,
"text": "Title: Application of Neural Networks for the Classification of Diffuse Liver Disease by Quantitative Echography \nAbstract: Three different methods were investigated to determine their ability to detect and classify various categories of diffuse liver disease. A statistical method, i.e., discriminant analysis, a supervised neural network called backpropagation and a nonsupervised, self-organizing feature map were examined. The investigation was performed on the basis of a previously selected set of acoustic and image texture parameters. The limited number of patients was successfully extended by generating additional but independent data with identical statistical properties. The generated data were used for training and test sets. The final test was made with the original patient data as a validation set. It is concluded that neural networks are an attractive alternative to traditional statistical techniques when dealing with medical detection and classification tasks. Moreover, the use of generated data for training the networks and the discriminant classifier has been shown to be justified and profitable. ",
"neighbors": [
427,
747
],
"mask": "Train"
},
{
"node_id": 354,
"label": 2,
"text": "Title: Principal and Independent Components in Neural Networks Recent Developments \nAbstract: Nonlinear extensions of one-unit and multi-unit Principal Component Analysis (PCA) neural networks, introduced earlier by the authors, are reviewed. The networks and their nonlinear Hebbian learning rules are related to other signal expansions like Projection Pursuit (PP) and Independent Component Analysis (ICA). Separation results for mixtures of real world signals and im ages are given.",
"neighbors": [
330,
576,
839,
1072,
1520
],
"mask": "Train"
},
{
"node_id": 355,
"label": 2,
"text": "Title: Generalization and Exclusive Allocation of Credit in Unsupervised Category Learning \nAbstract: Acknowledgements: This research was supported in part by the Office of Naval Research (Cognitive and Neural Sciences, N00014-93-1-0208) and by the Whitaker Foundation (Special Opportunity Grant). We thank George Kalarickal, Charles Schmitt, William Ross, and Douglas Kelly for valuable discussions. ",
"neighbors": [
576,
745,
1093,
1094,
1562,
2068
],
"mask": "Train"
},
{
"node_id": 356,
"label": 2,
"text": "Title: A Flexible Model For Human Circadian Rhythms \nAbstract: Many hormones and other physiological processes vary in a circadian pattern. Although a sine/cosine function can be used to model these patterns, this functional form is not appropriate when there is asymmetry between the peak and nadir phases. In this paper we describe a semi-parametric periodic spline function that can be fit to circadian rhythms. The model includes both phase and amplitude so that the time and the magnitude of the peak or nadir can be estimated. We also describe tests of fit for components in the model. Data from an experiment to study immunological responses in humans are used to demonstrate the methods. ",
"neighbors": [
190,
510
],
"mask": "Validation"
},
{
"node_id": 357,
"label": 1,
"text": "Title: Genetic Algorithms as Multi-Coordinators in Large-Scale Optimization \nAbstract: We present high-level, decomposition-based algorithms for large-scale block-angular optimization problems containing integer variables, and demonstrate their effectiveness in the solution of large-scale graph partitioning problems. These algorithms combine the subproblem-coordination paradigm (and lower bounds) of price-directive decomposition methods with knapsack and genetic approaches to the utilization of \"building blocks\" of partial solutions. Even for graph partitioning problems requiring billions of variables in a standard 0-1 formulation, this approach produces high-quality solutions (as measured by deviations from an easily computed lower bound), and substantially outperforms widely-used graph partitioning techniques based on heuristics and spectral methods.",
"neighbors": [
243,
537,
803,
1563,
2089
],
"mask": "Train"
},
{
"node_id": 358,
"label": 3,
"text": "Title: Hierarchical Spatio-Temporal Mapping of Disease Rates \nAbstract: Maps of regional morbidity and mortality rates are useful tools in determining spatial patterns of disease. Combined with socio-demographic census information, they also permit assessment of environmental justice, i.e., whether certain subgroups suffer disproportionately from certain diseases or other adverse effects of harmful environmental exposures. Bayes and empirical Bayes methods have proven useful in smoothing crude maps of disease risk, eliminating the instability of estimates in low-population areas while maintaining geographic resolution. In this paper we extend existing hierarchical spatial models to account for temporal effects and spatio-temporal interactions. Fitting the resulting highly-parametrized models requires careful implementation of Markov chain Monte Carlo (MCMC) methods, as well as novel techniques for model evaluation and selection. We illustrate our approach using a dataset of county-specific lung cancer rates in the state of Ohio during the period 1968-1988. ",
"neighbors": [
95,
1255
],
"mask": "Train"
},
{
"node_id": 359,
"label": 2,
"text": "Title: Feature Extraction Using an Unsupervised Neural Network \nAbstract: A novel unsupervised neural network for dimensionality reduction that seeks directions emphasizing multimodality is presented, and its connection to exploratory projection pursuit methods is discussed. This leads to a new statistical insight into the synaptic modification equations governing learning in Bienenstock, Cooper, and Munro (BCM) neurons (1982). The importance of a dimensionality reduction principle based solely on distinguishing features is demonstrated using a phoneme recognition experiment. The extracted features are compared with features extracted using a back-propagation network.",
"neighbors": [
203,
330,
808,
1068,
1320,
1342,
1787,
1871,
2322,
2376,
2422,
2498,
2499,
2505,
2567
],
"mask": "Train"
},
{
"node_id": 360,
"label": 2,
"text": "Title: Investigating the Value of a Good Input Representation \nAbstract: This paper is reprinted from Computational Learning Theory and Natural Learning Systems, vol. 3, T. Petsche, S. Judd, and S. Hanson, (eds.), forthcoming 1995. Copyrighted 1995 by MIT Press Abstract The ability of an inductive learning system to find a good solution to a given problem is dependent upon the representation used for the features of the problem. A number of factors, including training-set size and the ability of the learning algorithm to perform constructive induction, can mediate the effect of an input representation on the accuracy of a learned concept description. We present experiments that evaluate the effect of input representation on generalization performance for the real-world problem of finding genes in DNA. Our experiments that demonstrate that: (1) two different input representations for this task result in significantly different generalization performance for both neural networks and decision trees; and (2) both neural and symbolic methods for constructive induction fail to bridge the gap between these two representations. We believe that this real-world domain provides an interesting challenge problem for the machine learning subfield of constructive induction because the relationship between the two representations is well known, and because conceptually, the representational shift involved in constructing the better representation should not be too imposing. ",
"neighbors": [
151,
698
],
"mask": "Train"
},
{
"node_id": 361,
"label": 1,
"text": "Title: Overview of Selection Schemes and a Suggested Classification \nAbstract: In this paper we emphasize the role of selection in evolutionary algorithms. We briefly review some of the most common selection schemes from the fields of Genetic Algorithms, Evolution Strategies and Genetic Programming. However we do not classify selection schemes according to which group of evolutionary algorithm they belong to, but rather distinguish between parent selection schemes, global competition and replacement schemes, and local competition and replacement schemes. This paper does not intend to fully review and analyse each of the presented selection schemes but tries to be a short reference for standard and some more advanced selection schemes. ",
"neighbors": [
55
],
"mask": "Train"
},
{
"node_id": 362,
"label": 2,
"text": "Title: Learning Topology-Preserving Maps Using Self-Supervised Backpropagation \nAbstract: Self-supervised backpropagation is an unsupervised learning procedure for feedforward networks, where the desired output vector is identical with the input vector. For backpropagation, we are able to use powerful simulators running on parallel machines. Topology-preserving maps, on the other hand, can be developed by a variant of the competitive learning procedure. However, in a degenerate case, self-supervised backpropagation is a version of competitive learning. A simple extension of the cost function of backpropagation leads to a competitive version of self-supervised backpropagation, which can be used to produce topographic maps. We demonstrate the approach applied to the Traveling Salesman Problem (TSP). ",
"neighbors": [
747,
2191
],
"mask": "Train"
},
{
"node_id": 363,
"label": 2,
"text": "Title: Representing Rhythmic Patterns in a Network of Oscillators \nAbstract: This paper describes an evolving computational model of the perception and production of simple rhythmic patterns. The model consists of a network of oscillators of different resting frequencies which couple with input patterns and with each other. Oscillators whose frequencies match periodicities in the input tend to become activated. Metrical structure is represented explicitly in the network in the form of clusters of oscillators whose frequencies and phase angles are constrained to maintain the harmonic relationships that characterize meter. Rests in rhythmic patterns are represented by explicit rest oscillators in the network, which become activated when an expected beat in the pattern fails to appear. The model makes predictions about the relative difficulty of The nested periodicity that defines musical, and probably also linguistic, meter appears to be fundamental to the way in which people perceive and produce patterns in time. Meter by itself, however, is not sufficient to describe patterns which are interesting or memorable because of how they deviate from the metrical hierarchy. The simplest deviations are rests or gaps where one or more levels in the hierarchy would normally have a beat. When beats are removed at regular intervals which match the period of some level of the metrical hierarchy, we have what we will call a simple rhythmic pattern. Figure 1 shows an example of a simple rhythmic pattern. Below it is a grid representation of the meter which is behind the pattern. patterns and the effect of deviations from periodicity in the input.",
"neighbors": [
143,
337,
346
],
"mask": "Validation"
},
{
"node_id": 364,
"label": 2,
"text": "Title: Radial Basis Functions: L p -approximation orders with scattered centres \nAbstract: In this paper we generalize several results on uniform approximation orders with radial basis functions in (Buhmann, Dyn and Levin, 1993) and (Dyn and Ron, 1993) to L p -approximation orders. These results apply, in particular, to approximants from spaces spanned by translates of radial basis functions by scattered centres. Examples to which our results apply include quasi-interpolation and least-squares approximation from radial function spaces.",
"neighbors": [
365,
366,
590
],
"mask": "Train"
},
{
"node_id": 365,
"label": 2,
"text": "Title: Radial basis function approximation: from gridded centers to scattered centers \nAbstract: The paper studies L 1 (IR d )-norm approximations from a space spanned by a discrete set of translates of a basis function . Attention here is restricted to functions whose Fourier transform is smooth on IR d n0, and has a singularity at the origin. Examples of such basis functions are the thin-plate splines and the multiquadrics, as well as other types of radial basis functions that are employed in Approximation Theory. The above approximation problem is well-understood in case the set of points ffi used for translating forms a lattice in IR d , and many optimal and quasi-optimal approximation schemes can already be found in the literature. In contrast, only few, mostly specific, results are known for a set ffi of scattered points. The main objective of this paper is to provide a general tool for extending approximation schemes that use integer translates of a basis function to the non-uniform case. We introduce a single, relatively simple, conversion method that preserves the approximation orders provided by a large number of schemes presently in the literature (more precisely, to almost all \"stationary schemes\"). In anticipation of future introduction of new schemes for uniform grids, an effort is made to impose only a few mild conditions on the function , which still allow for a unified error analysis to hold. In the course of the discussion here, the recent results of [BuDL] on scattered center approximation are reproduced and improved upon. ",
"neighbors": [
364,
590,
2112,
2572
],
"mask": "Test"
},
{
"node_id": 366,
"label": 2,
"text": "Title: AN UPPER BOUND ON THE APPROXIMATION POWER OF PRINCIPAL SHIFT-INVARIANT SPACES \nAbstract: An upper bound on the L p -approximation power (1 p 1) provided by principal shift-invariant spaces is derived with only very mild assumptions on the generator. It applies to both stationary and non-stationary ladders, and is shown to apply to spaces generated by (exponential) box splines, polyharmonic splines, multiquadrics, and Gauss kernel. ",
"neighbors": [
364,
590
],
"mask": "Validation"
},
{
"node_id": 367,
"label": 4,
"text": "Title: Machine Learning, Explanation-Based Learning and Reinforcement Learning: A Unified View \nAbstract: In speedup-learning problems, where full descriptions of operators are known, both explanation-based learning (EBL) and reinforcement learning (RL) methods can be applied. This paper shows that both methods involve fundamentally the same process of propagating information backward from the goal toward the starting state. Most RL methods perform this propagation on a state-by-state basis, while EBL methods compute the weakest preconditions of operators, and hence, perform this propagation on a region-by-region basis. Barto, Bradtke, and Singh (1995) have observed that many algorithms for reinforcement learning can be viewed as asynchronous dynamic programming. Based on this observation, this paper shows how to develop dynamic programming versions of EBL, which we call region-based dynamic programming or Explanation-Based Reinforcement Learning (EBRL). The paper compares batch and online versions of EBRL to batch and online versions of point-based dynamic programming and to standard EBL. The results show that region-based dynamic programming combines the strengths of EBL (fast learning and the ability to scale to large state spaces) with the strengths of reinforcement learning algorithms (learning of optimal policies). Results are shown in chess endgames and in synthetic maze tasks. ",
"neighbors": [
440,
483,
552,
565
],
"mask": "Train"
},
{
"node_id": 368,
"label": 2,
"text": "Title: Some Extensions of the K-Means Algorithm for Image Segmentation and Pattern Classification \nAbstract: In this paper we present some extensions to the k-means algorithm for vector quantization that permit its efficient use in image segmentation and pattern classification tasks. It is shown that by introducing state variables that correspond to certain statistics of the dynamic behavior of the algorithm, it is possible to find the representative centers of the lower dimensional manifolds that define the boundaries between classes, for clouds of multi-dimensional, multi-class data; this permits one, for example, to find class boundaries directly from sparse data (e.g., in image segmentation tasks) or to efficiently place centers for pattern classification (e.g., with local Gaussian classifiers). The same state variables can be used to define algorithms for determining adaptively the optimal number of centers for clouds of data with space-varying density. Some examples of the application of these extensions are also given. This report describes research done within CIMAT (Guanajuato, Mexico), the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences, and at the Artificial Intelligence Laboratory. This research is sponsored by grants from the Office of Naval Research under contracts N00014-91-J-1270 and N00014-92-J-1879; by a grant from the National Science Foundation under contract ASC-9217041; and by a grant from the National Institutes of Health under contract NIH 2-S07-RR07047. Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ONR contract N00014-91-J-4038. J.L. Marroquin was supported in part by a grant from the Consejo Nacional de Ciencia y Tecnologia, Mexico. ",
"neighbors": [
611,
747
],
"mask": "Test"
},
{
"node_id": 369,
"label": 2,
"text": "Title: Limitations of self-organizing maps for vector quantization and multidimensional scaling \nAbstract: The limitations of using self-organizing maps (SOM) for either clustering/vector quantization (VQ) or multidimensional scaling (MDS) are being discussed by reviewing recent empirical findings and the relevant theory. SOM's remaining ability of doing both VQ and MDS at the same time is challenged by a new combined technique of online K-means clustering plus Sammon mapping of the cluster centroids. SOM are shown to perform significantly worse in terms of quantization error, in recovering the structure of the clusters and in preserving the topology in a comprehensive empirical study using a series of multivariate normal clustering problems.",
"neighbors": [
747
],
"mask": "Train"
},
{
"node_id": 370,
"label": 4,
"text": "Title: Robust Reinforcement Learning in Motion Planning \nAbstract: While exploring to find better solutions, an agent performing online reinforcement learning (RL) can perform worse than is acceptable. In some cases, exploration might have unsafe, or even catastrophic, results, often modeled in terms of reaching `failure' states of the agent's environment. This paper presents a method that uses domain knowledge to reduce the number of failures during exploration. This method formulates the set of actions from which the RL agent composes a control policy to ensure that exploration is conducted in a policy space that excludes most of the unacceptable policies. The resulting action set has a more abstract relationship to the task being solved than is common in many applications of RL. Although the cost of this added safety is that learning may result in a suboptimal solution, we argue that this is an appropriate tradeoff in many problems. We illustrate this method in the domain of motion planning. ",
"neighbors": [
552,
562,
875
],
"mask": "Validation"
},
{
"node_id": 371,
"label": 3,
"text": "Title: Selecting Input Variables Using Mutual Information and Nonparametric Density Estimation \nAbstract: In learning problems where a connectionist network is trained with a finite sized training set, better generalization performance is often obtained when unneeded weights in the network are eliminated. One source of unneeded weights comes from the inclusion of input variables that provide little information about the output variables. We propose a method for identifying and eliminating these input variables. The method first determines the relationship between input and output variables using nonparametric density estimation and then measures the relevance of input variables using the information theoretic concept of mutual information. We present results from our method on a simple toy problem and a nonlinear time series.",
"neighbors": [
88,
157,
2507
],
"mask": "Validation"
},
{
"node_id": 372,
"label": 1,
"text": "Title: Investigating the role of diploidy in simulated populations of evolving individuals \nAbstract: In most work applying genetic algorithms to populations of neural networks there is no real distinction between genotype and phenotype. In nature both the information contained in the genotype and the mapping of the genetic information into the phenotype are usually much more complex. The genotypes of many organisms exhibit diploidy, i.e., they include two copies of each gene: if the two copies are not identical in their sequences and therefore have a functional difference in their products (usually proteins), the expressed phenotypic feature is termed the dominant one, the other one recessive (not expressed). In this paper we review the literature on the use of diploidy and dominance operators in genetic algorithms; we present the new results we obtained with our own simulations in changing environments; finally, we discuss some results of our simulations that parallel biological findings.",
"neighbors": [
38,
273
],
"mask": "Train"
},
{
"node_id": 373,
"label": 5,
"text": "Title: Task Selection for a Multiscalar Processor \nAbstract: The Multiscalar architecture advocates a distributed processor organization and task-level speculation to exploit high degrees of instruction level parallelism (ILP) in sequential programs without impeding improvements in clock speeds. The main goal of this paper is to understand the key implications of the architectural features of distributed processor organization and task-level speculation for compiler task selection from the point of view of performance. We identify the fundamental performance issues to be: control ow speculation, data communication, data dependence speculation, load imbalance, and task overhead. We show that these issues are intimately related to a few key characteristics of tasks: task size, inter-task control ow, and inter-task data dependence. We describe compiler heuristics to select tasks with favorable characteristics. We report experimental results to show that the heuristics are successful in boosting overall performance by establishing larger ILP windows. ",
"neighbors": [
86,
652
],
"mask": "Test"
},
{
"node_id": 374,
"label": 4,
"text": "Title: An Introspection Approach to Querying a Trainer \nAbstract: Technical Report 96-13 January 22, 1996 Abstract This paper introduces the Introspection Approach, a method by which a learning agent employing reinforcement learning can decide when to ask a training agent for instruction. When using our approach, we find that the same number of trainer's responses produced significantly faster learners than by having the learner ask for aid randomly. Guidance received via our approach is more informative than random guidance. Thus, we can reduce the interaction that the training agent has with the learning agent without reducing the speed with which the learner develops its policy. In fact, by being intelligent about when the learner asks for help, we can even increase the learning speed for the same level of trainer interaction. ",
"neighbors": [
455,
552
],
"mask": "Validation"
},
{
"node_id": 375,
"label": 6,
"text": "Title: Constructive Induction Using a Non-Greedy Strategy for Feature Selection \nAbstract: We present a method for feature construction and selection that finds a minimal set of conjunctive features that are appropriate to perform the classification task. For problems where this bias is appropriate, the method outperforms other constructive induction algorithms and is able to achieve higher classification accuracy. The application of the method in the search for minimal multi-level boolean expressions is presented and analyzed with the help of some examples.",
"neighbors": [
635,
638,
836,
1576
],
"mask": "Test"
},
{
"node_id": 376,
"label": 3,
"text": "Title: Bayesian Finite Mixtures for Nonlinear Modeling of Educational data \nAbstract: In this paper we discuss a Bayesian approach for finding latent classes in the data. In our approach we use finite mixture models to describe the underlying structure in the data, and demonstrate that the possibility to use full joint probability models raises interesting new prospects for exploratory data analysis. The concepts and methods discussed are illustrated with a case study using a data set from a recent educational study. The Bayesian classification approach described has been implemented, and presents an appealing addition to the standard toolbox for exploratory data analysis of educational data.",
"neighbors": [
558,
641,
704
],
"mask": "Train"
},
{
"node_id": 377,
"label": 2,
"text": "Title: Constructive Algorithms for Hierarchical Mixtures of Experts \nAbstract: We present two additions to the hierarchical mixture of experts (HME) architecture. We view the HME as a tree structured classifier. Firstly, by applying a likelihood splitting criteria to each expert in the HME we \"grow\" the tree adaptively during training. Secondly, by considering only the most probable path through the tree we may \"prune\" branches away, either temporarily, or permanently if they become redundant. We demonstrate results for the growing and pruning algorithms which show significant speed ups and more efficient use of parameters over the conventional algorithms in discriminating between two interlocking spirals and classifying 8-bit parity patterns.",
"neighbors": [
74,
622
],
"mask": "Train"
},
{
"node_id": 378,
"label": 6,
"text": "Title: Mingers, 1989 J. Mingers. An empirical comparison of pruning methods for decision tree induction. Machine\nAbstract: Ourston and Mooney, 1990b ] D. Ourston and R. J. Mooney. Improving shared rules in multiple category domain theories. Technical Report AI90-150, Artificial Intelligence Labora tory, University of Texas, Austin, TX, December 1990. ",
"neighbors": [
178,
218,
227,
286,
335,
396,
585,
692,
960,
1027,
1061,
1207,
1238,
1275,
1290,
1539,
1644,
1678,
1963,
2012,
2042,
2195,
2290,
2291,
2447,
2583
],
"mask": "Train"
},
{
"node_id": 379,
"label": 5,
"text": "Title: In: Machine Learning, Meta-reasoning and Logics, pp207-232, Learning from Imperfect Data \nAbstract: Systems interacting with real-world data must address the issues raised by the possible presence of errors in the observations it makes. In this paper we first present a framework for discussing imperfect data and the resulting problems it may cause. We distinguish between two categories of errors in data random errors or `noise', and systematic errors and examine their relationship to the task of describing observations in a way which is also useful for helping in future problem-solving and learning tasks. Secondly we proceed to examine some of the techniques currently used in AI research for recognising such errors.",
"neighbors": [
176,
756
],
"mask": "Train"
},
{
"node_id": 380,
"label": 1,
"text": "Title: Fitness Landscapes and Difficulty in Genetic Programming \nAbstract: The structure of the fitness landscape on which genetic programming operates is examined. The landscapes of a range of problems of known difficulty are analyzed in an attempt to determine which landscape measures correlate with the difficulty of the problem. The autocorrelation of the fitness values of random walks, a measure which has been shown to be related to perceived difficulty using other techniques, is only a weak indicator of the difficulty as perceived by genetic programming. All of these problems show unusually low autocorrelation. Comparison of the range of landscape basin depths at the end of adaptive walks on the landscapes shows good correlation with problem difficulty, over the entire range of problems examined. ",
"neighbors": [
163,
188,
934,
1257,
1473,
1474,
1737,
1784,
2196,
2641
],
"mask": "Validation"
},
{
"node_id": 381,
"label": 6,
"text": "Title: Compression-Based Feature Subset Selection Keywords: Minimum Description Length Principle, Cross Validation, Noise \nAbstract: Irrelevant and redundant features may reduce both predictive accuracy and comprehensibility of induced concepts. Most common Machine Learning approaches for selecting a good subset of relevant features rely on cross-validation. As an alternative, we present the application of a particular Minimum Description Length (MDL) measure to the task of feature subset selection. Using the MDL principle allows taking into account all of the available data at once. The new measure is information-theoretically plausible and yet still simple and therefore efficiently computable. We show empirically that this new method for judging the value of feature subsets is more efficient than and performs at least as well as methods based on cross-validation. Domains with both a large number of training examples and a large number of possible features yield the biggest gains in efficiency. Thus our new approach seems to scale up better to large learning problems than previous methods. ",
"neighbors": [
430,
635,
686,
2342
],
"mask": "Train"
},
{
"node_id": 382,
"label": 6,
"text": "Title: Learning Decision Lists Using Homogeneous Rules \nAbstract: A decision list is an ordered list of conjunctive rules (?). Inductive algorithms such as AQ and CN2 learn decision lists incrementally, one rule at a time. Such algorithms face the rule overlap problem | the classification accuracy of the decision list depends on the overlap between the learned rules. Thus, even though the rules are learned in isolation, they can only be evaluated in concert. Existing algorithms solve this problem by adopting a greedy, iterative structure. Once a rule is learned, the training examples that match the rule are removed from the training set. We propose a novel solution to the problem: composing decision lists from homogeneous rules, rules whose classification accuracy does not change with their position in the decision list. We prove that the problem of finding a maximally accurate decision list can be reduced to the problem of finding maximally accurate homogeneous rules. We report on the performance of our algorithm on data sets from the UCI repository and on the MONK's problems. ",
"neighbors": [
29,
1236,
1837,
2132
],
"mask": "Validation"
},
{
"node_id": 383,
"label": 2,
"text": "Title: Constructing Fuzzy Graphs from Examples \nAbstract: Methods to build function approximators from example data have gained considerable interest in the past. Especially methodologies that build models that allow an interpretation have attracted attention. Most existing algorithms, however, are either complicated to use or infeasible for high-dimensional problems. This article presents an efficient and easy to use algorithm to construct fuzzy graphs from example data. The resulting fuzzy graphs are based on locally independent fuzzy rules that operate solely on selected, important attributes. This enables the application of these fuzzy graphs also to problems in high dimensional spaces. Using illustrative examples and a real world data set it is demonstrated how the resulting fuzzy graphs offer quick insights into the structure of the example data, that is, the underlying model. ",
"neighbors": [
87,
631,
638
],
"mask": "Train"
},
{
"node_id": 384,
"label": 2,
"text": "Title: Hidden Markov Model Analysis of Motifs in Steroid Dehydrogenases and their Homologs \nAbstract: Methods to build function approximators from example data have gained considerable interest in the past. Especially methodologies that build models that allow an interpretation have attracted attention. Most existing algorithms, however, are either complicated to use or infeasible for high-dimensional problems. This article presents an efficient and easy to use algorithm to construct fuzzy graphs from example data. The resulting fuzzy graphs are based on locally independent fuzzy rules that operate solely on selected, important attributes. This enables the application of these fuzzy graphs also to problems in high dimensional spaces. Using illustrative examples and a real world data set it is demonstrated how the resulting fuzzy graphs offer quick insights into the structure of the example data, that is, the underlying model. ",
"neighbors": [
14
],
"mask": "Train"
},
{
"node_id": 385,
"label": 4,
"text": "Title: Modeling the Student with Reinforcement Learning \nAbstract: We describe a methodology for enabling an intelligent teaching system to make high level strategy decisions on the basis of low level student modeling information. This framework is less costly to construct, and superior to hand coding teaching strategies as it is more responsive to the learner's needs. In order to accomplish this, reinforcement learning is used to learn to associate superior teaching actions with certain states of the student's knowledge. Reinforcement learning (RL) has been shown to be flexible in handling noisy data, and does not need expert domain knowledge. A drawback of RL is that it often needs a significant number of trials for learning. We propose an off-line learning methodology using sample data, simulated students, and small amounts of expert knowledge to bypass this problem. ",
"neighbors": [
565,
567
],
"mask": "Train"
},
{
"node_id": 386,
"label": 2,
"text": "Title: Temporal Compositional Processing by a DSOM Hierarchical Model \nAbstract: Any intelligent system, whether human or robotic, must be capable of dealing with patterns over time. Temporal pattern processing can be achieved if the system has a short-term memory capacity (STM) so that different representations can be maintained for some time. In this work we propose a neural model wherein STM is realized by leaky integrators in a self-organizing system. The model exhibits composition-ality, that is, it has the ability to extract and construct progressively complex and structured associations in an hierarchical manner, starting with basic and primitive (temporal) elements.",
"neighbors": [
611,
745,
747
],
"mask": "Train"
},
{
"node_id": 387,
"label": 2,
"text": "Title: JUNG ET AL.: ESTIMATING ALERTNESS FORM THE EEG POWER SPECTRUM 1 Estimating Alertness from the\nAbstract: In tasks requiring sustained attention, human alertness varies on a minute time scale. This can have serious consequences in occupations ranging from air traffic control to monitoring of nuclear power plants. Changes in the electroencephalographic (EEG) power spectrum accompany these fluctuations in the level of alertness, as assessed by measuring simultaneous changes in EEG and performance on an auditory monitoring task. By combining power spectrum estimation, principal component analysis and artificial neural networks, we show that continuous, accurate, noninvasive, and near real-time estimation of an operator's global level of alertness is feasible using EEG measures recorded from as few as two central scalp sites. This demonstration could lead to a practical system for noninvasive monitoring of the cognitive state of human operators in attention-critical settings. ",
"neighbors": [
293
],
"mask": "Test"
},
{
"node_id": 388,
"label": 2,
"text": "Title: Spatial-Temporal Analysis of Temperature Using Smoothing Spline ANOVA \nAbstract: In tasks requiring sustained attention, human alertness varies on a minute time scale. This can have serious consequences in occupations ranging from air traffic control to monitoring of nuclear power plants. Changes in the electroencephalographic (EEG) power spectrum accompany these fluctuations in the level of alertness, as assessed by measuring simultaneous changes in EEG and performance on an auditory monitoring task. By combining power spectrum estimation, principal component analysis and artificial neural networks, we show that continuous, accurate, noninvasive, and near real-time estimation of an operator's global level of alertness is feasible using EEG measures recorded from as few as two central scalp sites. This demonstration could lead to a practical system for noninvasive monitoring of the cognitive state of human operators in attention-critical settings. ",
"neighbors": [
439,
2590
],
"mask": "Test"
},
{
"node_id": 389,
"label": 3,
"text": "Title: Robustness Analysis of Bayesian Networks with Finitely Generated Convex Sets of Distributions \nAbstract: This paper presents exact solutions and convergent approximations for inferences in Bayesian networks associated with finitely generated convex sets of distributions. Robust Bayesian inference is the calculation of bounds on posterior values given perturbations in a probabilistic model. The paper presents exact inference algorithms and analyzes the circumstances where exact inference becomes intractable. Two classes of algorithms for numeric approximations are developed through transformations on the original model. The first transformation reduces the robust inference problem to the estimation of probabilistic parameters in a Bayesian network. The second transformation uses Lavine's bracketing algorithm to generate a sequence of maximization problems in a Bayesian network. The analysis is extended to the *-contaminated, the lower density bounded, the belief function, the sub-sigma, the density bounded, the total variation and the density ratio classes of distributions. c fl1996 Carnegie Mellon University",
"neighbors": [
185,
324,
332,
577,
1937
],
"mask": "Validation"
},
{
"node_id": 390,
"label": 1,
"text": "Title: Chaos, Fractals, and Genetic Algorithms \nAbstract: This paper presents exact solutions and convergent approximations for inferences in Bayesian networks associated with finitely generated convex sets of distributions. Robust Bayesian inference is the calculation of bounds on posterior values given perturbations in a probabilistic model. The paper presents exact inference algorithms and analyzes the circumstances where exact inference becomes intractable. Two classes of algorithms for numeric approximations are developed through transformations on the original model. The first transformation reduces the robust inference problem to the estimation of probabilistic parameters in a Bayesian network. The second transformation uses Lavine's bracketing algorithm to generate a sequence of maximization problems in a Bayesian network. The analysis is extended to the *-contaminated, the lower density bounded, the belief function, the sub-sigma, the density bounded, the total variation and the density ratio classes of distributions. c fl1996 Carnegie Mellon University",
"neighbors": [
145,
163
],
"mask": "Train"
},
{
"node_id": 391,
"label": 2,
"text": "Title: Geometry in Learning \nAbstract: One of the fundamental problems in learning is identifying members of two different classes. For example, to diagnose cancer, one must learn to discriminate between benign and malignant tumors. Through examination of tumors with previously determined diagnosis, one learns some function for distinguishing the benign and malignant tumors. Then the acquired knowledge is used to diagnose new tumors. The perceptron is a simple biologically inspired model for this two-class learning problem. The perceptron is trained or constructed using examples from the two classes. Then the perceptron is used to classify new examples. We describe geometrically what a perceptron is capable of learning. Using duality, we develop a framework for investigating different methods of training a perceptron. Depending on how we define the \"best\" perceptron, different minimization problems are developed for training the perceptron. The effectiveness of these methods is evaluated empirically on four practical applications: breast cancer diagnosis, detection of heart disease, political voting habits, and sonar recognition. This paper does not assume prior knowledge of machine learning or pattern recognition.",
"neighbors": [
142,
230,
427,
438,
1283
],
"mask": "Train"
},
{
"node_id": 392,
"label": 3,
"text": "Title: DRAFT Cluster-Weighted Modeling for Time Series Prediction and Characterization \nAbstract: ",
"neighbors": [
76,
154
],
"mask": "Train"
},
{
"node_id": 393,
"label": 2,
"text": "Title: Density Networks and their Application to Protein Modelling \nAbstract: I define a latent variable model in the form of a neural network for which only target outputs are specified; the inputs are unspecified. Although the inputs are missing, it is still possible to train this model by placing a simple probability distribution on the unknown inputs and maximizing the probability of the data given the parameters. The model can then discover for itself a description of the data in terms of an underlying latent variable space of lower dimensionality. I present preliminary results of the application of these models to protein data. ",
"neighbors": [
14,
157
],
"mask": "Train"
},
{
"node_id": 394,
"label": 2,
"text": "Title: Prior Information and Generalized Questions \nAbstract: This report describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. This research is sponsored by a grant from National Science Foundation under contract ASC-9217041 and a grant from ONR/ARPA under contract N00014-92-J-1879. The author was supported by a Postdoctoral Fellowship (Le 1014/1-1) from the Deutsche Forschungsgemeinschaft and a NSF/CISE Postdoctoral Fellowship. ",
"neighbors": [
125,
133,
608
],
"mask": "Train"
},
{
"node_id": 395,
"label": 1,
"text": "Title: Evolving Graphs and Networks with Edge Encoding: Preliminary Report \nAbstract: We present an alternative to the cellular encoding technique [Gruau 1992] for evolving graph and network structures via genetic programming. The new technique, called edge encoding, uses edge operators rather than the node operators of cellular encoding. While both cellular encoding and edge encoding can produce all possible graphs, the two encodings bias the genetic search process in different ways; each may therefore be most useful for a different set of problems. The problems for which these techniques may be used, and for which we think edge encoding may be particularly useful, include the evolution of recurrent neural networks, finite automata, and graph-based queries to symbolic knowledge bases. In this preliminary report we present a technical description of edge encoding and an initial comparison to cellular encoding. Experimental investigation of the relative merits of these encoding schemes is currently in progress.",
"neighbors": [
163,
189,
191
],
"mask": "Validation"
},
{
"node_id": 396,
"label": 5,
"text": "Title: Geometric Comparison of Classifications and Rule Sets* \nAbstract: We present a technique for evaluating classifications by geometric comparison of rule sets. Rules are represented as objects in an n-dimensional hyperspace. The similarity of classes is computed from the overlap of the geometric class descriptions. The system produces a correlation matrix that indicates the degree of similarity between each pair of classes. The technique can be applied to classifications generated by different algorithms, with different numbers of classes and different attribute sets. Experimental results from a case study in a medical domain are included. ",
"neighbors": [
378,
478
],
"mask": "Train"
},
{
"node_id": 397,
"label": 2,
"text": "Title: Truth-from-Trash Learning and the Mobot \nAbstract: As natural resources become less abundant, we naturally become more interested in, and more adept at utilisation of waste materials. In doing this we are bringing to bear a ploy which is of key importance in learning | or so I argue in this paper. In the `Truth from Trash' model, learning is viewed as a process which uses environmental feedback to assemble fortuitous sensory predispositions (sensory `trash') into useful, information vehicles, i.e., `truthful' indicators of salient phenomena. The main aim will be to show how a computer implementation of the model has been used to enhance (through learning) the strategic abilities of a simulated, football playing mobot.",
"neighbors": [
659,
747,
2346
],
"mask": "Validation"
},
{
"node_id": 398,
"label": 3,
"text": "Title: Axioms of Causal Relevance \nAbstract: This paper develops axioms and formal semantics for statements of the form \"X is causally irrelevant to Y in context Z,\" which we interpret to mean \"Changing X will not affect Y if we hold Z constant.\" The axiomization of causal irrelevance is contrasted with the axiomization of informational irrelevance, as in \"Learning X will not alter our belief in Y , once we know Z.\" Two versions of causal irrelevance are analyzed, probabilistic and deterministic. We show that, unless stability is assumed, the probabilistic definition yields a very loose structure, that is governed by just two trivial axioms. Under the stability assumption, probabilistic causal irrelevance is isomorphic to path interception in cyclic graphs. Under the deterministic definition, causal irrelevance complies with all of the axioms of path interception in cyclic graphs, with the exception of transitivity. We compare our formalism to that of [Lewis, 1973], and offer a graphical method of proving theorems about causal relevance.",
"neighbors": [
248,
419,
776
],
"mask": "Train"
},
{
"node_id": 399,
"label": 2,
"text": "Title: Representing and Learning Visual Schemas in Neural Networks for Scene Analysis \nAbstract: Using scene analysis as the task, this research focuses on three fundamental problems in neural network systems: (1) limited processing resources, (2) representing schemas, and (3) learning schemas. The first problem arises because no practical neural network can process all the visual input simultaneously and efficiently. The solution is to process a small amount of the input in parallel, and successively focus on the other parts of the input. This strategy requires that the system maintains structured knowledge for describing and interpreting the gathered information. The system should also learn to represent structured knowledge from examples of objects and scenes. VISOR, the system described in this paper, consists of three main components. The Low-Level Visual Module (simulated using procedural programs) extracts featural and positional information from the visual input. The Schema Module encodes structured knowledge about possible objects, and provides top-down information for the Low-Level Visual Module to focus attention at different parts of the scene. The Response Module learns to associate the schema activation patterns with external responses. It enables the external environment to provide reinforcement feedback for the learning of schematic structures. ",
"neighbors": [
15,
427,
1250,
1251
],
"mask": "Train"
},
{
"node_id": 400,
"label": 6,
"text": "Title: Learning Algorithms with Applications to Robot Navigation and Protein Folding \nAbstract: Using scene analysis as the task, this research focuses on three fundamental problems in neural network systems: (1) limited processing resources, (2) representing schemas, and (3) learning schemas. The first problem arises because no practical neural network can process all the visual input simultaneously and efficiently. The solution is to process a small amount of the input in parallel, and successively focus on the other parts of the input. This strategy requires that the system maintains structured knowledge for describing and interpreting the gathered information. The system should also learn to represent structured knowledge from examples of objects and scenes. VISOR, the system described in this paper, consists of three main components. The Low-Level Visual Module (simulated using procedural programs) extracts featural and positional information from the visual input. The Schema Module encodes structured knowledge about possible objects, and provides top-down information for the Low-Level Visual Module to focus attention at different parts of the scene. The Response Module learns to associate the schema activation patterns with external responses. It enables the external environment to provide reinforcement feedback for the learning of schematic structures. ",
"neighbors": [
14,
258,
555,
2354,
2360
],
"mask": "Test"
},
{
"node_id": 401,
"label": 3,
"text": "Title: Learning Limited Dependence Bayesian Classifiers \nAbstract: We present a framework for characterizing Bayesian classification methods. This framework can be thought of as a spectrum of allowable dependence in a given probabilistic model with the Naive Bayes algorithm at the most restrictive end and the learning of full Bayesian networks at the most general extreme. While much work has been carried out along the two ends of this spectrum, there has been surprising little done along the middle. We analyze the assumptions made as one moves along this spectrum and show the tradeoffs between model accuracy and learning speed which become critical to consider in a variety of data mining domains. We then present a general induction algorithm that allows for traversal of this spectrum depending on the available computational power for carrying out induction and show its application in a number of domains with different properties. ",
"neighbors": [
442,
577,
632,
2462
],
"mask": "Validation"
},
{
"node_id": 402,
"label": 1,
"text": "Title: The Evolutionary Cost of Learning \nAbstract: Traits that are acquired by members of an evolving population during their lifetime, through adaptive processes such as learning, can become genetically specified in later generations. Thus there is a change in the level of learning in the population over evolutionary time. This paper explores the idea that as well as the benefits to be gained from learning, there may also be costs to be paid for the ability to learn. It is these costs that supply the selection pressure for the genetic assimilation of acquired traits. Two models are presented that attempt to illustrate this assertion. The first uses Kauffman's NK fitness landscapes to show the effect that both explicit and implicit costs have on the assimilation of learnt traits. A characteristic `hump' is observed in the graph of the level of plasticity in the population showing that learning is first selected for and then against as evolution progresses. The second model is a practical example in which neural network controllers are evolved for a small mobile robot. Results from this experiment also show the hump. ",
"neighbors": [
163,
219,
403,
538,
681
],
"mask": "Train"
},
{
"node_id": 403,
"label": 1,
"text": "Title: Landscapes, Learning Costs and Genetic Assimilation. \nAbstract: The evolution of a population can be guided by phenotypic traits acquired by members of that population during their lifetime. This phenomenon, known as the Baldwin Effect, can speed the evolutionary process as traits that are initially acquired become genetically specified in later generations. This paper presents conditions under which this genetic assimilation can take place. As well as the benefits that lifetime adaptation can give a population, there may be a cost to be paid for that adaptive ability. It is the evolutionary trade-off between these costs and benefits that provides the selection pressure for acquired traits to become genetically specified. It is also noted that genotypic space, in which evolution operates, and phenotypic space, on which adaptive processes (such as learning) operate, are, in general, of a different nature. To guarantee an acquired characteristic can become genetically specified, then these spaces must have the property of neighbourhood correlation which means that a small distance between two individuals in phenotypic space implies that there is a small distance between the same two individuals in genotypic space.",
"neighbors": [
402,
2104,
2302,
2309
],
"mask": "Validation"
},
{
"node_id": 404,
"label": 2,
"text": "Title: EE380L:Neural Networks for Pattern Recognition POp Trees under the guidance of \nAbstract: Decision Trees have been widely used for classification/regression tasks. They are relatively much faster to build as compared to Neural Networks and are understandable by humans. In normal decision trees, based on the input vector, only one branch is followed. In Probabilistic OPtion trees, based on the input vector we follow all of the subtrees with some probability. These probabilities are learned by the system. Probabilistic decisions are likely to be useful, when the boundary of classes submerge in each other, or when there is noise in the input data. In addition they provide us with a confidence measure. We allow option nodes in our trees, Again, instead of uniform voting, we learn the weightage of every subtree.",
"neighbors": [
102
],
"mask": "Test"
},
{
"node_id": 405,
"label": 2,
"text": "Title: Finite State Machines and Recurrent Neural Networks Automata and Dynamical Systems Approaches \nAbstract: Decision Trees have been widely used for classification/regression tasks. They are relatively much faster to build as compared to Neural Networks and are understandable by humans. In normal decision trees, based on the input vector, only one branch is followed. In Probabilistic OPtion trees, based on the input vector we follow all of the subtrees with some probability. These probabilities are learned by the system. Probabilistic decisions are likely to be useful, when the boundary of classes submerge in each other, or when there is noise in the input data. In addition they provide us with a confidence measure. We allow option nodes in our trees, Again, instead of uniform voting, we learn the weightage of every subtree.",
"neighbors": [
753,
1285,
1592
],
"mask": "Train"
},
{
"node_id": 406,
"label": 2,
"text": "Title: Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization \nAbstract: The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method . Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converging, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay. ",
"neighbors": [
230,
311,
427,
2307
],
"mask": "Train"
},
{
"node_id": 407,
"label": 2,
"text": "Title: Constructing Deterministic Finite-State Automata in Recurrent Neural Networks \nAbstract: Recurrent neural networks that are trained to behave like deterministic finite-state automata (DFAs) can show deteriorating performance when tested on long strings. This deteriorating performance can be attributed to the instability of the internal representation of the learned DFA states. The use of a sigmoidal discriminant function together with the recurrent structure contribute to this instability. We prove that a simple algorithm can construct second-order recurrent neural networks with a sparse interconnection topology and sigmoidal discriminant function such that the internal DFA state representations are stable, i.e. the constructed network correctly classifies strings of arbitrary length. The algorithm is based on encoding strengths of weights directly into the neural network. We derive a relationship between the weight strength and the number of DFA states for robust string classification. For a DFA with n states and m input alphabet symbols, the constructive algorithm generates a \"programmed\" neural network with O(n) neurons and O(mn) weights. We compare our algorithm to other methods proposed in the literature. ",
"neighbors": [
512,
1298,
1763,
1875,
2439
],
"mask": "Test"
},
{
"node_id": 408,
"label": 2,
"text": "Title: Constructing Deterministic Finite-State Automata in Recurrent Neural Networks \nAbstract: Report SYCON-93-09 Recent Results on Lyapunov-theoretic Techniques for Nonlinear Stability ABSTRACT This paper presents a Converse Lyapunov Function Theorem motivated by robust control analysis and design. Our result is based upon, but generalizes, various aspects of well-known classical theorems. In a unified and natural manner, it (1) includes arbitrary bounded disturbances acting on the system, (2) deals with global asymptotic stability, (3) results in smooth (infinitely differentiable) Lyapunov functions, and (4) applies to stability with respect to not necessarily compact invariant sets. As a corollary of the obtained Converse Theorem, we show that the well-known Lyapunov sufficient condition for \"input-to-state stability\" is also necessary, settling positively an open question raised by several authors during the past few years. ",
"neighbors": [
630
],
"mask": "Train"
},
{
"node_id": 409,
"label": 2,
"text": "Title: Extraction of Rules from Discrete-Time Recurrent Neural Networks \nAbstract: The extraction of symbolic knowledge from trained neural networks and the direct encoding of (partial) knowledge into networks prior to training are important issues. They allow the exchange of information between symbolic and connectionist knowledge representations. The focus of this paper is on the quality of the rules that are extracted from recurrent neural networks. Discrete-time recurrent neural networks can be trained to correctly classify strings of a regular language. Rules defining the learned grammar can be extracted from networks in the form of deterministic finite-state automata (DFA's) by applying clustering algorithms in the output space of recurrent state neurons. Our algorithm can extract different finite-state automata that are consistent with a training set from the same network. We compare the generalization performances of these different models and the trained network and we introduce a heuristic that permits us to choose among the consistent DFA's the model which best approximates the learned regular grammar. Keywords: Recurrent Neural Networks, Grammatical Inference, Regular Languages, Deterministic Finite-State Automata, Rule Extraction, Generalization Performance, Model Selection, Occam's Razor. fl Technical Report CS-TR-3465 and UMIACS-TR-95-54, University of Maryland, College Park, MD 20742. Ac cepted for publication in Neural Networks. ",
"neighbors": [
753,
1298,
1606,
2582
],
"mask": "Validation"
},
{
"node_id": 410,
"label": 4,
"text": "Title: High-Performance Job-Shop Scheduling With A Time-Delay TD() Network \nAbstract: Job-shop scheduling is an important task for manufacturing industries. We are interested in the particular task of scheduling payload processing for NASA's space shuttle program. This paper summarizes our previous work on formulating this task for solution by the reinforcement learning algorithm T D(). A shortcoming of this previous work was its reliance on hand-engineered input features. This paper shows how to extend the time-delay neural network (TDNN) architecture to apply it to irregular-length schedules. Experimental tests show that this TDNN-T D() network can match the performance of our previous hand-engineered system. The tests also show that both neural network approaches significantly outperform the best previous (non-learning) solution to this problem in terms of the quality of the resulting schedules and the number of search steps required to construct them.",
"neighbors": [
2,
82,
298,
305,
565
],
"mask": "Validation"
},
{
"node_id": 411,
"label": 2,
"text": "Title: POWER OF NEURAL NETS \nAbstract: Report SYCON-91-11 ABSTRACT This paper deals with the simulation of Turing machines by neural networks. Such networks are made up of interconnections of synchronously evolving processors, each of which updates its state according to a \"sigmoidal\" linear combination of the previous states of all units. The main result states that one may simulate all Turing machines by nets, in linear time. In particular, it is possible to give a net made up of about 1,000 processors which computes a universal partial-recursive function. (This is an update of Report SYCON-91-08; new results include the simulation in linear time of binary-tape machines, as opposed to the unary alphabets used in the previous version.) ",
"neighbors": [
512,
536,
1470,
1600,
1891,
2232,
2582,
2594
],
"mask": "Train"
},
{
"node_id": 412,
"label": 4,
"text": "Title: The Influence of Domain Properties on the Performance of Real-Time Search Algorithms \nAbstract: This research is sponsored by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsoring organizations. ",
"neighbors": [
552,
688
],
"mask": "Train"
},
{
"node_id": 413,
"label": 2,
"text": "Title: Convergence Rates of Approximation by Translates \nAbstract: In this paper we consider the problem of approximating a function belonging to some function space by a linear combination of n translates of a given function G. Using a lemma by Jones (1990) and Barron (1991) we show that it is possible to define function spaces and functions G for which the rate of convergence to zero of the error is O( 1 p n ) in any number of dimensions. The apparent avoidance of the \"curse of dimensionality\" is due to the fact that these function spaces are more and more constrained as the dimension increases. Examples include spaces of the Sobolev type, in which the number of weak derivatives is required to be larger than the number of dimensions. We give results both for approximation in the L 2 norm and in the L 1 norm. The interesting feature of these results is that, thanks to the constructive nature of Jones' and Barron's lemma, an iterative procedure is defined that can achieve this rate. This paper describes research done within the Center for Biological Information Processing, in the Department of Brain and Cognitive Sciences, at the Artificial Intelligence Laboratory and at the Department of Mathematics, University of Trento, Italy. Gabriele Anzellotti is with the Department of Mathematics, University of Trento, Italy. This research is sponsored by a grant from the Office of Naval Research (ONR), Cognitive and Neural Sciences Division; by the Artificial Intelligence Center of Hughes Aircraft Corporation (S1-801534-2). Support for the A. I. Laboratory's artificial intelligence research is provided by the Advanced Research Projects Agency of the Department of Defense under Army contract DACA76-85-C-0010, and in part by ONR contract N00014-85-K-0124. c fl Massachusetts Institute of Technology, 1992",
"neighbors": [
611
],
"mask": "Train"
},
{
"node_id": 414,
"label": 0,
"text": "Title: Acquiring Recursive and Iterative Concepts with Explanation-Based Learning explanation-based generalization, generalizing explanation structures, generalizing to\nAbstract: University of Wisconsin Computer Sciences Technical Report 876 (September 1989) Abstract In explanation-based learning, a specific problem's solution is generalized into a form that can be later used to solve conceptually similar problems. Most research in explanation-based learning involves relaxing constraints on the variables in the explanation of a specific example, rather than generalizing the graphical structure of the explanation itself. However, this precludes the acquisition of concepts where an iterative or recursive process is implicitly represented in the explanation by a fixed number of applications. This paper presents an algorithm that generalizes explanation structures and reports empirical results that demonstrate the value of acquiring recursive and iterative concepts. The BAGGER2 algorithm learns recursive and iterative concepts, integrates results from multiple examples, and extracts useful subconcepts during generalization. On problems where learning a recursive rule is not appropriate, the system produces the same result as standard explanation-based methods. Applying the learned recursive rules only requires a minor extension to a PROLOG-like problem solver, namely, the ability to explicitly call a specific rule. Empirical studies demonstrate that generalizing the structure of explanations helps avoid the recently reported negative effects of learning. ",
"neighbors": [
440,
908,
1174,
1442,
1445,
1877
],
"mask": "Train"
},
{
"node_id": 415,
"label": 1,
"text": "Title: Competitive Environments Evolve Better Solutions for Complex Tasks \nAbstract: University of Wisconsin Computer Sciences Technical Report 876 (September 1989) Abstract In explanation-based learning, a specific problem's solution is generalized into a form that can be later used to solve conceptually similar problems. Most research in explanation-based learning involves relaxing constraints on the variables in the explanation of a specific example, rather than generalizing the graphical structure of the explanation itself. However, this precludes the acquisition of concepts where an iterative or recursive process is implicitly represented in the explanation by a fixed number of applications. This paper presents an algorithm that generalizes explanation structures and reports empirical results that demonstrate the value of acquiring recursive and iterative concepts. The BAGGER2 algorithm learns recursive and iterative concepts, integrates results from multiple examples, and extracts useful subconcepts during generalization. On problems where learning a recursive rule is not appropriate, the system produces the same result as standard explanation-based methods. Applying the learned recursive rules only requires a minor extension to a PROLOG-like problem solver, namely, the ability to explicitly call a specific rule. Empirical studies demonstrate that generalizing the structure of explanations helps avoid the recently reported negative effects of learning. ",
"neighbors": [
163,
188,
209,
523,
712,
789,
995,
1737,
1790,
1832,
1836,
2103,
2353,
2664
],
"mask": "Train"
},
{
"node_id": 416,
"label": 3,
"text": "Title: A note on convergence rates of Gibbs sampling for nonparametric mixtures \nAbstract: We consider a mixture model where the mixing distribution is random and is given a Dirichlet process prior. We describe the general structure of two Gibbs sampling algorithms that are useful for approximating Bayesian inferences in this problem. When the kernel f(x j ) of the mixture is bounded, we show that the Markov chains resulting from the Gibbs sampling are uniformly ergodic, and we provide an explicit rate bound. Unfortunately, the bound is not sharp in general; improving sensibly the bound seems however quite difficult.",
"neighbors": [
137,
138,
1713,
1913,
2510
],
"mask": "Train"
},
{
"node_id": 417,
"label": 5,
"text": "Title: Constructing Intermediate Concepts by Decomposition of Real Functions \nAbstract: In learning from examples it is often useful to expand an attribute-vector representation by intermediate concepts. The usual advantage of such structuring of the learning problem is that it makes the learning easier and improves the comprehensibility of induced descriptions. In this paper, we develop a technique for discovering useful intermediate concepts when both the class and the attributes are real-valued. The technique is based on a decomposition method originally developed for the design of switching circuits and recently extended to handle incompletely specified multi-valued functions. It was also applied to machine learning tasks. In this paper, we introduce modifications, needed to decompose real functions and to present them in symbolic form. The method is evaluated on a number of test functions. The results show that the method correctly decomposes fairly complex functions. The decomposition hierarchy does not depend on a given repertoir of basic functions (background knowledge). ",
"neighbors": [
317,
508
],
"mask": "Validation"
},
{
"node_id": 418,
"label": 6,
"text": "Title: Heterogeneous Uncertainty Sampling for Supervised Learning \nAbstract: Uncertainty sampling methods iteratively request class labels for training instances whose classes are uncertain despite the previous labeled instances. These methods can greatly reduce the number of instances that an expert need label. One problem with this approach is that the classifier best suited for an application may be too expensive to train or use during the selection of instances. We test the use of one classifier (a highly efficient probabilistic one) to select examples for training another (the C4.5 rule induction program). Despite being chosen by this heterogeneous approach, the uncertainty samples yielded classifiers with lower error rates than random samples ten times larger.",
"neighbors": [
135,
479,
740,
1198,
1269,
1312
],
"mask": "Train"
},
{
"node_id": 419,
"label": 3,
"text": "Title: On the Testability of Causal Models with Latent and Instrumental Variables \nAbstract: Certain causal models involving unmeasured variables induce no independence constraints among the observed variables but imply, nevertheless, inequality constraints on the observed distribution. This paper derives a general formula for such inequality constraints as induced by instrumental variables, that is, exogenous variables that directly affect some variables but not all. With the help of this formula, it is possible to test whether a model involving instrumental variables may account for the data, or, conversely, whether a given vari able can be deemed instrumental.",
"neighbors": [
248,
260,
398,
1326,
1527,
1747
],
"mask": "Test"
},
{
"node_id": 420,
"label": 2,
"text": "Title: Sample Size Calculations for Smoothing Splines Based on Bayesian Confidence Intervals \nAbstract: Bayesian confidence intervals of a smoothing spline are often used to distinguish two curves. In this paper, we provide an asymptotic formula for sample size calculations based on Bayesian confidence intervals. Approximations and simulations on special functions indicate that this asymptotic formula is reasonably accurate. Key Words: Bayesian confidence intervals; sample size; smoothing spline. fl Address: Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110. Tel.: (805)893-4870. Fax: (805)893-2334. E-mail: yuedong@pstat.ucsb.edu. Supported by the National Institute of Health under Grants R01 EY09946, P60 DK20572 and P30 HD18258. ",
"neighbors": [
10,
192,
193,
439,
519,
2214,
2590
],
"mask": "Train"
},
{
"node_id": 421,
"label": 6,
"text": "Title: Improved Boosting Algorithms Using Confidence-rated Predictions \nAbstract: We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a simplified analysis of AdaBoost in this setting, and we show how this analysis can be used to find improved parameter settings as well as a refined criterion for training weak hypotheses. We give a specific method for assigning confidences to the predictions of decision trees, a method closely related to one used by Quinlan. This method also suggests a technique for growing decision trees which turns out to be identical to one proposed by Kearns and Mansour. We focus next on how to apply the new boosting algorithms to multiclass classification problems, particularly to the multi-label case in which each example may belong to more than one class. We give two boosting methods for this problem. One of these leads to a new method for handling the single-label case which is simpler but as effective as techniques suggested by Freund and Schapire. Finally, we give some experimental results comparing a few of the algorithms discussed in this paper. ",
"neighbors": [
255
],
"mask": "Validation"
},
{
"node_id": 422,
"label": 1,
"text": "Title: Genetic Self-Learning \nAbstract: Evolutionary Algorithms are direct random search algorithms which imitate the principles of natural evolution as a method to solve adaptation (learning) tasks in general. As such they have several features in common which can be observed on the genetic and phenotypic level of living species. In this paper the algorithms' capability of adaptation or learning in a wider sense is demonstrated, and it is focused on Genetic Algorithms to illustrate the learning process on the population level (first level learning), and on Evolution Strategies to demonstrate the learning process on the meta-level of strategy parameters (second level learning).",
"neighbors": [
163,
1069,
1685,
1691
],
"mask": "Train"
},
{
"node_id": 423,
"label": 3,
"text": "Title: LEARNING BAYESIAN NETWORKS WITH LOCAL STRUCTURE \nAbstract: We examine a novel addition to the known methods for learning Bayesian networks from data that improves the quality of the learned networks. Our approach explicitly represents and learns the local structure in the conditional probability distributions (CPDs) that quantify these networks. This increases the space of possible models, enabling the representation of CPDs with a variable number of parameters. The resulting learning procedure induces models that better emulate the interactions present in the data. We describe the theoretical foundations and practical aspects of learning local structures and provide an empirical evaluation of the proposed learning procedure. This evaluation indicates that learning curves characterizing this procedure converge faster, in the number of training instances, than those of the standard procedure, which ignores the local structure of the CPDs. Our results also show that networks learned with local structures tend to be more complex (in terms of arcs), yet require fewer parameters. ",
"neighbors": [
62,
557,
558,
1290,
1934,
2425
],
"mask": "Test"
},
{
"node_id": 424,
"label": 6,
"text": "Title: Validation of Voting Committees \nAbstract: This paper contains a method to bound the test errors of voting committees with members chosen from a pool of trained classifiers. There are so many prospective committees that validating them directly does not achieve useful error bounds. Because there are fewer classifiers than prospective committees, it is better to validate the classifiers individually, then use linear programming to infer committee error bounds. We test the method using credit card data. Also, we extend the method to infer bounds for classifiers in general. ",
"neighbors": [
571
],
"mask": "Test"
},
{
"node_id": 425,
"label": 4,
"text": "Title: Reinforcement Learning, Neural Networks and PI Control Applied to a Heating Coil \nAbstract: An accurate simulation of a heating coil is used to compare the performance of a PI controller, a neural network trained to predict the steady-state output of the PI controller, a neural network trained to minimize the n-step ahead error between the coil output and the set point, and a reinforcement learning agent trained to minimize the sum of the squared error over time. Although the PI controller works very well for this task, the neural networks do result in improved performance. ",
"neighbors": [
85,
565
],
"mask": "Train"
},
{
"node_id": 426,
"label": 5,
"text": "Title: Rule Induction with CN2: Some Recent Improvements \nAbstract: The CN2 algorithm induces an ordered list of classification rules from examples using entropy as its search heuristic. In this short paper, we describe two improvements to this algorithm. Firstly, we present the use of the Laplacian error estimate as an alternative evaluation function and secondly, we show how unordered as well as ordered rules can be generated. We experimentally demonstrate significantly improved performances resulting from these changes, thus enhancing the usefulness of CN2 as an inductive tool. Comparisons with Quinlan's C4.5 are also made. ",
"neighbors": [
29,
303,
318,
335,
836,
937,
1061,
1187,
1275,
1486,
1528,
1576,
2126,
2369,
2431
],
"mask": "Validation"
},
{
"node_id": 427,
"label": 2,
"text": "Title: Book Review Introduction to the Theory of Neural Computation Reviewed by: 2 \nAbstract: Neural computation, also called connectionism, parallel distributed processing, neural network modeling or brain-style computation, has grown rapidly in the last decade. Despite this explosion, and ultimately because of impressive applications, there has been a dire need for a concise introduction from a theoretical perspective, analyzing the strengths and weaknesses of connectionist approaches and establishing links to other disciplines, such as statistics or control theory. The Introduction to the Theory of Neural Computation by Hertz, Krogh and Palmer (subsequently referred to as HKP) is written from the perspective of physics, the home discipline of the authors. The book fulfills its mission as an introduction for neural network novices, provided that they have some background in calculus, linear algebra, and statistics. It covers a number of models that are often viewed as disjoint. Critical analyses and fruitful comparisons between these models ",
"neighbors": [
18,
146,
201,
202,
205,
230,
250,
304,
312,
350,
353,
391,
399,
406,
461,
477,
493,
494,
526,
561,
584,
587,
610,
658,
665,
674,
696,
698,
823,
916,
946,
954,
1037,
1103,
1274,
1283,
1284,
1318,
1340,
1399,
1405,
1488,
1547,
1577,
1760,
1766,
1811,
1914,
1975,
1999,
2024,
2044,
2255,
2258,
2295,
2306,
2383,
2388,
2412,
2437,
2448,
2453,
2574,
2658,
2683
],
"mask": "Train"
},
{
"node_id": 428,
"label": 5,
"text": "Title: Selective Eager Execution on the PolyPath Architecture \nAbstract: Control-flow misprediction penalties are a major impediment to high performance in wide-issue superscalar processors. In this paper we present Selective Eager Execution (SEE), an execution model to overcome mis-speculation penalties by executing both paths after diffident branches. We present the micro-architecture of the PolyPath processor, which is an extension of an aggressive superscalar, out-of-order architecture. The PolyPath architecture uses a novel instruction tagging and register renaming mechanism to execute instructions from multiple paths simultaneously in the same processor pipeline, while retaining maximum resource availability for single-path code sequences. Performance results of our detailed execution-driven, pipeline-level simulations show that the SEE concept achieves a potential average performance improvement of 48% on the SPECint95 benchmarks. A realistic implementation with a dynamic branch confidence estimator can improve performance by as much as 36% for the go benchmark, and an average of 14% on SPECint95, when compared to a normal superscalar, out-of-order, speculative execution, monopath processor. Moreover, our architectural model is both elegant and practical to implement, using a small amount of additional state and control logic. ",
"neighbors": [
184,
302,
432,
433,
598
],
"mask": "Test"
},
{
"node_id": 429,
"label": 3,
"text": "Title: Classifiers: A Theoretical and Empirical Study \nAbstract: This paper describes how a competitive tree learning algorithm can be derived from first principles. The algorithm approximates the Bayesian decision theoretic solution to the learning task. Comparative experiments with the algorithm and the several mature AI and statistical families of tree learning algorithms currently in use show the derived Bayesian algorithm is consistently as good or better, although sometimes at computational cost. Using the same strategy, we can design algorithms for many other supervised and model learning tasks given just a probabilistic representation for the kind of knowledge to be learned. As an illustration, a second learning algorithm is derived for learning Bayesian networks from data. Implications to incremental learning and the use of multiple models are also discussed.",
"neighbors": [
29,
1290,
1514
],
"mask": "Test"
},
{
"node_id": 430,
"label": 6,
"text": "Title: Irrelevant Features and the Subset Selection Problem \nAbstract: We address the problem of finding a subset of features that allows a supervised induction algorithm to induce small high-accuracy concepts. We examine notions of relevance and irrelevance, and show that the definitions used in the machine learning literature do not adequately partition the features into useful categories of relevance. We present definitions for irrelevance and for two degrees of relevance. These definitions improve our understanding of the behavior of previous subset selection algorithms, and help define the subset of features that should be sought. The features selected should depend not only on the features and the target concept, but also on the induction algorithm. We describe a method for feature subset selection using cross-validation that is applicable to any induction algorithm, and discuss experiments conducted with ID3 and C4.5 on artificial and real datasets.",
"neighbors": [
52,
89,
119,
172,
177,
208,
223,
236,
256,
381,
524,
547,
632,
634,
635,
651,
683,
686,
1020,
1165,
1207,
1270,
1284,
1568,
1569,
1617,
1637,
1698,
1792,
2033,
2137,
2197,
2343,
2487,
2557,
2593
],
"mask": "Test"
},
{
"node_id": 431,
"label": 6,
"text": "Title: Rule-based Machine Learning Methods for Functional Prediction \nAbstract: We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance.",
"neighbors": [
156,
1608,
2137
],
"mask": "Train"
},
{
"node_id": 432,
"label": 5,
"text": "Title: Limited Dual Path Execution \nAbstract: This work presents a hybrid branch predictor scheme that uses a limited form of dual path execution along with dynamic branch prediction to improve execution times. The ability to execute down both paths of a conditional branch enables the branch penalty to be minimized; however, relying exclusively on dual path execution is infeasible due because instruction fetch rates far exceed the capability of the pipeline to retire a single branch before others must be processed. By using confidence information, available in the dynamic branch prediction state tables, a limited form of dual path execution becomes feasible. This reduces the burden on the branch predictor by allowing predictions of low confidence to be avoided. In this study we present a new approach to gather branch prediction confidence with little or no overhead, and use this confidence mechanism to determine whether dual path execution or branch prediction should be used. Comparing this hybrid predictor model to the dynamic branch predictor shows a dramatic decrease in misprediction rate, which translates to an reduction in runtime of over 20%. These results imply that dual path execution, which often is thought to be an excessively resource consuming method, may be a worthy approach if restricted with an appropriate predicting set. ",
"neighbors": [
165,
184,
302,
428,
433,
598
],
"mask": "Train"
},
{
"node_id": 433,
"label": 5,
"text": "Title: Threaded Multiple Path Execution \nAbstract: This paper presents Threaded Multi-Path Execution (TME), which exploits existing hardware on a Simultaneous Multi-threading (SMT) processor to speculatively execute multiple paths of execution. When there are fewer threads in an SMT processor than hardware contexts, threaded multi-path execution uses spare contexts to fetch and execute code along the less likely path of hard-to-predict branches. This paper describes the hardware mechanisms needed to enable an SMT processor to efficiently spawn speculative threads for threaded multi-path execution. The Mapping Synchronization Bus is described, which enables the spawning of these multiple paths. Policies are examined for deciding which branches to fork, and for managing competition between primary and alternate path threads for critical resources. Our results show that TME increases the single program performance of an SMT with eight thread contexts by 14%-23% on average, depending on the misprediction penalty, for programs with a high misprediction rate. ",
"neighbors": [
158,
184,
428,
432
],
"mask": "Test"
},
{
"node_id": 434,
"label": 0,
"text": "Title: Computational Learning in Humans and Machines \nAbstract: In this paper we review research on machine learning and its relation to computational models of human learning. We focus initially on concept induction, examining five main approaches to this problem, then consider the more complex issue of learning sequential behaviors. After this, we compare the rhetoric that sometimes appears in the machine learning and psychological literature with the growing evidence that different theoretical paradigms typically produce similar results. In response, we suggest that concrete computational models, which currently dominate the field, may be less useful than simulations that operate at a more abstract level. We illustrate this point with an abstract simulation that explains a challenging phenomenon in the area of category learning, and we conclude with some general observations about such abstract models. ",
"neighbors": [
597,
1339,
2473
],
"mask": "Train"
},
{
"node_id": 435,
"label": 2,
"text": "Title: Homology Detection via Family Pairwise Search a straightforward generalization of pairwise sequence comparison algorithms to\nAbstract: The function of an unknown biological sequence can often be accurately inferred by identifying sequences homologous to the original sequence. Given a query set of known homologs, there exist at least three general classes of techniques for finding additional homologs: pairwise sequence comparisons, motif analysis, and hidden Markov modeling. Pairwise sequence comparisons are typically employed when only a single query sequence is known. Hidden Markov models (HMMs), on the other hand, are usually trained with sets of more than 100 sequences. Motif-based methods fall in between these two extremes. ",
"neighbors": [
0,
8,
14,
258,
751
],
"mask": "Train"
},
{
"node_id": 436,
"label": 6,
"text": "Title: Pattern Theoretic Knowledge Discovery \nAbstract: Future research directions in Knowledge Discovery in Databases (KDD) include the ability to extract an overlying concept relating useful data. Current limitations involve the search complexity to find that concept and what it means to be \"useful.\" The Pattern Theory research crosses over in a natural way to the aforementioned domain. The goal of this paper is threefold. First, we present a new approach to the problem of learning by Discovery and robust pattern finding. Second, we explore the current limitations of a Pattern Theoretic approach as applied to the general KDD problem. Third, we exhibit its performance with experimental results on binary functions, and we compare those results with C4.5. This new approach to learning demonstrates a powerful method for finding patterns in a robust manner. ",
"neighbors": [
635
],
"mask": "Train"
},
{
"node_id": 437,
"label": 2,
"text": "Title: A Gentle Guide to Multiple Alignment Version Please send comments, critique, flames and praise Instructions\nAbstract: Prerequisites. An understanding of the dynamic programming (edit distance) approach to pairwise sequence alignment is useful for parts 1.3, 1.4, and 2. Also, familiarity with the use of Internet resources would be helpful for part 3. For the former, see Chapters 1.1 - 1.3, and for the latter, see Chapter 2 of the Hypertext Book of the GNA-VSNS Biocomputing Course at http://www.techfak.uni-bielefeld.de/bcd/Curric/welcome.html. General Rationale. You will understand why Multiple Alignment is considered a challenging problem, you will study approaches that try to reduce the number of steps needed to calculate the optimal solution, and you will study fast heuristics. In a case study involving immunoglobulin sequences, you will study multiple alignments obtained from WWW servers, recapitulating results from an original paper. Revision History. Version 1.01 on 17 Sep 1995. Expanded Ex.9. Updated Ex.46. Revised Solution Sheet -re- Ex.3+12. Marked more Exercises by \"A\" (to be submitted to the Instructor). Various minor clarifications in content ",
"neighbors": [
14
],
"mask": "Test"
},
{
"node_id": 438,
"label": 6,
"text": "Title: A System for Induction of Oblique Decision Trees \nAbstract: This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees.",
"neighbors": [
21,
79,
96,
227,
296,
391,
550,
607,
616,
692,
710,
720
],
"mask": "Train"
},
{
"node_id": 439,
"label": 2,
"text": "Title: Adaptive tuning of numerical weather prediction models: Randomized GCV in three and four dimensional data assimilation \nAbstract: This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees.",
"neighbors": [
97,
192,
388,
420
],
"mask": "Validation"
},
{
"node_id": 440,
"label": 4,
"text": "Title: Hierarchical Explanation-Based Reinforcement Learning \nAbstract: Explanation-Based Reinforcement Learning (EBRL) was introduced by Dietterich and Flann as a way of combining the ability of Reinforcement Learning (RL) to learn optimal plans with the generalization ability of Explanation-Based Learning (EBL) (Di-etterich & Flann, 1995). We extend this work to domains where the agent must order and achieve a sequence of subgoals in an optimal fashion. Hierarchical EBRL can effectively learn optimal policies in some of these sequential task domains even when the subgoals weakly interact with each other. We also show that when a planner that can achieve the individual subgoals is available, our method converges even faster. ",
"neighbors": [
367,
414,
562
],
"mask": "Train"
},
{
"node_id": 441,
"label": 0,
"text": "Title: BRACE: A Paradigm For the Discretization of Continuously Valued Data \nAbstract: Discretization of continuously valued data is a useful and necessary tool because many learning paradigms assume nominal data. A list of objectives for efficient and effective discretization is presented. A paradigm called BRACE (Boundary Ranking And Classification Evaluation) that attempts to meet the objectives is presented along with an algorithm that follows the paradigm. The paradigm meets many of the objectives, with potential for extension to meet the remainder. Empirical results have been promising. For these reasons BRACE has potential as an effective and efficient method for discretization of continuously valued data. A further advantage of BRACE is that it is general enough to be extended to other types of clustering/unsupervised learning. ",
"neighbors": [
297,
638
],
"mask": "Validation"
},
{
"node_id": 442,
"label": 3,
"text": "Title: Searching for dependencies in Bayesian classifiers j A n V n j If the attributes\nAbstract: Naive Bayesian classifiers which make independence assumptions perform remarkably well on some data sets but poorly on others. We explore ways to improve the Bayesian classifier by searching for dependencies among attributes. We propose and evaluate two algorithms for detecting dependencies among attributes and show that the backward sequential elimination and joining algorithm provides the most improvement over the naive Bayesian classifier. The domains on which the most improvement occurs are those domains on which the naive Bayesian classifier is significantly less accurate than a decision tree learner. This suggests that the attributes used in some common databases are not independent conditioned on the class and that the violations of the independence assumption that affect the accuracy of the classifier The Bayesian classifier (Duda & Hart, 1973) is a probabilistic method for classification. It can be used to determine the probability that an example j belongs to class C i given values of attributes of an example represented as a set of n nominally-valued attribute-value pairs of the form A 1 = V 1 j : ^ P (A k = V k j jC i ) may be estimated from the training data. To determine the most likely class of a test example, the probability of each class is computed with Equation 1. A classifier created in this manner is sometimes called a simple (Langley, 1993) or naive (Kononenko, 1990) Bayesian classifier. One important evaluation metric for machine learning methods is the predictive accuracy on unseen examples. This is measured by randomly selecting a subset of the examples in a database to use as training examples and reserving the remainder to be used as test examples. In the case of the simple Bayesian classifier, the training examples are used to estimate probabilities and Equation 1.1 is then used can be detected from training data.",
"neighbors": [
401,
635,
1908
],
"mask": "Train"
},
{
"node_id": 443,
"label": 2,
"text": "Title: Parameterization studies for the SAM and HMMER methods of hidden Markov model generation \nAbstract: Multiple sequence alignment of distantly related viral proteins remains a challenge to all currently available alignment methods. The hidden Markov model approach offers a new, flexible method for the generation of multiple sequence alignments. The results of studies attempting to infer appropriate parameter constraints for the generation of de novo HMMs for globin, kinase, aspartic acid protease, and ribonuclease H sequences by both the SAM and HMMER methods are described. ",
"neighbors": [
14
],
"mask": "Train"
},
{
"node_id": 444,
"label": 2,
"text": "Title: Fools Gold: Extracting Finite State Machines From Recurrent Network Dynamics \nAbstract: Several recurrent networks have been proposed as representations for the task of formal language learning. After training a recurrent network, the next step is to understand the information processing carried out by the network. Some researchers (Giles et al., 1992; Watrous & Kuhn, 1992; Cleeremans et al., 1989) have resorted to extracting finite state machines from the internal state trajectories of their recurrent networks. This paper describes two conditions, sensitivity to initial conditions and frivolous computational explanations due to discrete measurements (Kolen & Pollack, 1993), which allow these extraction methods to return illusionary finite state descriptions.",
"neighbors": [
144,
753
],
"mask": "Train"
},
{
"node_id": 445,
"label": 0,
"text": "Title: Bias and the Probability of Generalization \nAbstract: In order to be useful, a learning algorithm must be able to generalize well when faced with inputs not previously presented to the system. A bias is necessary for any generalization, and as shown by several researchers in recent years, no bias can lead to strictly better generalization than any other when summed over all possible functions or applications. This paper provides examples to illustrate this fact, but also explains how a bias or learning algorithm can be better than another in practice when the probability of the occurrence of functions is taken into account. It shows how domain knowledge and an understanding of the conditions under which each learning algorithm performs well can be used to increase the probability of accurate generalization, and identifies several of the conditions that should be considered when attempting to select an appropriate bias for a particular problem. ",
"neighbors": [
318,
690
],
"mask": "Validation"
},
{
"node_id": 446,
"label": 4,
"text": "Title: H-learning: A Reinforcement Learning Method to Optimize Undiscounted Average Reward \nAbstract: In this paper, we introduce a model-based reinforcement learning method called H-learning, which optimizes undiscounted average reward. We compare it with three other reinforcement learning methods in the domain of scheduling Automatic Guided Vehicles, transportation robots used in modern manufacturing plants and facilities. The four methods differ along two dimensions. They are either model-based or model-free, and optimize discounted total reward or undiscounted average reward. Our experimental results indicate that H-learning is more robust with respect to changes in the domain parameters, and in many cases, converges in fewer steps to better average reward per time step than all the other methods. An added advantage is that unlike the other methods it does not have any parameters to tune.",
"neighbors": [
552,
554
],
"mask": "Train"
},
{
"node_id": 447,
"label": 2,
"text": "Title: A Smooth Converse Lyapunov Theorem for Robust Stability \nAbstract: This paper presents a Converse Lyapunov Function Theorem motivated by robust control analysis and design. Our result is based upon, but generalizes, various aspects of well-known classical theorems. In a unified and natural manner, it (1) allows arbitrary bounded time-varying parameters in the system description, (2) deals with global asymptotic stability, (3) results in smooth (infinitely differentiable) Lyapunov functions, and (4) applies to stability with respect to not necessarily compact invariant sets. 1. Introduction. This work is motivated by problems of robust nonlinear stabilization. One of our main ",
"neighbors": [
630,
1471,
1501
],
"mask": "Test"
},
{
"node_id": 448,
"label": 1,
"text": "Title: Grounding Robotic Control with Genetic Neural Networks \nAbstract: Technical Report AI94-223 May 1994 Abstract An important but often neglected problem in the field of Artificial Intelligence is that of grounding systems in their environment such that the representations they manipulate have inherent meaning for the system. Since humans rely so heavily on semantics, it seems likely that the grounding is crucial to the development of truly intelligent behavior. This study investigates the use of simulated robotic agents with neural network processors as part of a method to ensure grounding. Both the topology and weights of the neural networks are optimized through genetic algorithms. Although such comprehensive optimization is difficult, the empirical evidence gathered here shows that the method is not only tractable but quite fruitful. In the experiments, the agents evolved a wall-following control strategy and were able to transfer it to novel environments. Their behavior suggests that they were also learning to build cognitive maps. ",
"neighbors": [
163,
191
],
"mask": "Validation"
},
{
"node_id": 449,
"label": 0,
"text": "Title: Correcting Imperfect Domain Theories: A Knowledge-Level Analysis \nAbstract: Explanation-Based Learning [Mitchell et al., 1986; DeJong and Mooney, 1986] has shown promise as a powerful analytical learning technique. However, EBL is severely hampered by the requirement of a complete and correct domain theory for successful learning to occur. Clearly, in non-trivial domains, developing such a domain theory is a nearly impossible task. Therefore, much research has been devoted to understanding how an imperfect domain theory can be corrected and extended during system performance. In this paper, we present a characterization of this problem, and use it to analyze past research in the area. Past characterizations of the problem (e.g, [Mitchell et al., 1986; Rajamoney and DeJong, 1987]) have viewed the types of performance errors caused by a faulty domain theory as primary. In contrast, we focus primarily on the types of knowledge deficiencies present in the theory, and from these derive the types of performance errors that can result. Correcting the theory can be viewed as a search through the space of possible domain theories, with a variety of knowledge sources that can be used to guide the search. We examine the types of knowledge used by a variety of past systems for this purpose. The hope is that this analysis will indicate the need for a \"universal weak method\" of domain theory correction, in which different sources of knowledge for theory correction can be freely and flexibly combined. ",
"neighbors": [
479,
566,
638,
1539
],
"mask": "Test"
},
{
"node_id": 450,
"label": 3,
"text": "Title: Mapping Bayesian Networks to Boltzmann Machines \nAbstract: We study the task of tnding a maximal a posteriori (MAP) instantiation of Bayesian network variables, given a partial value assignment as an initial constraint. This problem is known to be NP-hard, so we concentrate on a stochastic approximation algorithm, simulated annealing. This stochastic algorithm can be realized as a sequential process on the set of Bayesian network variables, where only one variable is allowed to change at a time. Consequently, the method can become impractically slow as the number of variables increases. We present a method for mapping a given Bayesian network to a massively parallel Bolztmann machine neural network architecture, in the sense that instead of using the normal sequential simulated annealing algorithm, we can use a massively parallel stochastic process on the Boltzmann machine architecture. The neural network updating process provably converges to a state which solves a given MAP task.",
"neighbors": [
646,
954,
2558
],
"mask": "Validation"
},
{
"node_id": 451,
"label": 4,
"text": "Title: Parameterized Heuristics for Intelligent Adaptive Network Routing in Large Communication Networks \nAbstract: Parameterized heuristics offers an elegant and powerful theoretical framework for design and analysis of autonomous adaptive communication networks. Routing of messages in such networks presents a real-time instance of a multi-criterion optimization problem in a dynamic and uncertain environment. This paper describes a framework for heuristic routing in large networks. The effectiveness of the heuristic routing mechanism upon which Quo Vadis is based is described as part of a simulation study within a network with grid topology. A formal analysis of the underlying principles is presented through the incremental design of a set of heuristic decision functions that can be used to guide messages along a near-optimal (e.g., minimum delay) path in a large network. This paper carefully derives the properties of such heuristics under a set of simplifying assumptions about the network topology and load dynamics and identify the conditions under which they are guaranteed to route messages along an optimal path. The paper concludes with a discussion of the relevance of the theoretical results presented in the paper to the design of intelligent autonomous adaptive communication networks and an outline of some directions of future research.",
"neighbors": [
552,
2537,
2666
],
"mask": "Test"
},
{
"node_id": 452,
"label": 3,
"text": "Title: Principal Curve Clustering With Noise \nAbstract: Technical Report 317 Department of Statistics University of Washington. 1 Derek Stanford is Graduate Research Assistant and Adrian E. Raftery is Professor of Statistics and Sociology, both at the Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322, USA. E-mail: stanford@stat.washington.edu and raftery@stat.washington.edu. Web: http://www.stat.washington.edu/raftery. This research was supported by ONR grants N00014-96-1-0192 and N00014-96-1-0330. The authors are grateful to Simon Byers, Gilles Celeux and Christian Posse for helpful discussions. ",
"neighbors": [
117,
347,
513
],
"mask": "Train"
},
{
"node_id": 453,
"label": 6,
"text": "Title: How to Use Expert Advice (Extended Abstract) \nAbstract: We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We give implications of this result on the performance of batch learning algorithms in a PAC setting which improve on the best results currently known in this context. We also extend our analysis to the case in which log loss is used instead of the expected number of mistakes. ",
"neighbors": [
9,
514,
549,
591,
706,
876,
1025,
1124,
1269,
1358,
1449,
1566,
1661,
1712,
2034,
2059,
2092,
2098,
2099,
2156,
2354,
2455,
2618
],
"mask": "Train"
},
{
"node_id": 454,
"label": 0,
"text": "Title: Towards Formalizations in Case-Based Reasoning for Synthesis \nAbstract: This paper presents the formalization of a novel approach to structural similarity assessment and adaptation in case-based reasoning (Cbr) for synthesis. The approach has been informally presented, exemplified, and implemented for the domain of industrial building design (Borner 1993). By relating the approach to existing theories we provide the foundation of its systematic evaluation and appropriate usage. Cases, the primary repository of knowledge, are represented structurally using an algebraic approach. Similarity relations provide structure preserving case modifications modulo the underlying algebra and an equational theory over the algebra (so available). This representation of a modeled universe of discourse enables theory-based inference of adapted solutions. The approach enables us to incorporate formally generalization, abstraction, geometrical transformation, and their combinations into Cbr. ",
"neighbors": [
183,
539,
541,
1368
],
"mask": "Train"
},
{
"node_id": 455,
"label": 4,
"text": "Title: Learning from an Automated Training Agent \nAbstract: A learning agent employing reinforcement learning is hindered because it only receives the critic's sparse and weakly informative training information. We present an approach in which an automated training agent may also provide occasional instruction to the learner in the form of actions for the learner to perform. The learner has access to both the critic's feedback and the trainer's instruction. In the experiments, we vary the level of the trainer's interaction with the learner, from allowing the trainer to instruct the learner at almost every time step, to not allowing the trainer to respond at all. We also vary a parameter that controls how the learner incorporates the trainer's actions. The results show significant reductions in the average number of training trials necessary to learn to perform the task.",
"neighbors": [
374,
552
],
"mask": "Validation"
},
{
"node_id": 456,
"label": 6,
"text": "Title: Boosting a weak learning algorithm by majority To be published in Information and Computation \nAbstract: We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper \"The strength of weak learnability\", and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general upper bounds known today. We show that the number of hypotheses that are combined by our algorithm is the smallest number possible. Other outcomes of our analysis are results regarding the representational power of threshold circuits, the relation between learnability and compression, and a method for parallelizing PAC learning algorithms. We provide extensions of our algorithms to cases in which the concepts are not binary and to the case where the accuracy of the learning algorithm depends on the distribution of the instances. ",
"neighbors": [
25,
549,
569,
672,
1296,
1560,
1748,
2099,
2455,
2653
],
"mask": "Train"
},
{
"node_id": 457,
"label": 0,
"text": "Title: A Computational Model of Ratio Decidendi \nAbstract: This paper proposes a model of ratio decidendi as a justification structure consisting of a series of reasoning steps, some of which relate abstract predicates to other abstract predicates and some of which relate abstract predicates to specific facts. This model satisfies an important set of characteristics of ratio decidendi identified from the jurisprudential literature. In particular, the model shows how the theory under which a case is decided controls its precedential effect. By contrast, a purely exemplar-based model of ratio decidendi fails to account for the dependency of prece-dential effect on the theory of decision. ",
"neighbors": [
166,
649
],
"mask": "Train"
},
{
"node_id": 458,
"label": 2,
"text": "Title: Quantifying neighbourhood preservation in topographic mappings \nAbstract: Mappings that preserve neighbourhood relationships are important in many contexts, from neurobiology to multivariate data analysis. It is important to be clear about precisely what is meant by preserving neighbourhoods. At least three issues have to be addressed: how neighbourhoods are defined, how a perfectly neighbourhood preserving mapping is defined, and how an objective function for measuring discrepancies from perfect neighbour-hood preservation is defined. We review several standard methods, and using a simple example mapping problem show that the different assumptions of each lead to non-trivially different answers. We also introduce a particular measure for topographic distortion, which has the form of a quadratic assignment problem. Many previous methods are closely related to this measure, which thus serves to unify disparate approaches.",
"neighbors": [
745,
747
],
"mask": "Train"
},
{
"node_id": 459,
"label": 6,
"text": "Title: Pac Learning, Noise, and Geometry \nAbstract: This paper describes the probably approximately correct model of concept learning, paying special attention to the case where instances are points in Euclidean n-space. The problem of learning from noisy training data is also studied. ",
"neighbors": [
109,
130,
267,
640,
1705
],
"mask": "Train"
},
{
"node_id": 460,
"label": 4,
"text": "Title: Learning Roles: Behavioral Diversity in Robot Teams \nAbstract: This paper describes research investigating behavioral specialization in learning robot teams. Each agent is provided a common set of skills (motor schema-based behavioral assemblages) from which it builds a task-achieving strategy using reinforcement learning. The agents learn individually to activate particular behavioral assemblages given their current situation and a reward signal. The experiments, conducted in robot soccer simulations, evaluate the agents in terms of performance, policy convergence, and behavioral diversity. The results show that in many cases, robots will automatically diversify by choosing heterogeneous behaviors. The degree of diversification and the performance of the team depend on the reward structure. When the entire team is jointly rewarded or penalized (global reinforcement), teams tend towards heterogeneous behavior. When agents are provided feedback individually (local reinforcement), they converge to identical policies. ",
"neighbors": [
148,
281
],
"mask": "Test"
},
{
"node_id": 461,
"label": 2,
"text": "Title: Product Unit Learning constructive algorithm is then introduced which adds product units to a network\nAbstract: Product units provide a method of automatically learning the higher-order input combinations required for the efficient synthesis of Boolean logic functions by neural networks. Product units also have a higher information capacity than sigmoidal networks. However, this activation function has not received much attention in the literature. A possible reason for this is that one encounters some problems when using standard backpropagation to train networks containing these units. This report examines these problems, and evaluates the performance of three training algorithms on networks of this type. Empirical results indicate that the error surface of networks containing product units have more local minima than corresponding networks with summation units. For this reason, a combination of local and global training algorithms were found to provide the most reliable convergence. We then investigate how `hints' can be added to the training algorithm. By extracting a common frequency from the input weights, and training this frequency separately, we show that convergence can be accelerated. In order to compare their performance with other transfer functions, product units were implemented as candidate units in the Cascade Correlation (CC) [13] system. Using these candidate units resulted in smaller networks which trained faster than when the any of the standard (three sigmoidal types and one Gaussian) transfer functions were used. This superiority was confirmed when a pool of candidate units of four different nonlinear activation functions were used, which have to compete for addition to the network. Extensive simulations showed that for the problem of implementing random Boolean logic functions, product units are always chosen above any of the other transfer functions.",
"neighbors": [
427
],
"mask": "Train"
},
{
"node_id": 462,
"label": 2,
"text": "Title: Draft Symbolic Representation of Neural Networks \nAbstract: An early and shorter version of this paper has been accepted for presenta tion at IJCAI'95. ",
"neighbors": [
187,
1644,
2582
],
"mask": "Test"
},
{
"node_id": 463,
"label": 4,
"text": "Title: A Cognitive Model of Learning to Navigate \nAbstract: Our goal is to develop a cognitive model of how humans acquire skills on complex cognitive tasks. We are pursuing this goal by designing computational architectures for the NRL Navigation task, which requires competent sensorimotor coordination. In this paper, we analyze the NRL Navigation task in depth. We then use data from experiments with human subjects learning this task to guide us in constructing a cognitive model of skill acquisition for the task. Verbal protocol data augments the black box view provided by execution traces of inputs and outputs. Computational experiments allow us to explore a space of alternative architectures for the task, guided by the quality of fit to human performance data. ",
"neighbors": [
3,
333,
477,
483,
564
],
"mask": "Validation"
},
{
"node_id": 464,
"label": 3,
"text": "Title: On the Logic of Iterated Belief Revision \nAbstract: We show in this paper that the AGM postulates are too week to ensure the rational preservation of conditional beliefs during belief revision, thus permitting improper responses to sequences of observations. We remedy this weakness by proposing four additional postulates, which are sound relative to a qualitative version of probabilistic conditioning. Contrary to the AGM framework, the proposed postulates characterize belief revision as a process which may depend on elements of an epistemic state that are not necessarily captured by a belief set. We also show that a simple modification to the AGM framework can allow belief revision to be a function of epistemic states. We establish a model-based representation theorem which characterizes the proposed postulates and constrains, in turn, the way in which entrenchment orderings may be transformed under iterated belief revision. ",
"neighbors": [
275,
279,
467,
573
],
"mask": "Validation"
},
{
"node_id": 465,
"label": 4,
"text": "Title: Strategy Learning with Multilayer Connectionist Representations 1 \nAbstract: Results are presented that demonstrate the learning and fine-tuning of search strategies using connectionist mechanisms. Previous studies of strategy learning within the symbolic, production-rule formalism have not addressed fine-tuning behavior. Here a two-layer connectionist system is presented that develops its search from a weak to a task-specific strategy and fine-tunes its performance. The system is applied to a simulated, real-time, balance-control task. We compare the performance of one-layer and two-layer networks, showing that the ability of the two-layer network to discover new features and thus enhance the original representation is critical to solving the balancing task. ",
"neighbors": [
85,
103,
118,
294,
466,
523,
565,
566,
2027,
2368,
2672
],
"mask": "Train"
},
{
"node_id": 466,
"label": 4,
"text": "Title: On the Computational Economics of Reinforcement Learning \nAbstract: Following terminology used in adaptive control, we distinguish between indirect learning methods, which learn explicit models of the dynamic structure of the system to be controlled, and direct learning methods, which do not. We compare an existing indirect method, which uses a conventional dynamic programming algorithm, with a closely related direct reinforcement learning method by applying both methods to an infinite horizon Markov decision problem with unknown state-transition probabilities. The simulations show that although the direct method requires much less space and dramatically less computation per control action, its learning ability in this task is superior to, or compares favorably with, that of the more complex indirect method. Although these results do not address how the methods' performances compare as problems become more difficult, they suggest that given a fixed amount of computational power available per control action, it may be better to use a direct reinforcement learning method augmented with indirect techniques than to devote all available resources to a computation-ally costly indirect method. Comprehensive answers to the questions raised by this study depend on many factors making up the eco nomic context of the computation.",
"neighbors": [
16,
294,
465,
552,
565,
566
],
"mask": "Validation"
},
{
"node_id": 467,
"label": 3,
"text": "Title: A Knowledge-Based Framework for Belief Change Part I: Foundations \nAbstract: We propose a general framework in which to study belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes ' if he knows that ' is true in all the worlds he considers most plausible. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagin [HF89] for modeling knowledge in multi-agent systems. We show that our framework is quite expressive and lets us model in a natural way a number of different scenarios for belief change. For example, we show how we can capture an analogue to prior probabilities, which can be updated by \"conditioning\". In a related paper, we show how the two best studied scenarios, belief revision and belief update, fit into the framework. ",
"neighbors": [
270,
276,
342,
464,
495,
729,
2000,
2016
],
"mask": "Train"
},
{
"node_id": 468,
"label": 3,
"text": "Title: Adaptive Markov Chain Monte Carlo through Regeneration Summary \nAbstract: Markov chain Monte Carlo (MCMC) is used for evaluating expectations of functions of interest under a target distribution . This is done by calculating averages over the sample path of a Markov chain having as its stationary distribution. For computational efficiency, the Markov chain should be rapidly mixing. This can sometimes be achieved only by careful design of the transition kernel of the chain, on the basis of a detailed preliminary exploratory analysis of . An alternative approach might be to allow the transition kernel to adapt whenever new features of are encountered during the MCMC run. However, if such adaptation occurs infinitely often, the stationary distribution of the chain may be disturbed. We describe a framework, based on the concept of Markov chain regeneration, which allows adaptation to occur infinitely often, but which does not disturb the stationary distribution of the chain or the consistency of sample-path averages. Key Words: Adaptive method; Bayesian inference; Gibbs sampling; Markov chain Monte Carlo; ",
"neighbors": [
182,
491,
896,
1713,
2377
],
"mask": "Train"
},
{
"node_id": 469,
"label": 2,
"text": "Title: Interpolation Models with Multiple \nAbstract: A traditional interpolation model is characterized by the choice of reg-ularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant ff, and the noise model has a single parameter fi. The ratio ff=fi alone is responsible for determining globally all these attributes of the interpolant: its `complexity', `flexibility', `smoothness', `characteristic scale length', and `characteristic amplitude'. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of `conditional convexity' when designing models with many hyperparameters. We apply the new models to the interpolation of neuronal spike data and demonstrate a substantial improvement in generalization error. ",
"neighbors": [
78,
160,
214
],
"mask": "Test"
},
{
"node_id": 470,
"label": 0,
"text": "Title: What Daimler-Benz has learned as an industrial partner from the Machine Learning Project StatLog \nAbstract: Author of this paper was co-ordinator of the Machine Learning project StatLog during 1990-1993. This project was supported financially by the European Community. The main aim of StatLog was to evaluate different learning algorithms using real industrial and commercial applications. As an industrial partner and contributor, Daimler-Benz has introduced different applications to Stat-Log among them fault diagnosis, letter and digit recognition, credit-scoring and prediction of the number of registered trucks. We have learned a lot of lessons from this project which have effected our application oriented research in the field of Machine Learning (ML) in Daimler-Benz. We have distinguished that, especially, more research is necessary to prepare the ML-algorithms to handle the real industrial and commercial applications. In this paper we describe, shortly, the Daimler-Benz applications in StatLog, we discuss shortcomings of the applied ML-algorithms and finally we outline the fields where we think further research is necessary. ",
"neighbors": [
478
],
"mask": "Test"
},
{
"node_id": 471,
"label": 4,
"text": "Title: In Improving Elevator Performance Using Reinforcement Learning \nAbstract: This paper describes the application of reinforcement learning (RL) to the difficult real world problem of elevator dispatching. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable and they are nonstationary due to changing passenger arrival rates. In addition, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale stochastic dynamic optimization problem of practical utility.",
"neighbors": [
2,
103,
295,
621,
1045,
1632,
1859
],
"mask": "Validation"
},
{
"node_id": 472,
"label": 4,
"text": "Title: Category: Control, Navigation and Planning. Key words: Reinforcement learning, Exploration, Hidden state. Prefer oral presentation.\nAbstract: This paper presents Fringe Exploration, a technique for efficient exploration in partially observable domains. The key idea, (applicable to many exploration techniques), is to keep statistics in the space of possible short-term memories, instead of in the agent's current state space. Experimental results in a partially observable maze and in a difficult driving task with visual routines show dramatic performance improvements.",
"neighbors": [
552,
566,
650,
1006
],
"mask": "Validation"
},
{
"node_id": 473,
"label": 4,
"text": "Title: Improving Policies without Measuring Merits \nAbstract: Performing policy iteration in dynamic programming should only require knowledge of relative rather than absolute measures of the utility of actions what Baird (1993) calls the advantages of actions at states. Nevertheless, existing methods in dynamic programming (including Baird's) compute some form of absolute utility function. For smooth problems, advantages satisfy two differential consistency conditions (including the requirement that they be free of curl), and we show that enforcing these can lead to appropriate policy improvement solely in terms of advantages.",
"neighbors": [
552,
1459
],
"mask": "Train"
},
{
"node_id": 474,
"label": 2,
"text": "Title: Protein Structure Prediction: Selecting Salient Features from Large Candidate Pools \nAbstract: We introduce a parallel approach, \"DT-Select,\" for selecting features used by inductive learning algorithms to predict protein secondary structure. DT-Select is able to rapidly choose small, nonredundant feature sets from pools containing hundreds of thousands of potentially useful features. It does this by building a decision tree, using features from the pool, that classifies a set of training examples. The features included in the tree provide a compact description of the training data and are thus suitable for use as inputs to other inductive learning algorithms. Empirical experiments in the protein secondary-structure task, in which sets of complex features chosen by DT-Select are used to augment a standard artificial neural network representation, yield surprisingly little performance gain, even though features are selected from very large feature pools. We discuss some possible reasons for this result. 1 ",
"neighbors": [
635,
698
],
"mask": "Train"
},
{
"node_id": 475,
"label": 1,
"text": "Title: Basic PSugal an extension package for the development of Distributed Genetic Algorithms \nAbstract: This paper presents the extension package developed by the author at the Faculty of Sciences and Technology of the New University of Lisbon, designed for experimentation with Coarse-Grained Distributed Genetic Algorithms (DGA). The package was implemented as an extension to the Basic Sugal system, developed by Andrew Hunter at the University of Sunderland, U.K., which is primarily intended to be used in the research of Sequential or Serial Genetic Algorithms (SGA). ",
"neighbors": [
168
],
"mask": "Train"
},
{
"node_id": 476,
"label": 2,
"text": "Title: A self-organizing multiple-view representation of 3D objects \nAbstract: We explore representation of 3D objects in which several distinct 2D views are stored for each object. We demonstrate the ability of a two-layer network of thresholded summation units to support such representations. Using unsupervised Hebbian relaxation, the network learned to recognize ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects. ",
"neighbors": [
605,
1056,
1091
],
"mask": "Validation"
},
{
"node_id": 477,
"label": 4,
"text": "Title: Forward models: Supervised learning with a distal teacher \nAbstract: Internal models of the environment have an important role to play in adaptive systems in general and are of particular importance for the supervised learning paradigm. In this paper we demonstrate that certain classical problems associated with the notion of the \"teacher\" in supervised learning can be solved by judicious use of learned internal models as components of the adaptive system. In particular, we show how supervised learning algorithms can be utilized in cases in which an unknown dynamical system intervenes between actions and desired outcomes. Our approach applies to any supervised learning algorithm that is capable of learning in multi-layer networks. *This paper is a revised version of MIT Center for Cognitive Science Occasional Paper #40. We wish to thank Michael Mozer, Andrew Barto, Robert Jacobs, Eric Loeb, and James McClelland for helpful comments on the manuscript. This project was supported in part by BRSG 2 S07 RR07047-23 awarded by the Biomedical Research Support Grant Program, Division of Research Resources, National Institutes of Health, by a grant from ATR Auditory and Visual Perception Research Laboratories, by a grant from Siemens Corporation, by a grant from the Human Frontier Science Program, and by grant N00014-90-J-1942 awarded by the Office of Naval Research. ",
"neighbors": [
229,
294,
333,
427,
463,
565,
566,
745,
1645,
1766,
2409,
2658
],
"mask": "Train"
},
{
"node_id": 478,
"label": 0,
"text": "Title: An Improved Algorithm for Incremental Induction of Decision Trees \nAbstract: Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. ",
"neighbors": [
52,
96,
215,
218,
227,
274,
303,
396,
470,
479,
497,
520,
523,
565,
568,
618,
754
],
"mask": "Train"
},
{
"node_id": 479,
"label": 0,
"text": "Title: Learning physical descriptions from functional definitions, examples, Learning from examples: The effect of different conceptual\nAbstract: Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. ",
"neighbors": [
92,
136,
147,
418,
449,
478,
649,
1354,
1627,
2300,
2636
],
"mask": "Train"
},
{
"node_id": 480,
"label": 2,
"text": "Title: Modelling the Manifolds of Images of Handwritten Digits \nAbstract: Technical Report 94-07 February 7, 1994 (updated April 25, 1994) This paper will appear in Proceedings of the Eleventh International Conference on Machine Learning. Abstract This paper presents an algorithm for incremental induction of decision trees that is able to handle both numeric and symbolic variables. In order to handle numeric variables, a new tree revision operator called `slewing' is introduced. Finally, a non-incremental method is given for finding a decision tree based on a direct metric of a candidate tree. ",
"neighbors": [
257,
667,
2270,
2570
],
"mask": "Train"
},
{
"node_id": 481,
"label": 6,
"text": "Title: The Weighted Majority Algorithm \nAbstract: fl This research was primarily conducted while this author was at the University of Calif. at Santa Cruz with support from ONR grant N00014-86-K-0454, and at Harvard University, supported by ONR grant N00014-85-K-0445 and DARPA grant AFOSR-89-0506. Current address: NEC Research Institute, 4 Independence Way, Princeton, NJ 08540. E-mail address: nickl@research.nj.nec.com. y Supported by ONR grants N00014-86-K-0454 and N00014-91-J-1162. Part of this research was done while this author was on sabbatical at Aiken Computation Laboratory, Harvard, with partial support from the ONR grants N00014-85-K-0445 and N00014-86-K-0454. Address: Department of Computer Science, University of California at Santa Cruz. E-mail address: manfred@cs.ucsc.edu. ",
"neighbors": [
9
],
"mask": "Train"
},
{
"node_id": 482,
"label": 0,
"text": "Title: Simple Selection of Utile Control Rules in Speedup Learning \nAbstract: Many recent approaches to avoiding the utility problem in speedup learning rely on sophisticated utility measures and significant numbers of training data to accurately estimate the utility of control knowledge. Empirical results presented here and elsewhere indicate that a simple selection strategy of retaining all control rules derived from a training problem explanation quickly defines an efficient set of control knowledge from few training problems. This simple selection strategy provides a low-cost alternative to example-intensive approaches for improving the speed of a problem solver.",
"neighbors": [
13,
251,
551,
578
],
"mask": "Train"
},
{
"node_id": 483,
"label": 4,
"text": "Title: The Parti-game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-spaces \nAbstract: Parti-game is a new algorithm for learning feasible trajectories to goal regions in high dimensional continuous state-spaces. In high dimensions it is essential that learning does not plan uniformly over a state-space. Parti-game maintains a decision-tree partitioning of state-space and applies techniques from game-theory and computational geometry to efficiently and adaptively concentrate high resolution only on critical areas. The current version of the algorithm is designed to find feasible paths or trajectories to goal regions in high dimensional spaces. Future versions will be designed to find a solution that optimizes a real-valued criterion. Many simulated problems have been tested, ranging from two-dimensional to nine-dimensional state-spaces, including mazes, path planning, non-linear dynamics, and planar snake robots in restricted spaces. In all cases, a good solution is found in less than ten trials and a few minutes. ",
"neighbors": [
277,
294,
367,
463,
552,
566,
650,
749,
933
],
"mask": "Train"
},
{
"node_id": 484,
"label": 3,
"text": "Title: Comparing Predictive Inference Methods for Discrete Domains \nAbstract: Predictive inference is seen here as the process of determining the predictive distribution of a discrete variable, given a data set of training examples and the values for the other problem domain variables. We consider three approaches for computing this predictive distribution, and assume that the joint probability distribution for the variables belongs to a set of distributions determined by a set of parametric models. In the simplest case, the predictive distribution is computed by using the model with the maximum a posteriori (MAP) posterior probability. In the evidence approach, the predictive distribution is obtained by averaging over all the individual models in the model family. In the third case, we define the predictive distribution by using Rissanen's new definition of stochastic complexity. Our experiments performed with the family of Naive Bayes models suggest that when using all the data available, the stochastic complexity approach produces the most accurate predictions in the log-score sense. However, when the amount of available training data is decreased, the evidence approach clearly outperforms the two other approaches. The MAP predictive distribution is clearly inferior in the log-score sense to the two more sophisticated approaches, but for the 0/1-score the MAP approach may still in some cases produce the best results. ",
"neighbors": [
641,
642,
1574
],
"mask": "Validation"
},
{
"node_id": 485,
"label": 3,
"text": "Title: Bayesian Case-Based Reasoning with Neural Networks \nAbstract: Given a problem, a case-based reasoning (CBR) system will search its case memory and use the stored cases to find the solution, possibly modifying retrieved cases to adapt to the required input specifications. In this paper we introduce a neural network architecture for efficient case-based reasoning. We show how a rigorous Bayesian probability propagation algorithm can be implemented as a feedforward neural network and adapted for CBR. In our approach the efficient indexing problem of CBR is naturally implemented by the parallel architecture, and heuristic matching is replaced by a probability metric. This allows our CBR to perform theoretically sound Bayesian reasoning. We also show how the probability propagation actually offers a solution to the adaptation problem in a very natural way. ",
"neighbors": [
711,
1838,
2380,
2514,
2561
],
"mask": "Train"
},
{
"node_id": 486,
"label": 0,
"text": "Title: CASE-BASED CREATIVE DESIGN \nAbstract: Designers across a variety of domains engage in many of the same creative activities. Since much creativity stems from using old solutions in novel ways, we believe that case-based reasoning can be used to explain many creative design processes. ",
"neighbors": [
64,
231,
284,
285,
1148,
1278,
1597,
2276
],
"mask": "Train"
},
{
"node_id": 487,
"label": 2,
"text": "Title: Language as a dynamical system \nAbstract: Designers across a variety of domains engage in many of the same creative activities. Since much creativity stems from using old solutions in novel ways, we believe that case-based reasoning can be used to explain many creative design processes. ",
"neighbors": [
291,
538
],
"mask": "Train"
},
{
"node_id": 488,
"label": 6,
"text": "Title: Prediction, Learning, Uniform Convergence, and Scale-sensitive Dimensions \nAbstract: We present a new general-purpose algorithm for learning classes of [0; 1]-valued functions in a generalization of the prediction model, and prove a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi and Haussler. We give lower bounds implying that our upper bounds cannot be improved by more than a constant factor in general. We apply this result, together with techniques due to Haussler and to Benedek and Itai, to obtain new upper bounds on packing numbers in terms of this scale-sensitive notion of dimension. Using a different technique, we obtain new bounds on packing numbers in terms of Kearns and Schapire's fat-shattering function. We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of agnostic learning. For each * > 0, we establish weaker sufficient and stronger necessary conditions for a class of [0; 1]-valued functions to be agnostically learnable to within *, and to be an *-uniform Glivenko-Cantelli class. ",
"neighbors": [
109,
549,
591,
2053
],
"mask": "Train"
},
{
"node_id": 489,
"label": 2,
"text": "Title: Multiple Network Systems (Minos) Modules: Task Division and Module Discrimination 1 \nAbstract: It is widely considered an ultimate connectionist objective to incorporate neural networks into intelligent systems. These systems are intended to possess a varied repertoire of functions enabling adaptable interaction with a non-static environment. The first step in this direction is to develop various neural network algorithms and models, the second step is to combine such networks into a modular structure that might be incorporated into a workable system. In this paper we consider one aspect of the second point, namely: processing reliability and hiding of wetware details. Pre- sented is an architecture for a type of neural expert module, named an Authority. An Authority consists of a number of Minos modules. Each of the Minos modules in an Authority has the same processing capabilities, but varies with respect to its particular specialization to aspects of the problem domain. The Authority employs the collection of Minoses like a panel of experts. The expert with the highest confidence is believed, and it is the answer and confidence quotient that are transmitted to other levels in a system hierarchy. ",
"neighbors": [
46,
238,
301,
747,
1815,
2670
],
"mask": "Train"
},
{
"node_id": 490,
"label": 4,
"text": "Title: Learning policies for partially observable environments: Scaling up \nAbstract: Partially observable Markov decision processes (pomdp's) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback. While the study of pomdp's is motivated by a need to address realistic problems, existing techniques for finding optimal behavior do not appear to scale well and have been unable to find satisfactory policies for problems with more than a dozen states. After a brief review of pomdp's, this paper discusses several simple solution methods and shows that all are capable of finding near-optimal policies for a selection of extremely small pomdp's taken from the learning literature. In contrast, we show that none are able to solve a slightly larger and noisier problem based on robot navigation. We find that a combination of two novel approaches performs well on these problems and suggest methods for scaling to even larger and more complicated domains.",
"neighbors": [
5,
6,
45,
213,
220,
492,
734
],
"mask": "Validation"
},
{
"node_id": 491,
"label": 3,
"text": "Title: Self Regenerative Markov Chain Monte Carlo Summary \nAbstract: We propose a new method of construction of Markov chains with a given stationary distribution . This method is based on construction of an auxiliary chain with some other stationary distribution and picking elements of this auxiliary chain a suitable number of times. The proposed method has many advantages over its rivals. It is easy to implement; it provides a simple analysis; it can be faster and more efficient than the currently available techniques and it can also be adapted during the course of the simulation. We make theoretical and numerical comparisons of the characteristics of the proposed algorithm with some other MCMC techniques. ",
"neighbors": [
182,
468,
2318,
2377
],
"mask": "Train"
},
{
"node_id": 492,
"label": 4,
"text": "Title: Approximating Optimal Policies for Partially Observable Stochastic Domains \nAbstract: The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively and many methods are known for determining optimal courses of action, or policies. The more realistic case where state information is only partially observable, Partially Observable Markov Decision Processes (POMDPs), have received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which can improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases. ",
"neighbors": [
45,
213,
490,
565,
734,
1186,
1741,
2323,
2419
],
"mask": "Validation"
},
{
"node_id": 493,
"label": 1,
"text": "Title: Parallel Search for Neural Network Under the guidance of \nAbstract: The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively and many methods are known for determining optimal courses of action, or policies. The more realistic case where state information is only partially observable, Partially Observable Markov Decision Processes (POMDPs), have received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which can improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases. ",
"neighbors": [
427
],
"mask": "Validation"
},
{
"node_id": 494,
"label": 2,
"text": "Title: Connectionist Modeling of the Fast Mapping Phenomenon \nAbstract: The problem of making optimal decisions in uncertain conditions is central to Artificial Intelligence. If the state of the world is known at all times, the world can be modeled as a Markov Decision Process (MDP). MDPs have been studied extensively and many methods are known for determining optimal courses of action, or policies. The more realistic case where state information is only partially observable, Partially Observable Markov Decision Processes (POMDPs), have received much less attention. The best exact algorithms for these problems can be very inefficient in both space and time. We introduce Smooth Partially Observable Value Approximation (SPOVA), a new approximation method that can quickly yield good approximations which can improve over time. This method can be combined with reinforcement learning methods, a combination that was very effective in our test cases. ",
"neighbors": [
427,
747
],
"mask": "Train"
},
{
"node_id": 495,
"label": 3,
"text": "Title: Abduction to Plausible Causes: An Event-based Model of Belief Update \nAbstract: The Katsuno and Mendelzon (KM) theory of belief update has been proposed as a reasonable model for revising beliefs about a changing world. However, the semantics of update relies on information which is not readily available. We describe an alternative semantical view of update in which observations are incorporated into a belief set by: a) explaining the observation in terms of a set of plausible events that might have caused that observation; and b) predicting further consequences of those explanations. We also allow the possibility of conditional explanations. We show that this picture naturally induces an update operator conforming to the KM postulates under certain assumptions. However, we argue that these assumptions are not always reasonable, and they restrict our ability to integrate update with other forms of revision when reasoning about action. fl Some parts of this report appeared in preliminary form as An Event-Based Abductive Model of Update, Proc. of Tenth Canadian Conf. on in AI, Banff, Alta., (1994). ",
"neighbors": [
270,
339,
342,
467
],
"mask": "Validation"
},
{
"node_id": 496,
"label": 2,
"text": "Title: BRAINSTRUCTURED CONNECTIONIST NETWORKS THAT PERCEIVE AND LEARN \nAbstract: This paper specifies the main features of Brain-like, Neuronal, and Connectionist models; argues for the need for, and usefulness of, appropriate successively larger brain-like structures; and examines parallel-hierarchical Recognition Cone models of perception from this perspective, as examples of such structures. The anatomy, physiology, behavior, and development of the visual system are briefly summarized to motivate the architecture of brain-structured networks for perceptual recognition. Results are presented from simulations of carefully pre-designed Recognition Cone structures that perceive objects (e.g., houses) in digitized photographs. A framework for perceptual learning is introduced, including mechanisms for generation-discovery (feedback-guided growth of new links and nodes, subject to brain-like constraints (e.g., local receptive fields, global convergence-divergence). The information processing transforms discovered through generation are fine-tuned by feedback-guided reweight-ing of links. Some preliminary results are presented of brain-structured networks that learn to recognize simple objects (e.g., letters of the alphabet, cups, apples, bananas) through feedback-guided generation and reweighting. These show large improvements over networks that either lack brain-like structure or/and learn by reweighting of links alone. ",
"neighbors": [
501,
663,
1896,
2393
],
"mask": "Train"
},
{
"node_id": 497,
"label": 6,
"text": "Title: Decision Tree Induction Based on Efficient Tree Restructuring \nAbstract: The ability to restructure a decision tree efficiently enables a variety of approaches to decision tree induction that would otherwise be prohibitively expensive. Two such approaches are described here, one being incremental tree induction (ITI), and the other being non-incremental tree induction using a measure of tree quality instead of test quality (DMTI). These approaches and several variants offer new computational and classifier characteristics that lend themselves to particular applications. ",
"neighbors": [
478,
2342
],
"mask": "Test"
},
{
"node_id": 498,
"label": 3,
"text": "Title: A variational approach to Bayesian logistic regression models and their extensions \nAbstract: We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that accurate variational techniques can be used to obtain a closed form posterior distribution over the parameters given the data thereby yielding a posterior predictive model. The results are readily extended to (binary) belief networks. For belief networks we also derive closed form posteriors in the presence of missing values. Finally, we show that the dual of the regression problem gives a latent variable density model, the variational formulation of which leads to exactly solvable EM updates.",
"neighbors": [
107,
108,
250
],
"mask": "Train"
},
{
"node_id": 499,
"label": 3,
"text": "Title: IMPROVING THE MEAN FIELD APPROXIMATION VIA THE USE OF MIXTURE DISTRIBUTIONS \nAbstract: Mean field methods provide computationally efficient approximations to posterior probability distributions for graphical models. Simple mean field methods make a completely factorized approximation to the posterior, which is unlikely to be accurate when the posterior is multimodal. Indeed, if the posterior is multi-modal, only one of the modes can be captured. To improve the mean field approximation in such cases, we employ mixture models as posterior approximations, where each mixture component is a factorized distribution. We describe efficient methods for optimizing the parameters in these models. ",
"neighbors": [
250,
1287,
1288
],
"mask": "Train"
},
{
"node_id": 500,
"label": 4,
"text": "Title: 2-D Pole Balancing with Recurrent Evolutionary Networks \nAbstract: The success of evolutionary methods on standard control learning tasks has created a need for new benchmarks. The classic pole balancing problem is no longer difficult enough to serve as a viable yardstick for measuring the learning efficiency of these systems. In this paper we present a more difficult version to the classic problem where the cart and pole can move in a plane. We demonstrate a neuroevolution system (Enforced Sub-Populations, or ESP) that can solve this difficult problem without velocity information.",
"neighbors": [
247,
563,
1767,
2444
],
"mask": "Train"
},
{
"node_id": 501,
"label": 2,
"text": "Title: Some Biases for Efficient Learning of Spatial, Temporal, and Spatio-Temporal Patterns \nAbstract: This paper introduces and explores some representational biases for efficient learning of spatial, temporal, or spatio-temporal patterns in connectionist networks (CN) massively parallel networks of simple computing elements. It examines learning mechanisms that constructively build up network structures that encode information from environmental stimuli at successively higher resolutions as needed for the tasks (e.g., perceptual recognition) that the network has to perform. Some simple examples are presented to illustrate the the basic structures and processes used in such networks to ensure the parsimony of learned representations by guiding the system to focus its efforts at the minimal adequate resolution. Several extensions of the basic algorithm for efficient learning using multi-resolution representations of spatial, temporal, or spatio-temporal patterns are discussed. ",
"neighbors": [
174,
496,
503,
663
],
"mask": "Train"
},
{
"node_id": 502,
"label": 4,
"text": "Title: Fast Online Q() \nAbstract: Q()-learning uses TD()-methods to accelerate Q-learning. The update complexity of previous online Q() implementations based on lookup-tables is bounded by the size of the state/action space. Our faster algorithm's update complexity is bounded by the number of actions. The method is based on the observation that Q-value updates may be postponed until they are needed. ",
"neighbors": [
294,
565,
567,
747,
2536
],
"mask": "Train"
},
{
"node_id": 503,
"label": 2,
"text": "Title: Generative Learning Structures and Processes for Generalized Connectionist Networks \nAbstract: Massively parallel networks of relatively simple computing elements offer an attractive and versatile framework for exploring a variety of learning structures and processes for intelligent systems. This paper briefly summarizes some popular learning structures and processes used in such networks. It outlines a range of potentially more powerful alternatives for pattern-directed inductive learning in such systems. It motivates and develops a class of new learning algorithms for massively parallel networks of simple computing elements. We call this class of learning processes generative for they offer a set of mechanisms for constructive and adaptive determination of the network architecture the number of processing elements and the connectivity among them as a function of experience. Generative learning algorithms attempt to overcome some of the limitations of some approaches to learning in networks that rely on modification of weights on the links within an otherwise fixed network topology e.g., rather slow learning and the need for an a-priori choice of a network architecture. Several alternative designs as well as a range of control structures and processes which can be used to regulate the form and content of internal representations learned by such networks are examined. Empirical results from the study of some generative learning algorithms are briefly summarized and several extensions and refinements of such algorithms, and directions for future research are outlined. ",
"neighbors": [
174,
501,
1813,
1851,
1952,
2029,
2396
],
"mask": "Train"
},
{
"node_id": 504,
"label": 2,
"text": "Title: MANIAC: A Next Generation Neurally Based Autonomous Road Follower \nAbstract: The use of artificial neural networks in the domain of autonomous vehicle navigation has produced promising results. ALVINN [Pomerleau, 1991] has shown that a neural system can drive a vehicle reliably and safely on many different types of roads, ranging from paved paths to interstate highways. Even with these impressive results, several areas within the neural paradigm for autonomous road following still need to be addressed. These include transparent navigation between roads of different type, simultaneous use of different sensors, and generalization to road types which the neural system has never seen. The system presented here addresses these issue with a modular neural architecture which uses pre-trained ALVINN networks and a connectionist superstructure to robustly drive on many dif ferent types of roads.",
"neighbors": [
702
],
"mask": "Test"
},
{
"node_id": 505,
"label": 2,
"text": "Title: From Isolation to Cooperation: An Alternative View of a System of Experts \nAbstract: We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blen ding their individual predictions. Each expert is trained by minimizing a penalized local cross validation error using second order methods. In this way, an expert is able to adjust the size and shape of the receptive field in which its predictions are valid, and also to adjust its bias on the importance of individual input dimensions. The size and shape adjustment corresponds to finding a local distance metric, while the bias adjustment accomplishes local dimensio n-ality reduction. We derive asymptotic results for our method. In a variety of simulations we demonstrate the properties of the algorithm with respect to interference, learning speed, prediction accuracy, feature detection, and task or i-ented incremental learning. ",
"neighbors": [
74,
134
],
"mask": "Train"
},
{
"node_id": 506,
"label": 4,
"text": "Title: Dynamic Non-Bayesian Decision Making \nAbstract: The model of a non-Bayesian agent who faces a repeated game with incomplete information against Nature is an appropriate tool for modeling general agent-environment interactions. In such a model the environment state (controlled by Nature) may change arbitrarily, and the feedback/reward function is initially unknown. The agent is not Bayesian, that is he does not form a prior probability neither on the state selection strategy of Nature, nor on his reward function. A policy for the agent is a function which assigns an action to every history of observations and actions. Two basic feedback structures are considered. In one of them the perfect monitoring case the agent is able to observe the previous environment state as part of his feedback, while in the other the imperfect monitoring case all that is available to the agent is the reward obtained. Both of these settings refer to partially observable processes, where the current environment state is unknown. Our main result refers to the competitive ratio criterion in the perfect monitoring case. We prove the existence of an efficient stochastic policy that ensures that the competitive ratio is obtained at almost all stages with an arbitrarily high probability, where efficiency is measured in terms of rate of convergence. It is further shown that such an optimal policy does not exist in the imperfect monitoring case. Moreover, it is proved that in the perfect monitoring case there does not exist a deterministic policy that satisfies our long run optimality criterion. In addition, we discuss the maxmin criterion and prove that a deterministic efficient optimal strategy does exist in the imperfect monitoring case under this criterion. Finally we show that our approach to long-run optimality can be viewed as qualitative, which distinguishes it from previous work in this area.",
"neighbors": [
514
],
"mask": "Train"
},
{
"node_id": 507,
"label": 6,
"text": "Title: PAC Learning Axis-aligned Rectangles with Respect to Product Distributions from Multiple-instance Examples \nAbstract: We describe a polynomial-time algorithm for learning axis-aligned rectangles in Q d with respect to product distributions from multiple-instance examples in the PAC model. Here, each example consists of n elements of Q d together with a label indicating whether any of the n points is in the rectangle to be learned. We assume that there is an unknown product distribution D over Q d such that all instances are independently drawn according to D. The accuracy of a hypothesis is measured by the probability that it would incorrectly predict whether one of n more points drawn from D was in the rectangle to be learned. Our algorithm achieves accuracy * with probability 1 ffi in ",
"neighbors": [
109,
549,
798,
1888,
2391,
2427,
2548
],
"mask": "Validation"
},
{
"node_id": 508,
"label": 6,
"text": "Title: Machine Learning by Function Decomposition \nAbstract: We present a new machine learning method that, given a set of training examples, induces a definition of the target concept in terms of a hierarchy of intermediate concepts and their definitions. This effectively decomposes the problem into smaller, less complex problems. The method is inspired by the Boolean function decomposition approach to the design of digital circuits. To cope with high time complexity of finding an optimal decomposition, we propose a suboptimal heuristic algorithm. The method, implemented in program HINT (HIerarchy Induction Tool), is experimentally evaluated using a set of artificial and real-world learning problems. It is shown that the method performs well both in terms of classification accuracy and discovery of meaningful concept hierarchies.",
"neighbors": [
317,
417,
523,
2326
],
"mask": "Test"
},
{
"node_id": 509,
"label": 5,
"text": "Title: The Bayesian Approach to Tree-Structured Regression \nAbstract: In the context of inductive learning, the Bayesian approach turned out to be very successful in estimating probabilities of events when there are only a few learning examples. The m-probability estimate was developed to handle such situations. In this paper we present the m-distribution estimate, an extension to the m-probability estimate which, besides the estimation of probabilities, covers also the estimation of probability distributions. We focus on its application in the construction of regression trees. The theoretical results were incorporated into a system for automatic induction of regression trees. The results of applying the upgraded system to several domains are presented and compared to previous results. ",
"neighbors": [
314,
669
],
"mask": "Test"
},
{
"node_id": 510,
"label": 3,
"text": "Title: The Bayesian Approach to Tree-Structured Regression \nAbstract: TECHNICAL REPORT NO. 967 August 1996 ",
"neighbors": [
192,
356,
519,
2223
],
"mask": "Validation"
},
{
"node_id": 511,
"label": 3,
"text": "Title: Learning from incomplete data \nAbstract: Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives|the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster et al., 1977)|both for the estimation of mixture components and for coping with the missing data. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense. The authors were supported in part by a grant from ATR Auditory and Visual Perception Research Laboratories, by a grant from Siemens Corporation, by grant IRI-9013991 from the National Science Foundation, and by grant N00014-90-J-1942 from the Office of Naval Research. Zoubin Ghahramani was supported by a grant from the McDonnell-Pew Foundation. Michael I. Jordan is a NSF Presidential Young Investigator. ",
"neighbors": [
74,
611
],
"mask": "Train"
},
{
"node_id": 512,
"label": 2,
"text": "Title: Fault-Tolerant Implementation of Finite-State Automata in Recurrent Neural Networks \nAbstract: Recently, we have proven that the dynamics of any deterministic finite-state automata (DFA) with n states and m input symbols can be implemented in a sparse second-order recurrent neural network (SORNN) with n + 1 state neurons and O(mn) second-order weights and sigmoidal discriminant functions [5]. We investigate how that constructive algorithm can be extended to fault-tolerant neural DFA implementations where faults in an analog implementation of neurons or weights do not affect the desired network performance. We show that tolerance to weight perturbation can be achieved easily; tolerance to weight and/or neuron stuck-at-zero faults, however, requires duplication of the network resources. This result has an impact on the construction of neural DFAs with a dense internal representation of DFA states.",
"neighbors": [
407,
411
],
"mask": "Train"
},
{
"node_id": 513,
"label": 3,
"text": "Title: Detecting Features in Spatial Point Processes with Clutter via Model-Based Clustering \nAbstract: Technical Report No. 295 Department of Statistics, University of Washington October, 1995 1 Abhijit Dasgupta is a graduate student at the Department of Biostatistics, University of Washington, Box 357232, Seattle, WA 98195-7232, and his e-mail address is dasgupta@biostat.washington.edu. Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322, and his e-mail address is raftery@stat.washington.edu. This research was supported by Office of Naval Research Grant no. N-00014-91-J-1074. The authors are grateful to Peter Guttorp, Girardeau Henderson and Robert Muise for helpful discussions. ",
"neighbors": [
117,
155,
452
],
"mask": "Train"
},
{
"node_id": 514,
"label": 6,
"text": "Title: Gambling in a rigged casino: The adversarial multi-armed bandit problem \nAbstract: In the multi-armed bandit problem, a gambler must decide which arm of K non-identical slot machines to play in a sequence of trials so as to maximize his reward. This classical problem has received much attention because of the simple model it provides of the trade-off between exploration (trying out each arm to find the best one) and exploitation (playing the arm believed to give the best payoff). Past solutions for the bandit problem have almost always relied on assumptions about the statistics of the slot machines. In this work, we make no statistical assumptions whatsoever about the nature of the process generating the payoffs of the slot machines. We give a solution to the bandit problem in which an adversary, rather than a well-behaved stochastic process, has complete control over the payoffs. In a sequence of T plays, we prove that the expected per-round payoff of our algorithm approaches that of the best arm at the rate O(T 1=2 ), and we give an improved rate of convergence when the best arm has fairly low payoff. We also prove a general matching lower bound on the best possible performance of any algorithm in our setting. In addition, we consider a setting in which the player has a team of experts advising him on which arm to play; here, we give a strategy that will guarantee expected payoff close to that of the best expert. Finally, we apply our result to the problem of learning to play an unknown repeated matrix game against an all-powerful adversary.",
"neighbors": [
453,
506,
569
],
"mask": "Validation"
},
{
"node_id": 515,
"label": 3,
"text": "Title: Sensitivities: An Alternative to Conditional Probabilities for Bayesian Belief Networks \nAbstract: We show an alternative way of representing a Bayesian belief network by sensitivities and probability distributions. This representation is equivalent to the traditional representation by conditional probabilities, but makes dependencies between nodes apparent and intuitively easy to understand. We also propose a QR matrix representation for the sensitivities and/or conditional probabilities which is more efficient, in both memory requirements and computational speed, than the traditional representation for computer-based implementations of probabilistic inference. We use sensitivities to show that for a certain class of binary networks, the computation time for approximate probabilistic inference with any positive upper bound on the error of the result is independent of the size of the network. Finally, as an alternative to traditional algorithms that use conditional probabilities, we describe an exact algorithm for probabilistic inference that uses the QR-representation for sensitivities and updates probability distributions of nodes in a network according to messages from the neigh bors.",
"neighbors": [
637,
2164
],
"mask": "Validation"
},
{
"node_id": 516,
"label": 2,
"text": "Title: A Supercomputer for Neural Computation \nAbstract: The requirement to train large neural networks quickly has prompted the design of a new massively parallel supercomputer using custom VLSI. This design features 128 processing nodes, communicating over a mesh network connected directly to the processor chip. Studies show peak performance in the range of 160 billion arithmetic operations per second. This paper presents the case for custom hardware that combines neural network-specific features with a general programmable machine architecture, and briefly describes the design in progress. ",
"neighbors": [
272
],
"mask": "Train"
},
{
"node_id": 517,
"label": 6,
"text": "Title: Active Learning with Committees for Text Categorization \nAbstract: In many real-world domains like text categorization, supervised learning requires a large number of training examples. In this paper we describe an active learning method that uses a committee of learners to reduce the number of training examples required for learning. Our approach is similar to the Query by Committee framework, where disagreement among the committee members on the predicted label for the input part of the example is used to signal the need for knowing the actual value of the label. Our experiments in text categorization using a committee of Winnow-based learners demonstrate that this approach can reduce the number of labeled training examples required over that used by a single Winnow learner by 1-2 orders of magnitude. This paper is not under review or accepted for publication in another conference or journal. Acknowledgements: The availability of the Reuters-22173 corpus [Reuters] and of the | STAT Data Manipulation and Analysis Programs [Perlman] has greatly assisted in our research to date. ",
"neighbors": [
164,
1170,
1198,
2509
],
"mask": "Train"
},
{
"node_id": 518,
"label": 6,
"text": "Title: Developments in Probabilistic Modelling with Neural Networks|Ensemble Learning \nAbstract: In this paper I give a review of ensemble learning using a simple example. ",
"neighbors": [
76,
181,
662,
766,
2532
],
"mask": "Train"
},
{
"node_id": 519,
"label": 2,
"text": "Title: Smoothing Spline ANOVA for Exponential Families, with Application to the Wisconsin Epidemiological Study of Diabetic\nAbstract: In this paper I give a review of ensemble learning using a simple example. ",
"neighbors": [
10,
190,
192,
193,
280,
420,
510,
705,
2223,
2448,
2549,
2590,
2608
],
"mask": "Train"
},
{
"node_id": 520,
"label": 2,
"text": "Title: CANCER DIAGNOSIS AND PROGNOSIS VIA LINEAR-PROGRAMMING-BASED MACHINE LEARNING \nAbstract: In this paper I give a review of ensemble learning using a simple example. ",
"neighbors": [
142,
230,
478,
524,
719
],
"mask": "Train"
},
{
"node_id": 521,
"label": 5,
"text": "Title: Covering vs. Divide-and-Conquer for Top-Down Induction of Logic Programs \nAbstract: covering has been formalized and used extensively. In this work, the divide-and-conquer technique is formalized as well and compared to the covering technique in a logic programming framework. Covering works by repeatedly specializing an overly general hypothesis, on each iteration focusing on finding a clause with a high coverage of positive examples. Divide-and-conquer works by specializing an overly general hypothesis once, focusing on discriminating positive from negative examples. Experimental results are presented demonstrating that there are cases when more accurate hypotheses can be found by divide-and-conquer than by covering. Moreover, since covering considers the same alternatives repeatedly it tends to be less efficient than divide-and-conquer, which never considers the same alternative twice. On the other hand, covering searches a larger hypothesis space, which may result in that more compact hypotheses are found by this technique than by divide-and-conquer. Furthermore, divide-and-conquer is, in contrast to covering, not applicable to learn ing recursive definitions.",
"neighbors": [
156,
344,
638,
1081,
1082,
1259,
2312
],
"mask": "Train"
},
{
"node_id": 522,
"label": 6,
"text": "Title: THE DISCOVERY OF ALGORITHMIC PROBABILITY \nAbstract: covering has been formalized and used extensively. In this work, the divide-and-conquer technique is formalized as well and compared to the covering technique in a logic programming framework. Covering works by repeatedly specializing an overly general hypothesis, on each iteration focusing on finding a clause with a high coverage of positive examples. Divide-and-conquer works by specializing an overly general hypothesis once, focusing on discriminating positive from negative examples. Experimental results are presented demonstrating that there are cases when more accurate hypotheses can be found by divide-and-conquer than by covering. Moreover, since covering considers the same alternatives repeatedly it tends to be less efficient than divide-and-conquer, which never considers the same alternative twice. On the other hand, covering searches a larger hypothesis space, which may result in that more compact hypotheses are found by this technique than by divide-and-conquer. Furthermore, divide-and-conquer is, in contrast to covering, not applicable to learn ing recursive definitions.",
"neighbors": [
68,
525
],
"mask": "Train"
},
{
"node_id": 523,
"label": 1,
"text": "Title: Some studies in machine learning using the game of checkers. IBM Journal, 3(3):211-229, 1959. Some\nAbstract: covering has been formalized and used extensively. In this work, the divide-and-conquer technique is formalized as well and compared to the covering technique in a logic programming framework. Covering works by repeatedly specializing an overly general hypothesis, on each iteration focusing on finding a clause with a high coverage of positive examples. Divide-and-conquer works by specializing an overly general hypothesis once, focusing on discriminating positive from negative examples. Experimental results are presented demonstrating that there are cases when more accurate hypotheses can be found by divide-and-conquer than by covering. Moreover, since covering considers the same alternatives repeatedly it tends to be less efficient than divide-and-conquer, which never considers the same alternative twice. On the other hand, covering searches a larger hypothesis space, which may result in that more compact hypotheses are found by this technique than by divide-and-conquer. Furthermore, divide-and-conquer is, in contrast to covering, not applicable to learn ing recursive definitions.",
"neighbors": [
54,
163,
188,
277,
283,
415,
465,
478,
508,
540,
551,
565,
717,
870,
882,
910,
961,
1214,
1616,
1676,
1687,
1790,
1921,
1931,
2408,
2442,
2480,
2551,
2600,
2642
],
"mask": "Validation"
},
{
"node_id": 524,
"label": 2,
"text": "Title: An Inductive Learning Approach to Prognostic Prediction \nAbstract: This paper introduces the Recurrence Surface Approximation, an inductive learning method based on linear programming that predicts recurrence times using censored training examples, that is, examples in which the available training output may be only a lower bound on the \"right answer.\" This approach is augmented with a feature selection method that chooses an appropriate feature set within the context of the linear programming generalizer. Computational results in the field of breast cancer prognosis are shown. A straightforward translation of the prediction method to an artificial neural network model is also proposed.",
"neighbors": [
430,
520,
1169,
1454
],
"mask": "Train"
},
{
"node_id": 525,
"label": 6,
"text": "Title: MML mixture modelling of multi-state, Poisson, von Mises circular and Gaussian distributions \nAbstract: Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also consistent and efficient. We provide a brief overview of MML inductive inference (Wallace and Boulton (1968), Wallace and Freeman (1987)), and how it has both an information-theoretic and a Bayesian interpretation. We then outline how MML is used for statistical parameter estimation, and how the MML mixture mod-elling program, Snob (Wallace and Boulton (1968), Wal-lace (1986), Wallace and Dowe(1994)) uses the message lengths from various parameter estimates to enable it to combine parameter estimation with selection of the number of components. The message length is (to within a constant) the logarithm of the posterior probability of the theory. So, the MML theory can also be regarded as the theory with the highest posterior probability. Snob currently assumes that variables are uncorrelated, and permits multi-variate data from Gaussian, discrete multi-state, Poisson and von Mises circular distributions. ",
"neighbors": [
522,
684,
1419,
1425,
1427
],
"mask": "Test"
},
{
"node_id": 526,
"label": 2,
"text": "Title: MML mixture modelling of multi-state, Poisson, von Mises circular and Gaussian distributions \nAbstract: 11] M.H. Overmars. A random approach to motion planning. Technical Report RUU-CS-92-32, Department of Computer Science, Utrecht University, October 1992. ",
"neighbors": [
427,
747
],
"mask": "Train"
},
{
"node_id": 527,
"label": 2,
"text": "Title: VISIT: An Efficient Computational Model of Human Visual Attention \nAbstract: One of the challenges for models of cognitive phenomena is the development of efficient and exible interfaces between low level sensory information and high level processes. For visual processing, researchers have long argued that an attentional mechanism is required to perform many of the tasks required by high level vision. This thesis presents VISIT, a connectionist model of covert visual attention that has been used as a vehicle for studying this interface. The model is efficient, exible, and is biologically plausible. The complexity of the network is linear in the number of pixels. Effective parallel strategies are used to minimize the number of iterations required. The resulting system is able to efficiently solve two tasks that are particularly difficult for standard bottom-up models of vision: computing spatial relations and visual search. Simulations show that the networks behavior matches much of the known psychophysical data on human visual attention. The general architecture of the model also closely matches the known physiological data on the human attention system. Various extensions to VISIT are discussed, including methods for learning the component modules. ",
"neighbors": [
747,
1656,
1968,
2606,
2662
],
"mask": "Train"
},
{
"node_id": 528,
"label": 2,
"text": "Title: Minimax and Hamiltonian Dynamics of Excitatory-Inhibitory Networks \nAbstract: A Lyapunov function for excitatory-inhibitory networks is constructed. The construction assumes symmetric interactions within excitatory and inhibitory populations of neurons, and antisymmetric interactions between populations. The Lyapunov function yields sufficient conditions for the global asymptotic stability of fixed points. If these conditions are violated, limit cycles may be stable. The relations of the Lyapunov function to optimization theory and classical mechanics are revealed by The dynamics of a neural network with symmetric interactions provably converges to fixed points under very general assumptions[1, 2]. This mathematical result helped to establish the paradigm of neural computation with fixed point attractors[3]. But in reality, interactions between neurons in the brain are asymmetric. Furthermore, the dynamical behaviors seen in the brain are not confined to fixed point attractors, but also include oscillations and complex nonperiodic behavior. These other types of dynamics can be realized by asymmetric networks, and may be useful for neural computation. For these reasons, it is important to understand the global behavior of asymmetric neural networks. The interaction between an excitatory neuron and an inhibitory neuron is clearly asymmetric. Here we consider a class of networks that incorporates this fundamental asymmetry of the brain's microcircuitry. Networks of this class have distinct populations of excitatory and inhibitory neurons, with antisymmetric interactions minimax and dissipative Hamiltonian forms of the network dynamics.",
"neighbors": [
678
],
"mask": "Validation"
},
{
"node_id": 529,
"label": 2,
"text": "Title: Capacity of SDM \nAbstract: Report R95:12 ISRN : SICS-R--95/12-SE ISSN : 0283-3638 Abstract A more efficient way of reading the SDM memory is presented. This is accomplished by using implicit information, hitherto not utilized, to find the information-carrying units and thus removing unnecessary noise when reading the memory. ",
"neighbors": [
340,
341,
709
],
"mask": "Train"
},
{
"node_id": 530,
"label": 1,
"text": "Title: operations: operation machine duration \nAbstract: Report R95:12 ISRN : SICS-R--95/12-SE ISSN : 0283-3638 Abstract A more efficient way of reading the SDM memory is presented. This is accomplished by using implicit information, hitherto not utilized, to find the information-carrying units and thus removing unnecessary noise when reading the memory. ",
"neighbors": [
163,
343
],
"mask": "Train"
},
{
"node_id": 531,
"label": 2,
"text": "Title: FEEDBACK STABILIZATION OF NONLINEAR SYSTEMS \nAbstract: This paper surveys some well-known facts as well as some recent developments on the topic of stabilization of nonlinear systems. ",
"neighbors": [
693,
1490,
2187
],
"mask": "Validation"
},
{
"node_id": 532,
"label": 3,
"text": "Title: Hierarchical Selection Models with Applications in Meta-Analysis \nAbstract: This paper surveys some well-known facts as well as some recent developments on the topic of stabilization of nonlinear systems. ",
"neighbors": [
27
],
"mask": "Train"
},
{
"node_id": 533,
"label": 3,
"text": "Title: Estimating Ratios of Normalizing Constants for Densities with Different Dimensions \nAbstract: In Bayesian inference, a Bayes factor is defined as the ratio of posterior odds versus prior odds where posterior odds is simply a ratio of the normalizing constants of two posterior densities. In many practical problems, the two posteriors have different dimensions. For such cases, the current Monte Carlo methods such as the bridge sampling method (Meng and Wong 1996), the path sampling method (Gelman and Meng 1994), and the ratio importance sampling method (Chen and Shao 1994) cannot directly be applied. In this article, we extend importance sampling, bridge sampling, and ratio importance sampling to problems of different dimensions. Then we find global optimal importance sampling, bridge sampling, and ratio importance sampling in the sense of minimizing asymptotic relative mean-square errors of estimators. Implementation algorithms, which can asymptotically achieve the optimal simulation errors, are developed and two illustrative examples are also provided. ",
"neighbors": [
41,
777
],
"mask": "Validation"
},
{
"node_id": 534,
"label": 0,
"text": "Title: Massively Parallel Matching of Knowledge Structures \nAbstract: As knowledge bases used for AI systems increase in size, access to relevant information is the dominant factor in the cost of inference. This is especially true for analogical (or case-based) reasoning, in which the ability of the system to perform inference is dependent on efficient and flexible access to a large base of exemplars (cases) judged likely to be relevant to solving a problem at hand. In this chapter we discuss a novel algorithm for efficient associative matching of relational structures in large semantic networks. The structure matching algorithm uses massively parallel hardware to search memory for knowledge structures matching a given probe structure. The algorithm is built on top of PARKA, a massively parallel knowledge representation system which runs on the Connection Machine. We are currently exploring the utility of this algorithm in CaPER, a case-based planning system. ",
"neighbors": [
313
],
"mask": "Train"
},
{
"node_id": 535,
"label": 6,
"text": "Title: Sequential PAC Learning \nAbstract: We consider the use of \"on-line\" stopping rules to reduce the number of training examples needed to pac-learn. Rather than collect a large training sample that can be proved sufficient to eliminate all bad hypotheses a priori, the idea is instead to observe training examples one-at-a-time and decide \"on-line\" whether to stop and return a hypothesis, or continue training. The primary benefit of this approach is that we can detect when a hypothesizer has actually \"converged,\" and halt training before the standard fixed-sample-size bounds. This paper presents a series of such sequential learning procedures for: distribution-free pac-learning, \"mistake-bounded to pac\" conversion, and distribution-specific pac-learning, respectively. We analyze the worst case expected training sample size of these procedures, and show that this is often smaller than existing fixed sample size bounds | while providing the exact same worst case pac-guarantees. We also provide lower bounds that show these reductions can at best involve constant (and possibly log) factors. However, empirical studies show that these sequential learning procedures actually use many times fewer training examples in prac tice.",
"neighbors": [
109,
672,
761,
1560
],
"mask": "Train"
},
{
"node_id": 536,
"label": 2,
"text": "Title: Dimension of Recurrent Neural Networks \nAbstract: DIMACS Technical Report 96-56 December 1996 ",
"neighbors": [
58,
200,
206,
411
],
"mask": "Train"
},
{
"node_id": 537,
"label": 1,
"text": "Title: Adaptive Global Optimization with Local Search \nAbstract: DIMACS Technical Report 96-56 December 1996 ",
"neighbors": [
357,
606,
1204
],
"mask": "Test"
},
{
"node_id": 538,
"label": 1,
"text": "Title: Learning and evolution in neural networks \nAbstract: DIMACS Technical Report 96-56 December 1996 ",
"neighbors": [
15,
129,
402,
487,
1036,
1204,
1689,
1738,
2165,
2193,
2309
],
"mask": "Train"
},
{
"node_id": 539,
"label": 0,
"text": "Title: Structural Similarity as Guidance in Case-Based Design \nAbstract: This paper presents a novel approach to determine structural similarity as guidance for adaptation in case-based reasoning (Cbr). We advance structural similarity assessment which provides not only a single numeric value but the most specific structure two cases have in common, inclusive of the modification rules needed to obtain this structure from the two cases. Our approach treats retrieval, matching and adaptation as a group of dependent processes. This guarantees the retrieval and matching of not only similar but adaptable cases. Both together enlarge the overall problem solving performance of Cbr and the explainability of case selection and adaptation considerably. Although our approach is more theoretical in nature and not restricted to a specific domain, we will give an example taken from the domain of industrial building design. Additionally, we will sketch two prototypical implementations of this approach.",
"neighbors": [
183,
454,
541,
1123,
1209,
1210,
1453,
1665
],
"mask": "Train"
},
{
"node_id": 540,
"label": 0,
"text": "Title: A Model-Based Approach to Blame-Assignment in Design \nAbstract: We analyze the blame-assignment task in the context of experience-based design and redesign of physical devices. We identify three types of blame-assignment tasks that differ in the types of information they take as input: the design does not achieve a desired behavior of the device, the design results in an undesirable behavior, a specific structural element in the design misbehaves. We then describe a model-based approach for solving the blame-assignment task. This approach uses structure-behavior-function models that capture a designer's comprehension of the way a device works in terms of causal explanations of how its structure results in its behaviors. We also address the issue of indexing the models in memory. We discuss how the three types of blame-assignment tasks require different types of indices for accessing the models. Finally we describe the KRITIK2 system that implements and evaluates this model-based approach to blame assignment.",
"neighbors": [
523,
543,
603,
1121,
1640
],
"mask": "Validation"
},
{
"node_id": 541,
"label": 0,
"text": "Title: Task-Oriented Knowledge Acquisition and Reasoning for Design Support Systems \nAbstract: We present a framework for task-driven knowledge acquisition in the development of design support systems. Different types of knowledge that enter the knowledge base of a design support system are defined and illustrated both from a formal and from a knowledge acquisition vantage point. Special emphasis is placed on the task-structure, which is used to guide both acquisition and application of knowledge. Starting with knowledge for planning steps in design and augmenting this with problem-solving knowledge that supports design, a formal integrated model of knowledge for design is constructed. Based on the notion of knowledge acquisition as an incremental process we give an account of possibilities for problem solving depending on the knowledge that is at the disposal of the system. Finally, we depict how different kinds of knowledge interact in a design support system. ? This research was supported by the German Ministry for Research and Technology (BMFT) within the joint project FABEL under contract no. 413-4001-01IW104. Project partners in FABEL are German National Research Center of Computer Science (GMD), Sankt Augustin, BSR Consulting GmbH, Munchen, Technical University of Dresden, HTWK Leipzig, University of Freiburg, and University of Karlsruhe. ",
"neighbors": [
183,
454,
539,
1123
],
"mask": "Validation"
},
{
"node_id": 542,
"label": 2,
"text": "Title: Comparison of Bayesian and Neural Net Unsupervised Classification Techniques \nAbstract: Unsupervised classification is the classification of data into a number of classes in such a way that data in each class are all similar to each other. In the past there have been few if any studies done to compare the performance of different unsupervised classification techniques. In this paper we review Bayesian and neural net approaches to unsupervised classification and present results of experiments that we did to compare Autoclass, a Bayesian classification system, and ART2, a neural net classification algorithm.",
"neighbors": [
747,
779,
1203
],
"mask": "Train"
},
{
"node_id": 543,
"label": 0,
"text": "Title: Meta-Cases: Explaining Case-Based Reasoning \nAbstract: AI research on case-based reasoning has led to the development of many laboratory case-based systems. As we move towards introducing these systems into work environments, explaining the processes of case-based reasoning is becoming an increasingly important issue. In this paper we describe the notion of a meta-case for illustrating, explaining and justifying case-based reasoning. A meta-case contains a trace of the processing in a problem-solving episode, and provides an explanation of the problem-solving decisions and a (partial) justification for the solution. The language for representing the problem-solving trace depends on the model of problem solving. We describe a task-method-knowledge (TMK) model of problem-solving and describe the representation of meta-cases in the TMK language. We illustrate this explanatory scheme with examples from Interactive Kritik, a computer-based de sign and learning environment presently under development.",
"neighbors": [
540
],
"mask": "Train"
},
{
"node_id": 544,
"label": 2,
"text": "Title: Minimum-Risk Profiles of Protein Families Based on Statistical Decision Theory \nAbstract: Statistical decision theory provides a principled way to estimate amino acid frequencies in conserved positions of a protein family. The goal is to minimize the risk function, or the expected squared-error distance between the estimates and the true population frequencies. The minimum-risk estimates are obtained by adding an optimal number of pseudocounts to the observed data. Two formulas are presented, one for pseudocounts based on marginal amino acid frequencies and one for pseudocounts based on the observed data. Experimental results show that profiles constructed using minimal-risk estimates are more discriminating than those constructed using existing methods.",
"neighbors": [
0,
14,
258,
751
],
"mask": "Test"
},
{
"node_id": 545,
"label": 2,
"text": "Title: Characterising Innateness in Artificial and Natural Learning \nAbstract: The purpose of this paper is to propose a refinement of the notion of innateness. If we merely identify innateness with bias, then we obtain a poor characterisation of this notion, since any learning device relies on a bias that makes it choose a given hypothesis instead of another. We show that our intuition of innateness is better captured by a characteristic of bias, related to isotropy. Generalist models of learning are shown to rely on an isotropic bias, whereas the bias of specialised models, which include some specific a priori knowledge about what is to be learned, is necessarily anisotropic. The socalled generalist models, however, turn out to be specialised in some way: they learn symmetrical forms preferentially, and have strictly no deficiencies in their learning ability. Because some learning beings do not always show these two properties, such generalist models may be sometimes ruled out as bad candidates for cognitive modelling. ",
"neighbors": [
747
],
"mask": "Validation"
},
{
"node_id": 546,
"label": 1,
"text": "Title: GREQE a Diplome des Etudes Approfondies en Economie Mathematique et Econometrie A Genetic Algorithm for\nAbstract: The purpose of this paper is to propose a refinement of the notion of innateness. If we merely identify innateness with bias, then we obtain a poor characterisation of this notion, since any learning device relies on a bias that makes it choose a given hypothesis instead of another. We show that our intuition of innateness is better captured by a characteristic of bias, related to isotropy. Generalist models of learning are shown to rely on an isotropic bias, whereas the bias of specialised models, which include some specific a priori knowledge about what is to be learned, is necessarily anisotropic. The socalled generalist models, however, turn out to be specialised in some way: they learn symmetrical forms preferentially, and have strictly no deficiencies in their learning ability. Because some learning beings do not always show these two properties, such generalist models may be sometimes ruled out as bad candidates for cognitive modelling. ",
"neighbors": [
163
],
"mask": "Train"
},
{
"node_id": 547,
"label": 2,
"text": "Title: Expectation-Based Selective Attention for Visual Monitoring and Control of a Robot Vehicle \nAbstract: Reliable vision-based control of an autonomous vehicle requires the ability to focus attention on the important features in an input scene. Previous work with an autonomous lane following system, ALVINN [Pomerleau, 1993], has yielded good results in uncluttered conditions. This paper presents an artificial neural network based learning approach for handling difficult scenes which will confuse the ALVINN system. This work presents a mechanism for achieving task-specific focus of attention by exploiting temporal coherence. A saliency map, which is based upon a computed expectation of the contents of the inputs in the next time step, indicates which regions of the input retina are important for performing the task. The saliency map can be used to accentuate the features which are important for the task, and de-emphasize those which are not. ",
"neighbors": [
74,
430
],
"mask": "Train"
},
{
"node_id": 548,
"label": 4,
"text": "Title: Value Function Based Production Scheduling \nAbstract: Production scheduling, the problem of sequentially configuring a factory to meet forecasted demands, is a critical problem throughout the manufacturing industry. The requirement of maintaining product inventories in the face of unpredictable demand and stochastic factory output makes standard scheduling models, such as job-shop, inadequate. Currently applied algorithms, such as simulated annealing and constraint propagation, must employ ad-hoc methods such as frequent replanning to cope with uncertainty. In this paper, we describe a Markov Decision Process (MDP) formulation of production scheduling which captures stochasticity in both production and demands. The solution to this MDP is a value function which can be used to generate optimal scheduling decisions online. A simple example illustrates the theoretical superiority of this approach over replanning-based methods. We then describe an industrial application and two reinforcement learning methods for generating an approximate value function on this domain. Our results demonstrate that in both deterministic and noisy scenarios, value function approx imation is an effective technique. ",
"neighbors": [
82,
552,
565,
1859,
1860
],
"mask": "Train"
},
{
"node_id": 549,
"label": 6,
"text": "Title: Efficient Distribution-free Learning of Probabilistic Concepts \nAbstract: In this paper we investigate a new formal model of machine learning in which the concept (boolean function) to be learned may exhibit uncertain or probabilistic behavior|thus, the same input may sometimes be classified as a positive example and sometimes as a negative example. Such probabilistic concepts (or p-concepts) may arise in situations such as weather prediction, where the measured variables and their accuracy are insufficient to determine the outcome with certainty. We adopt from the Valiant model of learning [27] the demands that learning algorithms be efficient and general in the sense that they perform well for a wide class of p-concepts and for any distribution over the domain. In addition to giving many efficient algorithms for learning natural classes of p-concepts, we study and develop in detail an underlying theory of learning p-concepts. ",
"neighbors": [
287,
453,
456,
488,
507,
574,
591,
640,
672
],
"mask": "Train"
},
{
"node_id": 550,
"label": 6,
"text": "Title: LEARNING BY USING DYNAMIC FEATURE COMBINATION AND SELECTION \nAbstract: ",
"neighbors": [
438,
569,
1422,
2423
],
"mask": "Test"
},
{
"node_id": 551,
"label": 4,
"text": "Title: Utilization Filtering a method for reducing the inherent harmfulness of deductively learned knowledge field of\nAbstract: This paper highlights a phenomenon that causes deductively learned knowledge to be harmful when used for problem solving. The problem occurs when deductive problem solvers encounter a failure branch of the search tree. The backtracking mechanism of such problem solvers will force the program to traverse the whole subtree thus visiting many nodes twice - once by using the deductively learned rule and once by using the rules that generated the learned rule in the first place. We suggest an approach called utilization filtering to solve that problem. Learners that use this approach submit to the problem solver a filter function together with the knowledge that was acquired. The function decides for each problem whether to use the learned knowledge and what part of it to use. We have tested the idea in the context of a lemma learning system, where the filter uses the probability of a subgoal failing to decide whether to turn lemma usage off. Experiments show an improvement of performance by a factor of 3. This paper is concerned with a particular type of harmful redundancy that occurs in deductive problem solvers that employ backtracking in their search procedure, and use deductively learned knowledge to accelerate the search. The problem is that in failure branches of the search tree, the backtracking mechanism of the problem solver forces exploration of the whole subtree. Thus, the search procedure will visit many states twice - once by using the deductively learned rule, and once by using the search path that produced the rule in the first place. ",
"neighbors": [
482,
523,
2215,
2473
],
"mask": "Train"
},
{
"node_id": 552,
"label": 4,
"text": "Title: Learning to Act using Real-Time Dynamic Programming \nAbstract: fl The authors thank Rich Yee, Vijay Gullapalli, Brian Pinette, and Jonathan Bachrach for helping to clarify the relationships between heuristic search and control. We thank Rich Sutton, Chris Watkins, Paul Werbos, and Ron Williams for sharing their fundamental insights into this subject through numerous discussions, and we further thank Rich Sutton for first making us aware of Korf's research and for his very thoughtful comments on the manuscript. We are very grateful to Dimitri Bertsekas and Steven Sullivan for independently pointing out an error in an earlier version of this article. Finally, we thank Harry Klopf, whose insight and persistence encouraged our interest in this class of learning problems. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526). ",
"neighbors": [
2,
16,
57,
60,
85,
92,
167,
173,
210,
220,
239,
277,
298,
305,
306,
311,
367,
370,
374,
412,
446,
451,
455,
466,
472,
473,
483,
548,
554,
559,
575,
601,
621,
636,
644,
653,
671,
688,
691,
723,
738,
749
],
"mask": "Train"
},
{
"node_id": 553,
"label": 2,
"text": "Title: Object Selection Based on Oscillatory Correlation \nAbstract: 1 Technical Report: OSU-CISRC-12/96 - TR67, 1996 Abstract One of the classical topics in neural networks is winner-take-all (WTA), which has been widely used in unsupervised (competitive) learning, cortical processing, and attentional control. Because of global connectivity, WTA networks, however, do not encode spatial relations in the input, and thus cannot support sensory and perceptual processing where spatial relations are important. We propose a new architecture that maintains spatial relations between input features. This selection network builds on LEGION (Locally Excitatory Globally Inhibitory Oscillator Networks) dynamics and slow inhibition. In an input scene with many objects (patterns), the network selects the largest object. This system can be easily adjusted to select several largest objects, which then alternate in time. We further show that a twostage selection network gains efficiency by combining selection with parallel removal of noisy regions. The network is applied to select the most salient object in real images. As a special case, the selection network without local excitation gives rise to a new form of oscillatory WTA. ",
"neighbors": [
123,
2459
],
"mask": "Train"
},
{
"node_id": 554,
"label": 4,
"text": "Title: Reinforcement Learning Algorithms for Average-Payoff Markovian Decision Processes \nAbstract: Reinforcement learning (RL) has become a central paradigm for solving learning-control problems in robotics and artificial intelligence. RL researchers have focussed almost exclusively on problems where the controller has to maximize the discounted sum of payoffs. However, as emphasized by Schwartz (1993), in many problems, e.g., those for which the optimal behavior is a limit cycle, it is more natural and com-putationally advantageous to formulate tasks so that the controller's objective is to maximize the average payoff received per time step. In this paper I derive new average-payoff RL algorithms as stochastic approximation methods for solving the system of equations associated with the policy evaluation and optimal control questions in average-payoff RL tasks. These algorithms are analogous to the popular TD and Q-learning algorithms already developed for the discounted-payoff case. One of the algorithms derived here is a significant variation of Schwartz's R-learning algorithm. Preliminary empirical results are presented to validate these new algorithms. ",
"neighbors": [
167,
294,
306,
446,
552,
565,
875,
1859
],
"mask": "Train"
},
{
"node_id": 555,
"label": 6,
"text": "Title: Exactly Learning Automata with Small Cover Time \nAbstract: We present algorithms for exactly learning unknown environments that can be described by deterministic finite automata. The learner performs a walk on the target automaton, where at each step it observes the output of the state it is at, and chooses a labeled edge to traverse to the next state. We assume that the learner has no means of a reset, and we also assume that the learner does not have access to a teacher that answers equivalence queries and gives the learner counterexamples to its hypotheses. We present two algorithms, one assumes that the outputs observed by the learner are always correct and the other assumes that the outputs might be erroneous. The running times of both algorithms are polynomial in the cover time of the underlying graph of the target automaton. ",
"neighbors": [
400,
556,
615,
2354
],
"mask": "Test"
},
{
"node_id": 556,
"label": 6,
"text": "Title: The Power of a Pebble: Exploring and Mapping Directed Graphs \nAbstract: Exploring and mapping an unknown environment is a fundamental problem, which is studied in a variety of contexts. Many works have focused on finding efficient solutions to restricted versions of the problem. In this paper, we consider a model that makes very limited assumptions on the environment and solve the mapping problem in this general setting. We model the environment by an unknown directed graph G, and consider the problem of a robot exploring and mapping G. We do not assume that the vertices of G are labeled, and thus the robot has no hope of succeeding unless it is given some means of distinguishing between vertices. For this reason we provide the robot with a pebble a device that it can place on a vertex and use to identify the vertex later. In this paper we show: (1) If the robot knows an upper bound on the number of vertices then it can learn the graph efficiently with only one pebble. (2) If the robot does not know an upper bound on the number of vertices n, then fi(log log n) pebbles are both necessary and sufficient. In both cases our algorithms are deterministic. ",
"neighbors": [
555,
615,
2354,
2360
],
"mask": "Test"
},
{
"node_id": 557,
"label": 3,
"text": "Title: On the Sample Complexity of Learning Bayesian Networks \nAbstract: In recent years there has been an increasing interest in learning Bayesian networks from data. One of the most effective methods for learning such networks is based on the minimum description length (MDL) principle. Previous work has shown that this learning procedure is asymptotically successful: with probability one, it will converge to the target distribution, given a sufficient number of samples. However, the rate of this convergence has been hitherto unknown. In this work we examine the sample complexity of MDL based learning procedures for Bayesian networks. We show that the number of samples needed to learn an *-close approximation (in terms of entropy distance) with confidence ffi is O * ) 3 log 1 ffi log log 1 . This means that the sample complexity is a low-order polynomial in the error threshold and sub-linear in the confidence bound. We also discuss how the constants in this term depend on the complexity of the target distribution. Finally, we address questions of asymptotic minimality and propose a method for using the sample complexity results to speed up the learning process. ",
"neighbors": [
423,
558
],
"mask": "Train"
},
{
"node_id": 558,
"label": 3,
"text": "Title: A Tutorial on Learning With Bayesian Networks \nAbstract: Technical Report MSR-TR-95-06 ",
"neighbors": [
376,
423,
557,
905,
1137,
1532,
1555,
1641,
1816,
1934,
2034,
2463,
2660
],
"mask": "Train"
},
{
"node_id": 559,
"label": 4,
"text": "Title: Scaling Up Average Reward Reinforcement Learning by Approximating the Domain Models and the Value Function \nAbstract: Almost all the work in Average-reward Re- inforcement Learning (ARL) so far has focused on table-based methods which do not scale to domains with large state spaces. In this paper, we propose two extensions to a model-based ARL method called H-learning to address the scale-up problem. We extend H-learning to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We test our algorithms on several scheduling tasks for a simulated Automatic Guided Vehicle (AGV) and show that they are effective in significantly reducing the space requirement of H-learning and making it converge faster. To the best of our knowledge, our results are the first in apply ",
"neighbors": [
34,
167,
552,
1378,
1816,
2341
],
"mask": "Train"
},
{
"node_id": 560,
"label": 6,
"text": "Title: Bayesian Methods for Adaptive Models \nAbstract: Almost all the work in Average-reward Re- inforcement Learning (ARL) so far has focused on table-based methods which do not scale to domains with large state spaces. In this paper, we propose two extensions to a model-based ARL method called H-learning to address the scale-up problem. We extend H-learning to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We test our algorithms on several scheduling tasks for a simulated Automatic Guided Vehicle (AGV) and show that they are effective in significantly reducing the space requirement of H-learning and making it converge faster. To the best of our knowledge, our results are the first in apply ",
"neighbors": [
78,
157,
246,
740,
938,
955,
1375,
1452,
2287
],
"mask": "Test"
},
{
"node_id": 561,
"label": 2,
"text": "Title: Visualizing High-Dimensional Structure with the Incremental Grid Growing Neural Network \nAbstract: Almost all the work in Average-reward Re- inforcement Learning (ARL) so far has focused on table-based methods which do not scale to domains with large state spaces. In this paper, we propose two extensions to a model-based ARL method called H-learning to address the scale-up problem. We extend H-learning to learn action models and reward functions in the form of Bayesian networks, and approximate its value function using local linear regression. We test our algorithms on several scheduling tasks for a simulated Automatic Guided Vehicle (AGV) and show that they are effective in significantly reducing the space requirement of H-learning and making it converge faster. To the best of our knowledge, our results are the first in apply ",
"neighbors": [
427,
745,
747
],
"mask": "Train"
},
{
"node_id": 562,
"label": 4,
"text": "Title: Transfer of Learning by Composing Solutions of Elemental Sequential Tasks \nAbstract: Although building sophisticated learning agents that operate in complex environments will require learning to perform multiple tasks, most applications of reinforcement learning have focussed on single tasks. In this paper I consider a class of sequential decision tasks (SDTs), called composite sequential decision tasks, formed by temporally concatenating a number of elemental sequential decision tasks. Elemental SDTs cannot be decomposed into simpler SDTs. I consider a learning agent that has to learn to solve a set of elemental and composite SDTs. I assume that the structure of the composite tasks is unknown to the learning agent. The straightforward application of reinforcement learning to multiple tasks requires learning the tasks separately, which can waste computational resources, both memory and time. I present a new learning algorithm and a modular architecture that learns the decomposition of composite SDTs, and achieves transfer of learning by sharing the solutions of elemental SDTs across multiple composite SDTs. The solution of a composite SDT is constructed by computationally inexpensive modifications of the solutions of its constituent elemental SDTs. I provide a proof of one aspect of the learning algorithm. ",
"neighbors": [
60,
252,
370,
440,
671,
1117,
1183,
1401,
1889,
2014,
2018
],
"mask": "Train"
},
{
"node_id": 563,
"label": 4,
"text": "Title: Evolving Obstacle Avoidance Behavior in a Robot Arm \nAbstract: Existing approaches for learning to control a robot arm rely on supervised methods where correct behavior is explicitly given. It is difficult to learn to avoid obstacles using such methods, however, because examples of obstacle avoidance behavior are hard to generate. This paper presents an alternative approach that evolves neural network controllers through genetic algorithms. No input/output examples are necessary, since neuro-evolution learns from a single performance measurement over the entire task of grasping an object. The approach is tested in a simulation of the OSCAR-6 robot arm which receives both visual and sensory input. Neural networks evolved to effectively avoid obstacles at various locations to reach random target locations.",
"neighbors": [
37,
38,
163,
219,
247,
500
],
"mask": "Train"
},
{
"node_id": 564,
"label": 4,
"text": "Title: Reinforcement Learning with Soft State Aggregation \nAbstract: It is widely accepted that the use of more compact representations than lookup tables is crucial to scaling reinforcement learning (RL) algorithms to real-world problems. Unfortunately almost all of the theory of reinforcement learning assumes lookup table representations. In this paper we address the pressing issue of combining function approximation and RL, and present 1) a function approx-imator based on a simple extension to state aggregation (a commonly used form of compact representation), namely soft state aggregation, 2) a theory of convergence for RL with arbitrary, but fixed, soft state aggregation, 3) a novel intuitive understanding of the effect of state aggregation on online RL, and 4) a new heuristic adaptive state aggregation algorithm that finds improved compact representations by exploiting the non-discrete nature of soft state aggregation. Preliminary empirical results are also presented. ",
"neighbors": [
294,
463,
565,
738,
1841
],
"mask": "Train"
},
{
"node_id": 565,
"label": 4,
"text": "Title: Machine Learning Learning to Predict by the Methods of Temporal Differences Keywords: Incremental learning, prediction,\nAbstract: This article introduces a class of incremental learning procedures specialized for prediction|that is, for using past experience with an incompletely known system to predict its future behavior. Whereas conventional prediction-learning methods assign credit by means of the difference between predicted and actual outcomes, the new methods assign credit by means of the difference between temporally successive predictions. Although such temporal-difference methods have been used in Samuel's checker player, Holland's bucket brigade, and the author's Adaptive Heuristic Critic, they have remained poorly understood. Here we prove their convergence and optimality for special cases and relate them to supervised-learning methods. For most real-world prediction problems, temporal-difference methods require less memory and less peak computation than conventional methods; and they produce more accurate predictions. We argue that most problems to which supervised learning is currently applied are really prediction problems of the sort to which temporal-difference methods can be applied to advantage. ",
"neighbors": [
2,
34,
57,
60,
82,
85,
92,
103,
118,
128,
173,
239,
244,
283,
294,
295,
305,
306,
333,
367,
385,
410,
425,
465,
466,
477,
478,
492,
502,
523,
548,
554,
564,
566,
575,
601,
621,
633,
644,
671,
691,
738,
773,
842,
882,
910,
1012,
1213,
1316,
1373,
1376,
1378,
1401,
1438,
1440,
1529,
1540,
1546,
1585,
1616,
1676,
1727,
1741,
1782,
1790,
1859,
1957,
1975,
2027,
2118,
2442,
2472,
2480,
2485,
2628,
2629,
2642,
2672
],
"mask": "Train"
},
{
"node_id": 566,
"label": 4,
"text": "Title: Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming \nAbstract: This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments.",
"neighbors": [
16,
34,
173,
186,
274,
294,
321,
333,
449,
465,
466,
472,
477,
483,
565,
588,
633,
671,
688,
699,
733,
858,
1447,
1459,
1544,
1643,
1782,
1816,
2221,
2480,
2485,
2658
],
"mask": "Validation"
},
{
"node_id": 567,
"label": 4,
"text": "Title: Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding \nAbstract: On large problems, reinforcement learning systems must use parameterized function approximators such as neural networks in order to generalize between similar situations and actions. In these cases there are no strong theoretical results on the accuracy of convergence, and computational results have been mixed. In particular, Boyan and Moore reported at last year's meeting a series of negative results in attempting to apply dynamic programming together with function approximation to simple control problems with continuous state spaces. In this paper, we present positive results for all the control tasks they attempted, and for one that is significantly larger. The most important differences are that we used sparse-coarse-coded function approximators (CMACs) whereas they used mostly global function approximators, and that we learned online whereas they learned o*ine. Boyan and Moore and others have suggested that the problems they encountered could be solved by using actual outcomes (\"rollouts\"), as in classical Monte Carlo methods, and as in the TD() algorithm when = 1. However, in our experiments this always resulted in substantially poorer performance. We conclude that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general .",
"neighbors": [
21,
277,
385,
502,
970,
1828
],
"mask": "Train"
},
{
"node_id": 568,
"label": 4,
"text": "Title: Online Learning with Random Representations \nAbstract: We consider the requirements of online learning|learning which must be done incrementally and in realtime, with the results of learning available soon after each new example is acquired. Despite the abundance of methods for learning from examples, there are few that can be used effectively for online learning, e.g., as components of reinforcement learning systems. Most of these few, including radial basis functions, CMACs, Ko-honen's self-organizing maps, and those developed in this paper, share the same structure. All expand the original input representation into a higher dimensional representation in an unsupervised way, and then map that representation to the final answer using a relatively simple supervised learner, such as a perceptron or LMS rule. Such structures learn very rapidly and reliably, but have been thought either to scale poorly or to require extensive domain knowledge. To the contrary, some researchers (Rosenblatt, 1962; Gallant & Smith, 1987; Kanerva, 1988; Prager & Fallside, 1988) have argued that the expanded representation can be chosen largely at random with good results. The main contribution of this paper is to develop and test this hypothesis. We show that simple random-representation methods can perform as well as nearest-neighbor methods (while being more suited to online learning), and significantly better than backpropagation. We find that the size of the random representation does increase with the dimensionality of the problem, but not unreasonably so, and that the required size can be reduced substantially using unsupervised-learning techniques. Our results suggest that randomness has a useful role to play in online supervised learning and constructive induction. ",
"neighbors": [
478,
843
],
"mask": "Train"
},
{
"node_id": 569,
"label": 6,
"text": "Title: A decision-theoretic generalization of on-line learning and an application to boosting how the weight-update rule\nAbstract: We consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update rule of Littlestone and Warmuth [10] can be adapted to this model yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games and prediction of points in R n",
"neighbors": [
255,
456,
514,
550,
710,
767,
1000,
1025,
1092,
1181,
1269,
1273,
1430,
1457,
1522,
1712,
1986,
2099
],
"mask": "Train"
},
{
"node_id": 570,
"label": 2,
"text": "Title: A New Learning Algorithm for Blind Signal Separation \nAbstract: A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm which has an equivariant property and is easily implemented on a neural network like model. The validity of the new learning algorithm is verified by computer simulations. ",
"neighbors": [
59,
169,
212,
576,
872,
874,
1067,
1200,
1243,
1245,
1246,
1258,
1381,
1520,
1524,
1709,
1814,
1922,
2026
],
"mask": "Train"
},
{
"node_id": 571,
"label": 6,
"text": "Title: The Central Classifier Bound ANew Error Bound for the Classifier Chosen by Early Stopping Key\nAbstract: A new on-line learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm which has an equivariant property and is easily implemented on a neural network like model. The validity of the new learning algorithm is verified by computer simulations. ",
"neighbors": [
19,
424,
1762,
2331,
2495,
2694
],
"mask": "Validation"
},
{
"node_id": 572,
"label": 2,
"text": "Title: Avoiding Overfitting with BP-SOM \nAbstract: Overfitting is a well-known problem in the fields of symbolic and connectionist machine learning. It describes the deterioration of gen-eralisation performance of a trained model. In this paper, we investigate the ability of a novel artificial neural network, bp-som, to avoid overfitting. bp-som is a hybrid neural network which combines a multi-layered feed-forward network (mfn) with Kohonen's self-organising maps (soms). During training, supervised back-propagation learning and unsupervised som learning cooperate in finding adequate hidden-layer representations. We show that bp-som outperforms standard backpropagation, and also back-propagation with a weight decay when dealing with the problem of overfitting. In addition, we show that bp-som succeeds in preserving generalisation performance under hidden-unit pruning, where both other methods fail.",
"neighbors": [
112,
624,
747,
881
],
"mask": "Train"
},
{
"node_id": 573,
"label": 3,
"text": "Title: Iterated Revision and Minimal Change of Conditional Beliefs \nAbstract: We describe a model of iterated belief revision that extends the AGM theory of revision to account for the effect of a revision on the conditional beliefs of an agent. In particular, this model ensures that an agent makes as few changes as possible to the conditional component of its belief set. Adopting the Ramsey test, minimal conditional revision provides acceptance conditions for arbitrary right-nested conditionals. We show that problem of determining acceptance of any such nested conditional can be reduced to acceptance tests for unnested conditionals. Thus, iterated revision can be accomplished in a virtual manner, using uniterated revision.",
"neighbors": [
270,
464
],
"mask": "Test"
},
{
"node_id": 574,
"label": 6,
"text": "Title: On the Learnability of Discrete Distributions (extended abstract) \nAbstract: We describe a model of iterated belief revision that extends the AGM theory of revision to account for the effect of a revision on the conditional beliefs of an agent. In particular, this model ensures that an agent makes as few changes as possible to the conditional component of its belief set. Adopting the Ramsey test, minimal conditional revision provides acceptance conditions for arbitrary right-nested conditionals. We show that problem of determining acceptance of any such nested conditional can be reduced to acceptance tests for unnested conditionals. Thus, iterated revision can be accomplished in a virtual manner, using uniterated revision.",
"neighbors": [
242,
549,
640,
672,
1006,
1827,
1962,
2040,
2360,
2475
],
"mask": "Train"
},
{
"node_id": 575,
"label": 4,
"text": "Title: Issues in Using Function Approximation for Reinforcement Learning \nAbstract: Reinforcement learning techniques address the problem of learning to select actions in unknown, dynamic environments. It is widely acknowledged that to be of use in complex domains, reinforcement learning techniques must be combined with generalizing function approximation methods such as artificial neural networks. Little, however, is understood about the theoretical properties of such combinations, and many researchers have encountered failures in practice. In this paper we identify a prime source of such failuresnamely, a systematic overestimation of utility values. Using Watkins' Q-Learning [18] as an example, we give a theoretical account of the phenomenon, deriving conditions under which one may expected it to cause learning to fail. Employing some of the most popular function approximators, we present experimental results which support the theoretical findings. ",
"neighbors": [
173,
552,
565,
738,
843,
882,
1378,
2485
],
"mask": "Train"
},
{
"node_id": 576,
"label": 2,
"text": "Title: An information-maximisation approach to blind separation and blind deconvolution \nAbstract: We derive a new self-organising learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information max-imisation provides a unifying framework for problems in `blind' signal processing. fl Please send comments to tony@salk.edu. This paper will appear as Neural Computation, 7, 6, 1004-1034 (1995). The reference for this version is: Technical Report no. INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523. ",
"neighbors": [
43,
59,
212,
293,
330,
354,
355,
570,
605,
726,
731,
834,
839,
863,
874,
1014,
1067,
1200,
1245,
1258,
1381,
1524,
1526,
1710,
1801,
1814,
1922,
1932,
2026,
2552
],
"mask": "Train"
},
{
"node_id": 577,
"label": 3,
"text": "Title: Operations for Learning with Graphical Models decomposition techniques and the demonstration that graphical models provide\nAbstract: This paper is a multidisciplinary review of empirical, statistical learning from a graphical model perspective. Well-known examples of graphical models include Bayesian networks, directed graphs representing a Markov chain, and undirected networks representing a Markov field. These graphical models are extended to model data analysis and empirical learning using the notation of plates. Graphical operations for simplifying and manipulating a problem are provided including decomposition, differentiation, and the manipulation of probability models from the exponential family. Two standard algorithm schemas for learning are reviewed in a graphical framework: Gibbs sampling and the expectation maximization algorithm. Using these operations and schemas, some popular algorithms can be synthesized from their graphical specification. This includes versions of linear regression, techniques for feed-forward networks, and learning Gaussian and discrete Bayesian networks from data. The paper concludes by sketching some implications for data analysis and summarizing how some popular algorithms fall within the framework presented. ",
"neighbors": [
250,
312,
389,
401,
1502,
1532,
2034,
2492,
2660
],
"mask": "Test"
},
{
"node_id": 578,
"label": 0,
"text": "Title: AN EMPIRICAL APPROACH TO SOLVING THE GENERAL UTILITY PROBLEM IN SPEEDUP LEARNING \nAbstract: The utility problem in speedup learning describes a common behavior of machine learning methods: the eventual degradation of performance due to increasing amounts of learned knowledge. The shape of the learning curve (cost of using a learning method vs. number of training examples) over several domains suggests a parameterized model relating performance to the amount of learned knowledge and a mechanism to limit the amount of learned knowledge for optimal performance. Many recent approaches to avoiding the utility problem in speedup learning rely on sophisticated utility measures and significant numbers of training data to accurately estimate the utility of control knowledge. Empirical results presented here and elsewhere indicate that a simple selection strategy of retaining all control rules derived from a training problem explanation quickly defines an efficient set of control knowledge from few training problems. This simple selection strategy provides a low-cost alternative to example-intensive approaches for improving the speed of a problem solver. Experimentation illustrates the existence of a minimum (representing least cost) in the learning curve which is reached after a few training examples. Stress is placed on controlling the amount of learned knowledge as opposed to which knowledge. An attempt is also made to relate domain characteristics to the shape of the learning curve.",
"neighbors": [
13,
482,
1122,
1333,
1877
],
"mask": "Train"
},
{
"node_id": 579,
"label": 2,
"text": "Title: Comparison of Kernel Estimators, Perceptrons, and Radial-Basis Functions for OCR and Speech Classification \nAbstract: We compare kernel estimators, single and multi-layered perceptrons and radial-basis functions for the problems of classification of handwritten digits and speech phonemes. By taking two different applications and employing many techniques, we report here a two-dimensional study whereby a domain-independent assessment of these learning methods can be possible. We consider a feed-forward network with one hidden layer. As examples of the local methods, we use kernel estimators like k-nearest neighbor (k-nn), Parzen windows, generalized k-nn, and Grow and Learn (Condensed Nearest Neighbor). We have also considered fuzzy k-nn due to its similarity. As distributed networks, we use linear perceptron, pairwise separating linear perceptron, and multilayer perceptrons with sigmoidal hidden units. We also tested the radial-basis function network which is a combination of local and distributed networks. Four criteria are taken for comparison: Correct classification of the test set, network size, learning time, and the operational complexity. We found that perceptrons when the architecture is suitable, generalize better than local, memory-based kernel estimators but require longer training and more precise computation. Local networks are simple, learn very quickly and acceptably, but use more memory. ",
"neighbors": [
611,
696,
747
],
"mask": "Train"
},
{
"node_id": 580,
"label": 0,
"text": "Title: Learning to Improve Case Adaptation by Introspective Reasoning and CBR \nAbstract: In current CBR systems, case adaptation is usually performed by rule-based methods that use task-specific rules hand-coded by the system developer. The ability to define those rules depends on knowledge of the task and domain that may not be available a priori, presenting a serious impediment to endowing CBR systems with the needed adaptation knowledge. This paper describes ongoing research on a method to address this problem by acquiring adaptation knowledge from experience. The method uses reasoning from scratch, based on introspective reasoning about the requirements for successful adaptation, to build up a library of adaptation cases that are stored for future reuse. We describe the tenets of the approach and the types of knowledge it requires. We sketch initial computer implementation, lessons learned, and open questions for further study.",
"neighbors": [
581,
922,
1126,
1212,
1215,
1497
],
"mask": "Train"
},
{
"node_id": 581,
"label": 0,
"text": "Title: Representing Self-knowledge for Introspection about Memory Search \nAbstract: This position paper sketches a framework for modeling introspective reasoning and discusses the relevance of that framework for modeling introspective reasoning about memory search. It argues that effective and flexible memory processing in rich memories should be built on five types of explicitly represented self-knowledge: knowledge about information needs, relationships between different types of information, expectations for the actual behavior of the information search process, desires for its ideal behavior, and representations of how those expectations and desires relate to its actual performance. This approach to modeling memory search is both an illustration of general principles for modeling introspective reasoning and a step towards addressing the problem of how a reasoner human or machinecan acquire knowledge about the properties of its own knowledge base. ",
"neighbors": [
49,
50,
222,
580
],
"mask": "Train"
},
{
"node_id": 582,
"label": 0,
"text": "Title: In Machine Learning: A Multistrategy Approach, Vol. IV Macro and Micro Perspectives of Multistrategy Learning \nAbstract: Machine learning techniques are perceived to have a great potential as means for the acquisition of knowledge; nevertheless, their use in complex engineering domains is still rare. Most machine learning techniques have been studied in the context of knowledge acquisition for well defined tasks, such as classification. Learning for these tasks can be handled by relatively simple algorithms. Complex domains present difficulties that can be approached by combining the strengths of several complementing learning techniques, and overcoming their weaknesses by providing alternative learning strategies. This study presents two perspectives, the macro and the micro, for viewing the issue of multistrategy learning. The macro perspective deals with the decomposition of an overall complex learning task into relatively well-defined learning tasks, and the micro perspective deals with designing multistrategy learning techniques for supporting the acquisition of knowledge for each task. The two perspectives are discussed in the context of ",
"neighbors": [
259,
818,
1498,
1792
],
"mask": "Train"
},
{
"node_id": 583,
"label": 0,
"text": "Title: Introspective reasoning using meta-explanations for multistrategy learning \nAbstract: In order to learn effectively, a reasoner must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires declarative representations of meta-knowledge of the reasoning performed by the system during the performance task, of the system's knowledge, and of the organization of this knowledge. This chapter presents a taxonomy of possible reasoning failures that can occur during a performance task, declarative representations of these failures, and associations between failures and particular learning strategies. The theory is based on Meta-XPs, which are explanation structures that help the system identify failure types, formulate learning goals, and choose appropriate learning strategies in order to avoid similar mistakes in the future. The theory is implemented in a computer model of an introspective reasoner that performs multistrategy learning during a story understanding task. ",
"neighbors": [
50,
64,
284,
643,
1126,
1214,
1278,
2371,
2398,
2568
],
"mask": "Train"
},
{
"node_id": 584,
"label": 3,
"text": "Title: A MEAN FIELD LEARNING ALGORITHM FOR UNSUPERVISED NEURAL NETWORKS \nAbstract: We introduce a learning algorithm for unsupervised neural networks based on ideas from statistical mechanics. The algorithm is derived from a mean field approximation for large, layered sigmoid belief networks. We show how to (approximately) infer the statistics of these networks without resort to sampling. This is done by solving the mean field equations, which relate the statistics of each unit to those of its Markov blanket. Using these statistics as target values, the weights in the network are adapted by a local delta rule. We evaluate the strengths and weaknesses of these networks for problems in statistical pattern recognition. ",
"neighbors": [
250,
427,
639
],
"mask": "Test"
},
{
"node_id": 585,
"label": 5,
"text": "Title: An investigation of noise-tolerant relational concept learning algorithms \nAbstract: We discuss the types of noise that may occur in relational learning systems and describe two approaches to addressing noise in a relational concept learning algorithm. We then evaluate each approach experimentally.",
"neighbors": [
335,
378,
911,
1061,
1275,
2091,
2290,
2291
],
"mask": "Validation"
},
{
"node_id": 586,
"label": 2,
"text": "Title: Neural Learning of Chaotic Dynamics: The Error Propagation Algorithm trains a neural network to identify\nAbstract: Technical Report UMIACS-TR-97-77 and CS-TR-3843 Abstract ",
"neighbors": [
28
],
"mask": "Test"
},
{
"node_id": 587,
"label": 2,
"text": "Title: NONPARAMETRIC SELECTION OF INPUT VARIABLES FOR CONNECTIONIST LEARNING \nAbstract: Technical Report UMIACS-TR-97-77 and CS-TR-3843 Abstract ",
"neighbors": [
88,
214,
427,
2239
],
"mask": "Train"
},
{
"node_id": 588,
"label": 4,
"text": "Title: LEARNING TO AVOID COLLISIONS: A REINFORCEMENT LEARNING PARADIGM FOR MOBILE ROBOT NAVIGATION \nAbstract: The paper describes a self-learning control system for a mobile robot. Based on sensor information the control system has to provide a steering signal in such a way that collisions are avoided. Since in our case no `examples' are available, the system learns on the basis of an external reinforcement signal which is negative in case of a collision and zero otherwise. We describe the adaptive algorithm which is used for a discrete coding of the state space, and the adaptive algorithm for learning the correct mapping from the input (state) vector to the output (steering) signal. ",
"neighbors": [
186,
294,
566,
699,
747
],
"mask": "Train"
},
{
"node_id": 589,
"label": 2,
"text": "Title: BRIGHTNESS PERCEPTION, ILLUSORY CONTOURS, AND CORTICOGENICULATE FEEDBACK \nAbstract: fl Partially supported by the Advanced Research Projects Agency (AFOSR 90-0083). y Partially supported by the Air Force Office of Scientific Research (AFOSR F49620-92-J-0499), the Advanced Research Projects Agency (ONR N00014-92-J-4015), and the Office of Naval Research (ONR N00014-91-J-4100). z Partially funded by the Air Force Office of Scientific Research (AFOSR F49620-92-J-0334) and the Office of Naval Research (ONR N00014-91-J-4100 and ONR N00014-94-1-0597). ",
"neighbors": [
282,
592,
1509
],
"mask": "Train"
},
{
"node_id": 590,
"label": 2,
"text": "Title: APPROXIMATION IN L p (R d FROM SPACES SPANNED BY THE PERTURBED INTEGER TRANSLATES OF\nAbstract: May 14, 1995 Abstract. The problem of approximating smooth L p -functions from spaces spanned by the integer translates of a radially symmetric function is very well understood. In case the points of translation, ffi, are scattered throughout R d , the approximation problem is only well understood in the \"stationary\" setting. In this work, we treat the \"non-stationary\" setting under the assumption that ffi is a small perturbation of Z d . Our results, which are similar in many respects to the known results for the case ffi = Z d , apply specifically to the examples of the Gauss kernel and the Generalized Multiquadric.",
"neighbors": [
364,
365,
366
],
"mask": "Train"
},
{
"node_id": 591,
"label": 6,
"text": "Title: Toward Efficient Agnostic Learning \nAbstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. ",
"neighbors": [
199,
287,
453,
488,
549,
640,
672,
848,
1032,
1105,
1186,
1358,
2054,
2155,
2182,
2475,
2690
],
"mask": "Test"
},
{
"node_id": 592,
"label": 2,
"text": "Title: FIGURE-GROUND SEPARATION BY VISUAL CORTEX Encyclopedia of Neuroscience \nAbstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. ",
"neighbors": [
282,
589,
1144,
1509
],
"mask": "Validation"
},
{
"node_id": 593,
"label": 0,
"text": "Title: THE DESIGN AND IMPLEMENTATION OF A CASE-BASED PLANNING FRAMEWORK WITHIN A PARTIAL-ORDER PLANNER \nAbstract: In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables. ",
"neighbors": [
300,
594
],
"mask": "Train"
},
{
"node_id": 594,
"label": 0,
"text": "Title: Design and Implementation of a Replay Framework based on a Partial Order Planner \nAbstract: In this paper we describe the design and implementation of the derivation replay framework, dersnlp+ebl (Derivational snlp+ebl), which is based within a partial order planner. dersnlp+ebl replays previous plan derivations by first repeating its earlier decisions in the context of the new problem situation, then extending the replayed path to obtain a complete solution for the new problem. When the replayed path cannot be extended into a new solution, explanation-based learning (ebl) techniques are employed to identify the features of the new problem which prevent this extension. These features are then added as censors on the retrieval of the stored case. To keep retrieval costs low, dersnlp+ebl normally stores plan derivations for individual goals, and replays one or more of these derivations in solving multi-goal problems. Cases covering multiple goals are stored only when subplans for individual goals cannot be successfully merged. The aim in constructing the case library is to predict these goal interactions and to store a multi-goal case for each set of negatively interacting goals. We provide empirical results demonstrating the effectiveness of dersnlp+ebl in improving planning performance on randomly-generated problems drawn from a complex domain. ",
"neighbors": [
593,
1122,
1194,
1621
],
"mask": "Test"
},
{
"node_id": 595,
"label": 2,
"text": "Title: LEARNING TO CONTROL FAST-WEIGHT MEMORIES: AN ALTERNATIVE TO DYNAMIC RECURRENT NETWORKS (Neural Computation, 4(1):131-139, 1992) \nAbstract: Previous algorithms for supervised sequence learning are based on dynamic recurrent networks. This paper describes an alternative class of gradient-based systems consisting of two feedforward nets that learn to deal with temporal sequences using fast weights: The first net learns to produce context dependent weight changes for the second net whose weights may vary very quickly. The method offers the potential for STM storage efficiency: A single weight (instead of a full-fledged unit) may be sufficient for storing temporal information. Various learning methods are derived. Two experiments with unknown time delays illustrate the approach. One experiment shows how the system can be used for adaptive temporary variable binding.",
"neighbors": [
121,
233
],
"mask": "Test"
},
{
"node_id": 596,
"label": 6,
"text": "Title: The Disk-Covering Method for Tree Reconstruction \nAbstract: Evolutionary tree reconstruction is a very important step in many biological research problems, and yet is extremely difficult for a variety of computational, statistical, and scientific reasons. In particular, the reconstruction of very large trees containing significant amounts of divergence is especially challenging. We present in this paper a new tree reconstruction method, which we call the Disk-Covering Method, which can be used to recover accurate estimations of the evolutionary tree for otherwise intractable datasets. DCM obtains a decomposition of the input dataset into small overlapping sets of closely related taxa, reconstructs trees on these subsets (using a \"base\" phylogenetic method of choice), and then combines the subtrees into one tree on the entire set of taxa. Because the subproblems analyzed by DCM are smaller, com-putationally expensive methods such as maximum likelihood estimation can be used without incurring too much cost. At the same time, because the taxa within each subset are closely related, even very simple methods (such as neighbor-joining) are much more likely to be highly accurate. The result is that DCM-boosted methods are typically faster and more accurate as compared to \"naive\" use of the same method. In this paper we describe the basic ideas and techniques in DCM, and demonstrate the advantages of DCM experimentally by simulating sequence evolution on a variety of trees.",
"neighbors": [
299
],
"mask": "Train"
},
{
"node_id": 597,
"label": 5,
"text": "Title: Learning Semantic Grammars with Constructive Inductive Logic Programming \nAbstract: Automating the construction of semantic grammars is a difficult and interesting problem for machine learning. This paper shows how the semantic-grammar acquisition problem can be viewed as the learning of search-control heuristics in a logic program. Appropriate control rules are learned using a new first-order induction algorithm that automatically invents useful syntactic and semantic categories. Empirical results show that the learned parsers generalize well to novel sentences and out-perform previous approaches based on connectionist techniques. ",
"neighbors": [
106,
204,
224,
434,
675
],
"mask": "Train"
},
{
"node_id": 598,
"label": 5,
"text": "Title: Dynamic Hammock Predication for Non-predicated Instruction Set Architectures \nAbstract: Conventional speculative architectures use branch prediction to evaluate the most likely execution path during program execution. However, certain branches are difficult to predict. One solution to this problem is to evaluate both paths following such a conditional branch. Predicated execution can be used to implement this form of multi-path execution. Predicated architectures fetch and issue instructions that have associated predicates. These predicates indicate if the instruction should commit its result. Predicating a branch reduces the number of branches executed, eliminating the chance of branch misprediction at the cost of executing additional instructions. In this paper, we propose a restricted form of multi-path execution called Dynamic Predication for architectures with little or no support for predicated instructions in their instruction set. Dynamic predication dynamically predicates instruction sequences in the form of a branch hammock, concurrently executing both paths of the branch. A branch hammock is a short forward branch that spans a few instructions in the form of an if-then or if-then-else construct. We mark these and other constructs in the executable. When the decode stage detects such a sequence, it passes a predicated instruction sequence to a dynamically scheduled execution core. Our results show that dynamic predication can accrue speedups of up to 13%. ",
"neighbors": [
158,
302,
307,
428,
432
],
"mask": "Train"
},
{
"node_id": 599,
"label": 2,
"text": "Title: A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl \nAbstract: In the barn owl, the self-organization of the auditory map of space in the external nucleus of the inferior colliculus (ICx) is strongly influenced by vision, but the nature of this interaction is unknown. In this paper a biologically plausible and mini-malistic model of ICx self-organization is proposed where the ICx receives a learn signal based on the owl's visual attention. When the visual attention is focused in the same spatial location as the auditory input, the learn signal is turned on, and the map is allowed to adapt. A two-dimensional Kohonen map is used to model the ICx, and simulations were performed to evaluate how the learn signal would affect the auditory map. When primary area of visual attention was shifted at different spatial locations, the auditory map shifted to the corresponding location. The shift was complete when done early in the development and partial when done later. Similar results have been observed in the barn owl with its visual field modified with prisms. Therefore, the simulations suggest that a learn signal, based on visual attention, is a possible explanation for the auditory plasticity. ",
"neighbors": [
747
],
"mask": "Validation"
},
{
"node_id": 600,
"label": 2,
"text": "Title: Separating hippocampal maps Spatial Functions of the Hippocampal Formation and the \nAbstract: The place fields of hippocampal cells in old animals sometimes change when an animal is removed from and then returned to an environment [ Barnes et al., 1997 ] . The ensemble correlation between two sequential visits to the same environment shows a strong bimodality for old animals (near 0, indicative of remapping, and greater than 0.7, indicative of a similar representation between experiences), but a strong unimodality for young animals (greater than 0.7, indicative of a similar representation between experiences). One explanation for this is the multi-map hypothesis in which multiple maps are encoded in the hippocampus: old animals may sometimes be returning to the wrong map. A theory proposed by Samsonovich and McNaughton (1997) suggests that the Barnes et al. experiment implies that the maps are pre-wired in the CA3 region of hippocampus. Here, we offer an alternative explanation in which orthogonalization properties in the dentate gyrus (DG) region of hippocampus interact with errors in self-localization (reset of the path integrator on re-entry into the environment) to produce the bimodality. ",
"neighbors": [
205,
745,
747,
1052
],
"mask": "Train"
},
{
"node_id": 601,
"label": 4,
"text": "Title: Active Gesture Recognition using Partially Observable Markov Decision Processes \nAbstract: M.I.T Media Laboratory Perceptual Computing Section Technical Report No. 367 Appeared 13th IEEE Intl. Conference on Pattern Recognition (ICPR '96), Vienna, Austria. Abstract We present a foveated gesture recognition system that guides an active camera to foveate salient features based on a reinforcement learning paradigm. Using vision routines previously implemented for an interactive environment, we determine the spatial location of salient body parts of a user and guide an active camera to obtain images of gestures or expressions. A hidden-state reinforcement learning paradigm based on the Partially Observable Markov Decision Process (POMDP) is used to implement this visual attention. The attention module selects targets to foveate based on the goal of successful recognition, and uses a new multiple-model Q-learning formulation. Given a set of target and distractor gestures, our system can learn where to foveate to maximally discriminate a particular gesture.",
"neighbors": [
3,
552,
565,
611,
1741
],
"mask": "Validation"
},
{
"node_id": 602,
"label": 1,
"text": "Title: Every Niching Method has its Niche: Fitness Sharing and Implicit Sharing Compared \nAbstract: Various extensions to the Genetic Algorithm (GA) attempt to find all or most optima in a search space containing several optima. Many of these emulate natural speciation. For co-evolutionary learning to succeed in a range of management and control problems, such as learning game strategies, such methods must find all or most optima. However, suitable comparison studies are rare. We compare two similar GA specia-tion methods, fitness sharing and implicit sharing. Using a realistic letter classification problem, we find they have advantages under different circumstances. Implicit sharing covers optima more comprehensively, when the population is large enough for a species to form at each optimum. With a population not large enough to do this, fitness sharing can find the optima with larger basins of attraction, and ignore the peaks with narrow bases, while implicit sharing is more easily distracted. This indicates that for a speciated GA trying to find as many near-global optima as possible, implicit sharing works well only if the population is large enough. This requires prior knowledge of how many peaks exist.",
"neighbors": [
163,
1114,
2334
],
"mask": "Train"
},
{
"node_id": 603,
"label": 0,
"text": "Title: METHOD-SPECIFIC KNOWLEDGE COMPILATION: TOWARDS PRACTICAL DESIGN SUPPORT SYSTEMS \nAbstract: Modern knowledge systems for design typically employ multiple problem-solving methods which in turn use different kinds of knowledge. The construction of a heterogeneous knowledge system that can support practical design thus raises two fundamental questions: how to accumulate huge volumes of design information, and how to support heterogeneous design processing? Fortunately, partial answers to both questions exist separately. Legacy databases already contain huge amounts of general-purpose design information. In addition, modern knowledge systems typically characterize the kinds of knowledge needed by specific problem-solving methods quite precisely. This leads us to hypothesize method-specific data-to-knowledge compilation as a potential mechanism for integrating heterogeneous knowledge systems and legacy databases for design. In this paper, first we outline a general computational architecture called HIPED for this integration. Then, we focus on the specific issue of how to convert data accessed from a legacy database into a form appropriate to the problem-solving method used in a heterogeneous knowledge system. We describe an experiment in which a legacy knowledge system called Interactive Kritik is integrated with an ORACLE database using IDI as the communication tool. The limited experiment indicates the computational feasibility of method-specific data-to-knowledge compilation, but also raises additional research issues. ",
"neighbors": [
540,
670,
1047,
1640
],
"mask": "Test"
},
{
"node_id": 604,
"label": 2,
"text": "Title: First experiments using a mixture of nonlinear experts for time series prediction \nAbstract: This paper investigates the advantages and disadvantages of the mixture of experts (ME) model (introduced to the connectionist community in [JJNH91] and applied to time series analysis in [WM95]) on two time series where the dynamics is well understood. The first series is a computer-generated series, consisting of a mixture between a noise-free process (the quadratic map) and a noisy process (a composition of a noisy linear autoregressive and a hyperbolic tangent). There are three main results: (1) the ME model produces significantly better results than single networks; (2) it discovers the regimes correctly and also allows us to characterize the sub-processes through their variances. (3) due to the correct matching of the noise level of the model to that of the data it avoids overfitting. The second series is the laser series used in the Santa Fe competition; the ME model also obtains excellent out-of-sample predictions, allows for analysis and shows no overfitting.",
"neighbors": [
74,
668
],
"mask": "Test"
},
{
"node_id": 605,
"label": 2,
"text": "Title: Learning Viewpoint Invariant Representations of Faces in an Attractor Network \nAbstract: In natural visual experience, different views of an object tend to appear in close temporal proximity as an animal manipulates the object or navigates around it. We investigated the ability of an attractor network to acquire view invariant visual representations by associating first neighbors in a pattern sequence. The pattern sequence contains successive views of faces of ten individuals as they change pose. Under the network dynamics developed by Griniasty, Tsodyks & Amit (1993), multiple views of a given subject fall into the same basin of attraction. We use an independent component (ICA) representation of the faces for the input patterns (Bell & Sejnowski, 1995). The ICA representation has advantages over the principal component representation (PCA) for viewpoint-invariant recognition both with and without the attractor network, suggesting that ICA is a better representation than PCA for object recognition. ",
"neighbors": [
476,
576,
676
],
"mask": "Test"
},
{
"node_id": 606,
"label": 1,
"text": "Title: Analysis of the Numerical Effects of Parallelism on a Parallel Genetic Algorithm \nAbstract: This paper examines the effects of relaxed synchronization on both the numerical and parallel efficiency of parallel genetic algorithms (GAs). We describe a coarse-grain geographically structured parallel genetic algorithm. Our experiments provide preliminary evidence that asynchronous versions of these algorithms have a lower run time than synchronous GAs. Our analysis shows that this improvement is due to (1) decreased synchronization costs and (2) high numerical efficiency (e.g. fewer function evaluations) for the asynchronous GAs. This analysis includes a critique of the utility of traditional parallel performance measures for parallel GAs. ",
"neighbors": [
163,
537
],
"mask": "Test"
},
{
"node_id": 607,
"label": 2,
"text": "Title: A Support Vector Machine Approach to Decision Trees \nAbstract: Key ideas from statistical learning theory and support vector machines are generalized to decision trees. A support vector machine is used for each decision in the tree. The \"optimal\" decision tree is characterized, and both a primal and dual space formulation for constructing the tree are proposed. The result is a method for generating logically simple decision trees with multivariate linear or nonlinear decisions. The preliminary results indicate that the method produces simple trees that generalize well with respect to other decision tree algorithms and single support vector machines.",
"neighbors": [
438,
821,
1055,
1306
],
"mask": "Test"
},
{
"node_id": 608,
"label": 2,
"text": "Title: Regularization Theory and Neural Networks Architectures \nAbstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one-hidden layer. 1 This paper will appear on Neural Computation, vol. 7, pages 219-269, 1995. An earlier version of ",
"neighbors": [
133,
179,
394,
611,
680,
975,
2230
],
"mask": "Validation"
},
{
"node_id": 609,
"label": 2,
"text": "Title: Interactive Segmentation of Three-dimensional Medical Images (Extended abstract) \nAbstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one-hidden layer. 1 This paper will appear on Neural Computation, vol. 7, pages 219-269, 1995. An earlier version of ",
"neighbors": [
719,
747
],
"mask": "Train"
},
{
"node_id": 610,
"label": 2,
"text": "Title: Figure 1: The architecture of a Kohonen network. Each input neuron is fully connected with\nAbstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of Projection Pursuit Regression and several types of neural networks. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that can be generalized to Hyper Basis Functions, b) some tensor product splines, and c) additive splines that can be generalized to schemes of the type of ridge approximation, hinge functions and several perceptron-like neural networks with one-hidden layer. 1 This paper will appear on Neural Computation, vol. 7, pages 219-269, 1995. An earlier version of ",
"neighbors": [
427,
747
],
"mask": "Train"
},
{
"node_id": 611,
"label": 2,
"text": "Title: Learning networks for face analysis and synthesis \nAbstract: This paper presents an overview of the face-related projects in our group. The unifying theme underlying our work is the use of example-based learning methods for both analyzing and synthesizing face images. We label the example face images (and for the problem of face detection, \"near miss\" faces as well) with descriptive parameters for pose, expression, identity, and face vs. non-face. Then, by using example-based learning techniques, we develop networks for performing analysis tasks such as pose and expression estimation, face recognition, and face detection in cluttered scenes. In addition to these analysis applications, we show how the example-based technique can also be used as a novel method for image synthesis that is for computer graphics. ",
"neighbors": [
28,
133,
179,
368,
386,
413,
511,
579,
601,
608,
633,
716,
718,
719,
864,
899,
938,
1079,
1103,
1116,
1265,
1315,
1352,
1488,
1493,
1499,
1564,
1668,
1732,
1755,
1763,
1935,
2050,
2230,
2260,
2325,
2340,
2352,
2378,
2385,
2501,
2505,
2540,
2676
],
"mask": "Validation"
},
{
"node_id": 612,
"label": 0,
"text": "Title: Indexing, Elaboration and Refinement: Incremental Learning of Explanatory Cases \nAbstract: This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good \"lessons\" to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program. ",
"neighbors": [
289,
629,
1046,
1348,
2568
],
"mask": "Validation"
},
{
"node_id": 613,
"label": 2,
"text": "Title: A Generalized Hidden Markov Model for the Recognition of Human Genes in DNA \nAbstract: We present a statistical model of genes in DNA. A Generalized Hidden Markov Model (GHMM) provides the framework for describing the grammar of a legal parse of a DNA sequence (Stormo & Haussler 1994). Probabilities are assigned to transitions between states in the GHMM and to the generation of each nucleotide base given a particular state. Machine learning techniques are applied to optimize these probabilities using a standardized training set. Given a new candidate sequence, the best parse is deduced from the model using a dynamic programming algorithm to identify the path through the model with maximum probability. The GHMM is flexible and modular, so new sensors and additional states can be inserted easily. In addition, it provides simple solutions for integrating cardinality constraints, reading frame constraints, \"indels\", and homology searching. The description and results of an implementation of such a gene-finding model, called Genie, is presented. The exon sensor is a codon frequency model conditioned on windowed nucleotide frequency and the preceding codon. Two neural networks are used, as in (Brunak, Engelbrecht, & Knudsen 1991), for splice site prediction. We show that this simple model performs quite well. For a cross-validated standard test set of 304 genes [ftp://www-hgc.lbl.gov/pub/genesets] in human DNA, our gene-finding system identified up to 85% of protein-coding bases correctly with a specificity of 80%. 58% of exons were exactly identified with a specificity of 51%. Genie is shown to perform favorably compared with several other gene-finding systems. ",
"neighbors": [
14,
232,
268,
616,
2107,
2496,
2571
],
"mask": "Validation"
},
{
"node_id": 614,
"label": 6,
"text": "Title: Optimality and Domination in Repeated Games with Bounded Players \nAbstract: We examine questions of optimality and domination in repeated stage games where one or both players may draw their strategies only from (perhaps different) computationally bounded sets. We also consider optimality and domination when bounded convergence rates of the infinite payoff. We develop a notion of a \"grace period\" to handle the problem of vengeful strategies. ",
"neighbors": [
615
],
"mask": "Validation"
},
{
"node_id": 615,
"label": 6,
"text": "Title: Efficient Algorithms for Learning to Play Repeated Games Against Computationally Bounded Adversaries \nAbstract: We study the problem of efficiently learning to play a game optimally against an unknown adversary chosen from a computationally bounded class. We both contribute to the line of research on playing games against finite automata, and expand the scope of this research by considering new classes of adversaries. We introduce the natural notions of games against recent history adversaries (whose current action is determined by some simple boolean formula on the recent history of play), and games against statistical adversaries (whose current action is determined by some simple function of the statistics of the entire history of play). In both cases we give efficient algorithms for learning to play penny-matching and a more difficult game called contract . We also give the most powerful positive result to date for learning to play against finite automata, an efficient algorithm for learning to play any game against any finite automata with probabilistic actions and low cover time. ",
"neighbors": [
54,
98,
555,
556,
614
],
"mask": "Train"
},
{
"node_id": 616,
"label": 2,
"text": "Title: A Decision Tree System for Finding Genes in DNA \nAbstract: MORGAN is an integrated system for finding genes in vertebrate DNA sequences. MORGAN uses a variety of techniques to accomplish this task, the most distinctive of which is a decision tree classifier. The decision tree system is combined with new methods for identifying start codons, donor sites, and acceptor sites, and these are brought together in a frame-sensitive dynamic programming algorithm that finds the optimal segmentation of a DNA sequence into coding and noncoding regions (exons and introns). The optimal segmentation is dependent on a separate scoring function that takes a subsequence and assigns to it a score reflecting the probability that the sequence is an exon. The scoring functions in MORGAN are sets of decision trees that are combined to give a probability estimate. Experimental results on a database of 570 vertebrate DNA sequences show that MORGAN has excellent performance by many different measures. On a separate test set, it achieves an overall accuracy of 95%, with a correlation coefficient of 0.78 and a sensitivity and specificity for coding bases of 83% and 79%. In addition, MORGAN identifies 58% of coding exons exactly; i.e., both the beginning and end of the coding regions are predicted correctly. This paper describes the MORGAN system, including its decision tree routines and the algorithms for site recognition, and its performance on a benchmark database of vertebrate DNA. ",
"neighbors": [
268,
438,
613,
2046
],
"mask": "Train"
},
{
"node_id": 617,
"label": 2,
"text": "Title: The Use of Neural Networks to Support \"Intelligent\" Scientific Computing \nAbstract: In this paper we report on the use of backpropagation based neural networks to implement a phase of the computational intelligence process of the PYTHIA[3] expert system for supporting the numerical simulation of applications modelled by partial differential equations (PDEs). PYTHIA is an exemplar based reasoning system that provides advice on what method and parameters to use for the simulation of a specified PDE based application. When advice is requested, the characteristics of the given model are matched with the characteristics of previously seen classes of models. The performance of various solution methods on previously seen similar classes of models is then used as a basis for predicting what method to use. Thus, a major step of the reasoning process in PYTHIA involves the analysis and categorization of models into classes of models based on their characteristics. In this study we demonstrate the use of neural networks to identify the class of predefined models whose characteristics match the ones of the specified PDE based application. ",
"neighbors": [
226
],
"mask": "Validation"
},
{
"node_id": 618,
"label": 6,
"text": "Title: 0 Inductive learning of compact rule sets by using efficient hypotheses reduction \nAbstract: A method is described which reduces the hypotheses space with an efficient and easily interpretable reduction criteria called a - reduction. A learning algorithm is described based on a - reduction and analyzed by using probability approximate correct learning results. The results are obtained by reducing a rule set to an equivalent set of kDNF formulas. The goal of the learning algorithm is to induce a compact rule set describing the basic dependencies within a set of data. The reduction is based on criterion which is very exible and gives a semantic interpretation of the rules which fulfill the criteria. Comparison with syntactical hypotheses reduction show that the a reduction improves search and has a smaller probability of missclassification. ",
"neighbors": [
478,
638
],
"mask": "Train"
},
{
"node_id": 619,
"label": 3,
"text": "Title: MEDIATING INSTRUMENTAL VARIABLES \nAbstract: A method is described which reduces the hypotheses space with an efficient and easily interpretable reduction criteria called a - reduction. A learning algorithm is described based on a - reduction and analyzed by using probability approximate correct learning results. The results are obtained by reducing a rule set to an equivalent set of kDNF formulas. The goal of the learning algorithm is to induce a compact rule set describing the basic dependencies within a set of data. The reduction is based on criterion which is very exible and gives a semantic interpretation of the rules which fulfill the criteria. Comparison with syntactical hypotheses reduction show that the a reduction improves search and has a smaller probability of missclassification. ",
"neighbors": [
260
],
"mask": "Train"
},
{
"node_id": 620,
"label": 2,
"text": "Title: How Receptive Field Parameters Affect Neural Learning \nAbstract: We identify the three principle factors affecting the performance of learning by networks with localized units: unit noise, sample density, and the structure of the target function. We then analyze the effect of unit receptive field parameters on these factors and use this analysis to propose a new learning algorithm which dynamically alters receptive field properties during learning.",
"neighbors": [
747
],
"mask": "Train"
},
{
"node_id": 621,
"label": 4,
"text": "Title: Reinforcement Learning Methods for Continuous-Time Markov Decision Problems \nAbstract: Semi-Markov Decision Problems are continuous time generalizations of discrete time Markov Decision Problems. A number of reinforcement learning algorithms have been developed recently for the solution of Markov Decision Problems, based on the ideas of asynchronous dynamic programming and stochastic approximation. Among these are TD(), Q-learning, and Real-time Dynamic Programming. After reviewing semi-Markov Decision Problems and Bellman's optimality equation in that context, we propose algorithms similar to those named above, adapted to the solution of semi-Markov Decision Problems. We demonstrate these algorithms by applying them to the problem of determining the optimal control for a simple queueing system. We conclude with a discussion of circumstances under which these algorithms may be usefully ap plied.",
"neighbors": [
471,
552,
565,
738,
1859
],
"mask": "Train"
},
{
"node_id": 622,
"label": 2,
"text": "Title: CLASSIFICATION USING HIERARCHICAL MIXTURES OF EXPERTS \nAbstract: There has recently been widespread interest in the use of multiple models for classification and regression in the statistics and neural networks communities. The Hierarchical Mixture of Experts (HME) [1] has been successful in a number of regression problems, yielding significantly faster training through the use of the Expectation Maximisation algorithm. In this paper we extend the HME to classification and results are reported for three common classification benchmark tests: Exclusive-Or, N-input Parity and Two Spirals. ",
"neighbors": [
74,
345,
377
],
"mask": "Validation"
},
{
"node_id": 623,
"label": 3,
"text": "Title: State-Space Abstraction for Anytime Evaluation of Probabilistic Networks \nAbstract: One important factor determining the computa - tional complexity of evaluating a probabilistic network is the cardinality of the state spaces of the nodes. By varying the granularity of the state spaces, one can trade off accuracy in the result for computational efficiency. We present an any - time procedure for approximate evaluation of probabilistic networks based on this idea. On application to some simple networks, the proce - dure exhibits a smooth improvement in approxi - mation quality as computation time increases. This suggests that statespace abstraction is one more useful control parameter for designing real-time probabilistic reasoners. ",
"neighbors": [
637,
1064,
1172,
1937,
2140,
2341
],
"mask": "Test"
},
{
"node_id": 624,
"label": 6,
"text": "Title: Measuring the Difficulty of Specific Learning Problems \nAbstract: Existing complexity measures from contemporary learning theory cannot be conveniently applied to specific learning problems (e.g., training sets). Moreover, they are typically non-generic, i.e., they necessitate making assumptions about the way in which the learner will operate. The lack of a satisfactory, generic complexity measure for learning problems poses difficulties for researchers in various areas; the present paper puts forward an idea which may help to alleviate these. It shows that supervised learning problems fall into two, generic, complexity classes only one of which is associated with computational tractability. By determining which class a particular problem belongs to, we can thus effectively evaluate its degree of generic difficulty. ",
"neighbors": [
11,
112,
163,
572,
659,
700
],
"mask": "Test"
},
{
"node_id": 625,
"label": 2,
"text": "Title: Statistical Biases in Backpropagation Learning Keywords: Cognitive Science, Pattern recognition \nAbstract: The paper investigates the statistical effects which may need to be exploited in supervised learning. It notes that these effects can be classified according to their conditionality and their order and proposes that learning algorithms will typically have some form of bias towards particular classes of effect. It presents the results of an empirical study of the statistical bias of backpropagation. The study involved applying the algorithm to a wide range of learning problems using a variety of different internal architectures. The results of the study revealed that backpropagation has a very specific bias in the general direction of statistical rather than relational effects. The paper shows how the existence of this bias effectively constitutes a weakness in the algorithm's ability to discount noise. ",
"neighbors": [
659,
700
],
"mask": "Validation"
},
{
"node_id": 626,
"label": 2,
"text": "Title: Scatter-partitioning RBF network for function regression and image \nAbstract: segmentation: Preliminary results Abstract. Scatter-partitioning Radial Basis Function (RBF) networks increase their number of degrees of freedom with the complexity of an input-output mapping to be estimated on the basis of a supervised training data set. Due to its superior expressive power a scatter-partitioning Gaussian RBF (GRBF) model, termed Supervised Growing Neural Gas (SGNG), is selected from the literature. SGNG employs a one-stage error-driven learning strategy and is capable of generating and removing both hidden units and synaptic connections. A slightly modified SGNG version is tested as a function estimator when the training surface to be fitted is an image, i.e., a 2-D signal whose size is finite. The relationship between the generation, by the learning system, of disjointed maps of hidden units and the presence, in the image, of pictorially homogeneous subsets (segments) is investigated. Unfortunately, the examined SGNG version performs poorly both as function estimator and image segmenter. This may be due to an intrinsic inadequacy of the one-stage error-driven learning strategy to adjust structural parameters and output weights simultaneously but consistently. In the framework of RBF networks, further studies should investigate the combination of two-stage error-driven learning strategies with synapse generation and removal criteria. y Internal report of the paper entitled \"Image segmentation with scatter-partitioning RBF networks: A feasibility study,\" to be presented at the conference Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation, part of SPIE's International Symposium on Optical Science, Engineering and Instrumentation, 19-24 July 1998, San Diego, CA. ",
"neighbors": [
261,
687
],
"mask": "Validation"
},
{
"node_id": 627,
"label": 2,
"text": "Title: Learning Symbolic Rules Using Artificial Neural Networks \nAbstract: A distinct advantage of symbolic learning algorithms over artificial neural networks is that typically the concept representations they form are more easily understood by humans. One approach to understanding the representations formed by neural networks is to extract symbolic rules from trained networks. In this paper we describe and investigate an approach for extracting rules from networks that uses (1) the NofM extraction algorithm, and (2) the network training method of soft weight-sharing. Previously, the NofM algorithm had been successfully applied only to knowledge-based neural networks. Our experiments demonstrate that our extracted rules generalize better than rules learned using the C4.5 system. In addition to being accurate, our extracted rules are also reasonably comprehensible.",
"neighbors": [
338,
1057,
1270,
1562,
1679
],
"mask": "Test"
},
{
"node_id": 628,
"label": 2,
"text": "Title: Figure 8: time complexity of unit parallelism measured on MANNA theoretical prediction #nodes N time\nAbstract: Our experience showed us that exibility in expressing a parallel algorithm for simulating neural networks is desirable even if it is not possible then to obtain the most efficient solution for any single training algorithm. We believe that the advantages of a clear and easy to understand program predominates the disadvantages of approaches allowing only for a specific machine or neural network algorithm. We currently investigate if other neural network models are worth while being parallelized, and how the resulting parallel algorithms can be composed of a few common basic building blocks and the logarithmic tree as efficient communication structure. 1 2 4 8 2 500 connections 40 000 connections [1] D. Ackley, G. Hinton, T. Sejnowski: A Learning Algorithm for Boltzmann Machines, Cognitive Science 9, pp. 147-169, 1985 [2] B. M. Forrest et al.: Implementing Neural Network Models on Parallel Computers, The computer Journal, vol. 30, no. 5, 1987 [3] W. Giloi: Latency Hiding in Message Passing Architectures, International Parallel Processing Symposium, April 1994, Cancun, Mexico, IEEE Computer Society Press [4] T. Nordstrm, B. Svensson: Using And Designing Massively Parallel Computers for Artificial Neural Networks, Journal Of Parallel And Distributed Computing, vol. 14, pp. 260-285, 1992 [5] A. Kramer, A. Vincentelli: Efficient parallel learning algorithms for neural networks, Advances in Neural Information Processing Systems I, D. Touretzky (ed.), pp. 40-48, 1989 [6] T. Kohonen: Self-Organization and Associative Memory, Springer-Verlag, Berlin, 1988 [7] D. A. Pomerleau, G. L. Gusciora, D. L. Touretzky, H. T. Kung: Neural Network Simulation at Warp Speed: How We Got 17 Million Connections Per Second, IEEE Intern. Conf. Neural Networks, July 1988 [8] A. Rbel: Dynamic selection of training patterns for neural networks: A new method to control the generalization, Technical Report 92-39, Technical University of Berlin, 1993 [9] D. E. Rumelhart, D. E. Hinton, R. J. Williams: Learning internal representations by error propagation, Rumelhart & McClelland (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. I, pp. 318-362, Bradford Books/MIT Press, Cambridge, MA, 1986 [10] W. Schiffmann, M. Joost, R. Werner: Comparison of optimized backpropagation algorithms, Proc. of the European Symposium on Artificial Neural Networks, ESANN '93, Brussels, pp. 97-104, 1993 [11] J. Schmidhuber: Accelerated Learning in BackPropagation Nets, Connectionism in perspective, Elsevier Science Publishers B.V. (North-Holland), pp 439-445,1989 [12] M. Taylor, P. Lisboa (eds.): Techniques and Applications of Neural Networks, Ellis Horwood, 1993 [13] M. Witbrock, M. Zagha: An implementation of backpropagation learning on GF11, a large SIMD parallel computer, Parallel Computing, vol. 14, pp. 329-346, 1990 [14] X. Zhang, M. Mckenna, J. P. Mesirov, D. L. Waltz: The backpropagation algorithm on grid and hypercube architectures, Parallel Computing, vol. 14, pp. 317-327, 1990 ",
"neighbors": [
747
],
"mask": "Train"
},
{
"node_id": 629,
"label": 0,
"text": "Title: Using Introspective Reasoning to Select Learning Strategies \nAbstract: In order to learn effectively, a system must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires a declaratflive representation of the reasoning performed by the system during the performance task. This paper presents a taxonomy of possible reasoning failures that can occur during this task, their declarative representations, and their associations with particular learning strategies. We propose a theory of Meta-XPs, which are explanation structures that help the system identify failure types and choose appropriate learning strategies in order to avoid similar mistakes in the future. A program called Meta-AQUA embodies the theory and processes examples in the domain of drug smuggling. ",
"neighbors": [
221,
612,
1535,
1537
],
"mask": "Train"
},
{
"node_id": 630,
"label": 2,
"text": "Title: Input to State Stabilizability for Parameterized Families of Systems Key Words: Nonlinear stability, Robust control,\nAbstract: In order to learn effectively, a system must not only possess knowledge about the world and be able to improve that knowledge, but it also must introspectively reason about how it performs a given task and what particular pieces of knowledge it needs to improve its performance at the current task. Introspection requires a declaratflive representation of the reasoning performed by the system during the performance task. This paper presents a taxonomy of possible reasoning failures that can occur during this task, their declarative representations, and their associations with particular learning strategies. We propose a theory of Meta-XPs, which are explanation structures that help the system identify failure types and choose appropriate learning strategies in order to avoid similar mistakes in the future. A program called Meta-AQUA embodies the theory and processes examples in the domain of drug smuggling. ",
"neighbors": [
408,
447
],
"mask": "Validation"
},
{
"node_id": 631,
"label": 2,
"text": "Title: Extracting Provably Correct Rules from Artificial Neural Networks \nAbstract: Although connectionist learning procedures have been applied successfully to a variety of real-world scenarios, artificial neural networks have often been criticized for exhibiting a low degree of comprehensibility. Mechanisms that automatically compile neural networks into symbolic rules offer a promising perspective to overcome this practical shortcoming of neural network representations. This paper describes an approach to neural network rule extraction based on Validity Interval Analysis (VI-Analysis). VI-Analysis is a generic tool for extracting symbolic knowledge from Backpropagation-style artificial neural networks. It does this by propagating whole intervals of activations through the network in both the forward and backward directions. In the context of rule extraction, these intervals are used to prove or disprove the correctness of conjectured rules. We describe techniques for generating and testing rule hypotheses, and demonstrate these using some simple classification tasks including the MONK's benchmark problems. Rules extracted by VI-Analysis are provably correct. No assumptions are made about the topology of the network at hand, as well as the procedure employed for training the network. ",
"neighbors": [
383,
1057,
1562
],
"mask": "Validation"
},
{
"node_id": 632,
"label": 3,
"text": "Title: Toward Optimal Feature Selection \nAbstract: In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computation-ally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively han dles datasets with large numbers of features.",
"neighbors": [
401,
430,
635,
683,
1582,
1909,
2508
],
"mask": "Train"
},
{
"node_id": 633,
"label": 4,
"text": "Title: Chapter 1 Reinforcement Learning for Planning and Control \nAbstract: In this paper, we examine a method for feature subset selection based on Information Theory. Initially, a framework for defining the theoretically optimal, but computation-ally intractable, method for feature subset selection is presented. We show that our goal should be to eliminate a feature if it gives us little or no additional information beyond that subsumed by the remaining features. In particular, this will be the case for both irrelevant and redundant features. We then give an efficient algorithm for feature selection which computes an approximation to the optimal feature selection criterion. The conditions under which the approximate algorithm is successful are examined. Empirical results are given on a number of data sets, showing that the algorithm effectively han dles datasets with large numbers of features.",
"neighbors": [
197,
294,
565,
566,
611,
1459
],
"mask": "Train"
},
{
"node_id": 634,
"label": 0,
"text": "Title: Oblivious Decision Trees and Abstract Cases \nAbstract: In this paper, we address the problem of case-based learning in the presence of irrelevant features. We review previous work on attribute selection and present a new algorithm, Oblivion, that carries out greedy pruning of oblivious decision trees, which effectively store a set of abstract cases in memory. We hypothesize that this approach will efficiently identify relevant features even when they interact, as in parity concepts. We report experimental results on artificial domains that support this hypothesis, and experiments with natural domains that show improvement in some cases but not others. In closing, we discuss the implications of our experiments, consider additional work on irrelevant features, and outline some directions for future research. ",
"neighbors": [
256,
430,
635,
686,
1513,
1570,
2593
],
"mask": "Train"
},
{
"node_id": 635,
"label": 6,
"text": "Title: Learning Boolean Concepts in the Presence of Many Irrelevant Features \nAbstract: In this paper, we address the problem of case-based learning in the presence of irrelevant features. We review previous work on attribute selection and present a new algorithm, Oblivion, that carries out greedy pruning of oblivious decision trees, which effectively store a set of abstract cases in memory. We hypothesize that this approach will efficiently identify relevant features even when they interact, as in parity concepts. We report experimental results on artificial domains that support this hypothesis, and experiments with natural domains that show improvement in some cases but not others. In closing, we discuss the implications of our experiments, consider additional work on irrelevant features, and outline some directions for future research. ",
"neighbors": [
89,
109,
172,
177,
208,
256,
264,
375,
381,
430,
436,
442,
474,
632,
634,
640,
651,
660,
683,
686,
722
],
"mask": "Validation"
},
{
"node_id": 636,
"label": 4,
"text": "Title: Robot Shaping: Developing Situated Agents through Learning \nAbstract: Learning plays a vital role in the development of situated agents. In this paper, we explore the use of reinforcement learning to \"shape\" a robot to perform a predefined target behavior. We connect both simulated and real robots to A LECSYS, a parallel implementation of a learning classifier system with an extended genetic algorithm. After classifying different kinds of Animat-like behaviors, we explore the effects on learning of different types of agent's architecture (monolithic, flat and hierarchical) and of training strategies. In particular, hierarchical architecture requires the agent to learn how to coordinate basic learned responses. We show that the best results are achieved when both the agent's architecture and the training strategy match the structure of the behavior pattern to be learned. We report the results of a number of experiments carried out both in simulated and in real environments, and show that the results of simulations carry smoothly to real robots. While most of our experiments deal with simple reactive behavior, in one of them we demonstrate the use of a simple and general memory mechanism. As a whole, our experimental activity demonstrates that classifier systems with genetic algorithms can be practically employed to develop autonomous agents. ",
"neighbors": [
118,
294,
552,
764,
1573,
2027,
2173,
2174,
2204,
2233,
2672,
2687
],
"mask": "Train"
},
{
"node_id": 637,
"label": 3,
"text": "Title: Computational complexity reduction for BN2O networks using similarity of states \nAbstract: Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, inference computation time can be reduced in most practical cases by exploiting domain knowledge and by making appropriate approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation which is based on this property. We define two or more states of a node to be similar when the likelihood ratio of their probabilities does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computational complexity of probabilistic inference in networks with multiple similar states. For example, we show that a BN2O network|a two layer networks often used in diagnostic problems|can be reduced to a very close network with multiple similar states. Probabilistic inference in the new network can be done in only polynomial time with respect to the size of the network, and the results for queries of practical importance are very close to the results that can be obtained in exponential time with the original network. The error introduced by our reduction converges to zero faster than exponentially with respect to the degree of the polynomial describing the resulting computational complexity. ",
"neighbors": [
515,
623
],
"mask": "Test"
},
{
"node_id": 638,
"label": 6,
"text": "Title: Learning a set of primitive actions with an Induction of decision trees. Machine Learning, 1(1):81-106,\nAbstract: Although probabilistic inference in a general Bayesian belief network is an NP-hard problem, inference computation time can be reduced in most practical cases by exploiting domain knowledge and by making appropriate approximations in the knowledge representation. In this paper we introduce the property of similarity of states and a new method for approximate knowledge representation which is based on this property. We define two or more states of a node to be similar when the likelihood ratio of their probabilities does not depend on the instantiations of the other nodes in the network. We show that the similarity of states exposes redundancies in the joint probability distribution which can be exploited to reduce the computational complexity of probabilistic inference in networks with multiple similar states. For example, we show that a BN2O network|a two layer networks often used in diagnostic problems|can be reduced to a very close network with multiple similar states. Probabilistic inference in the new network can be done in only polynomial time with respect to the size of the network, and the results for queries of practical importance are very close to the results that can be obtained in exponential time with the original network. The error introduced by our reduction converges to zero faster than exponentially with respect to the degree of the polynomial describing the resulting computational complexity. ",
"neighbors": [
89,
227,
264,
294,
296,
348,
375,
383,
441,
449,
521,
618,
701,
722,
779,
862,
1027,
1189,
1312,
1327,
1460,
1468,
1539,
1808,
1895,
1919,
2045,
2146,
2171,
2409,
2426,
2465,
2541,
2673
],
"mask": "Train"
},
{
"node_id": 639,
"label": 3,
"text": "Title: Bayesian Unsupervised Learning of Higher Order Structure \nAbstract: Multilayer architectures such as those used in Bayesian belief networks and Helmholtz machines provide a powerful framework for representing and learning higher order statistical relations among inputs. Because exact probability calculations with these models are often intractable, there is much interest in finding approximate algorithms. We present an algorithm that efficiently discovers higher order structure using EM and Gibbs sampling. The model can be interpreted as a stochastic recurrent network in which ambiguity in lower-level states is resolved through feedback from higher levels. We demonstrate the performance of the algorithm on bench mark problems.",
"neighbors": [
250,
584,
1763
],
"mask": "Test"
},
{
"node_id": 640,
"label": 6,
"text": "Title: Learning in the Presence of Malicious Errors \nAbstract: In this paper we study an extension of the distribution-free model of learning introduced by Valiant [23] (also known as the probably approximately correct or PAC model) that allows the presence of malicious errors in the examples given to a learning algorithm. Such errors are generated by an adversary with unbounded computational power and access to the entire history of the learning algorithm's computation. Thus, we study a worst-case model of errors. Our results include general methods for bounding the rate of error tolerable by any learning algorithm, efficient algorithms tolerating nontrivial rates of malicious errors, and equivalences between problems of learning with errors and standard combinatorial optimization problems.",
"neighbors": [
20,
109,
130,
199,
287,
459,
549,
574,
591,
635,
732,
1363,
1897,
2054,
2475
],
"mask": "Test"
},
{
"node_id": 641,
"label": 3,
"text": "Title: Comparing Bayesian Model Class Selection Criteria by Discrete Finite Mixtures \nAbstract: We investigate the problem of computing the posterior probability of a model class, given a data sample and a prior distribution for possible parameter settings. By a model class we mean a group of models which all share the same parametric form. In general this posterior may be very hard to compute for high-dimensional parameter spaces, which is usually the case with real-world applications. In the literature several methods for computing the posterior approximately have been proposed, but the quality of the approximations may depend heavily on the size of the available data sample. In this work we are interested in testing how well the approximative methods perform in real-world problem domains. In order to conduct such a study, we have chosen the model family of finite mixture distributions. With certain assumptions, we are able to derive the model class posterior analytically for this model family. We report a series of model class selection experiments on real-world data sets, where the true posterior and the approximations are compared. The empirical results support the hypothesis that the approximative techniques can provide good estimates of the true posterior, especially when the sample size grows large. ",
"neighbors": [
376,
484,
1739
],
"mask": "Train"
},
{
"node_id": 642,
"label": 3,
"text": "Title: Constructing Bayesian finite mixture models by the EM algorithm \nAbstract: Email: Firstname.Lastname@cs.Helsinki.FI Report C-1996-9, University of Helsinki, Department of Computer Science. Abstract In this paper we explore the use of finite mixture models for building decision support systems capable of sound probabilistic inference. Finite mixture models have many appealing properties: they are computationally efficient in the prediction (reasoning) phase, they are universal in the sense that they can approximate any problem domain distribution, and they can handle multimod-ality well. We present a formulation of the model construction problem in the Bayesian framework for finite mixture models, and describe how Bayesian inference is performed given such a model. The model construction problem can be seen as missing data estimation and we describe a realization of the Expectation-Maximization (EM) algorithm for finding good models. To prove the feasibility of our approach, we report crossvalidated empirical results on several publicly available classification problem datasets, and compare our results to corresponding results obtained by alternative techniques, such as neural networks and decision trees. The comparison is based on the best results reported in the literature on the datasets in question. It appears that using the theoretically sound Bayesian framework suggested here the other reported results can be outperformed with a relatively small effort. ",
"neighbors": [
484,
697
],
"mask": "Train"
},
{
"node_id": 643,
"label": 0,
"text": "Title: Modeling Case-based Planning for Repairing Reasoning Failures \nAbstract: One application of models of reasoning behavior is to allow a reasoner to introspectively detect and repair failures of its own reasoning process. We address the issues of the transferability of such models versus the specificity of the knowledge in them, the kinds of knowledge needed for self-modeling and how that knowledge is structured, and the evaluation of introspective reasoning systems. We present the ROBBIE system which implements a model of its planning processes to improve the planner in response to reasoning failures. We show how ROBBIE's hierarchical model balances model generality with access to implementation-specific details, and discuss the qualitative and quantitative measures we have used for evaluating its introspective component. ",
"neighbors": [
49,
50,
150,
222,
583,
1121,
1904,
2489
],
"mask": "Train"
},
{
"node_id": 644,
"label": 4,
"text": "Title: Multi-criteria reinforcement learning \nAbstract: We consider multi-criteria sequential decision making problems, where the criteria are ordered according to their importance. Structural properties of these problems are touched and reinforcement learning algorithms, which learn asymptotically optimal decisions, are derived. Computer experiments confirm the theoretical results and provide further insight in the learning processes.",
"neighbors": [
210,
552,
565
],
"mask": "Train"
},
{
"node_id": 645,
"label": 3,
"text": "Title: On the Markov Equivalence of Chain Graphs, Undirected Graphs, and Acyclic Digraphs \nAbstract: Graphical Markov models use undirected graphs (UDGs), acyclic directed graphs (ADGs), or (mixed) chain graphs to represent possible dependencies among random variables in a multivariate distribution. Whereas a UDG is uniquely determined by its associated Markov model, this is not true for ADGs or for general chain graphs (which include both UDGs and ADGs as special cases). This paper addresses three questions regarding the equivalence of graphical Markov models: when is a given chain graph Markov equivalent (1) to some UDG? (2) to some (at least one) ADG? (3) to some decomposable UDG? The answers are obtained by means of an extension of Frydenberg's (1990) elegant graph-theoretic characterization of the Markov equivalence of chain graphs.",
"neighbors": [
51,
211,
312,
742
],
"mask": "Validation"
},
{
"node_id": 646,
"label": 3,
"text": "Title: Constructing Computationally Efficient Bayesian Models via Unsupervised Clustering Probabilistic Reasoning and Bayesian Belief Networks, \nAbstract: Given a set of samples of an unknown probability distribution, we study the problem of constructing a good approximative Bayesian network model of the probability distribution in question. This task can be viewed as a search problem, where the goal is to find a maximal probability network model, given the data. In this work, we do not make an attempt to learn arbitrarily complex multi-connected Bayesian network structures, since such resulting models can be unsuitable for practical purposes due to the exponential amount of time required for the reasoning task. Instead, we restrict ourselves to a special class of simple tree-structured Bayesian networks called Bayesian prototype trees, for which a polynomial time algorithm for Bayesian reasoning exists. We show how the probability of a given Bayesian prototype tree model can be evaluated, given the data, and how this evaluation criterion can be used in a stochastic simulated annealing algorithm for searching the model space. The simulated annealing algorithm provably finds the maximal probability model, provided that a sufficient amount of time is used.",
"neighbors": [
450
],
"mask": "Train"
},
{
"node_id": 647,
"label": 3,
"text": "Title: Balls and Urns \nAbstract: We use a simple and illustrative example to expose some of the main ideas of Evidential Probability. Specifically, we show how the use of an acceptance rule naturally leads to the use of intervals to represent probabilities, how change of opinion due to experience can be facilitated, and how probabilities concerning compound experiments or events can be computed given the proper knowledge of the underlying distributions.",
"neighbors": [
81
],
"mask": "Test"
},
{
"node_id": 648,
"label": 2,
"text": "Title: Reprint of: Sontag, E.D., \"Remarks on stabilization and input-to-state stability,\" \nAbstract: We use a simple and illustrative example to expose some of the main ideas of Evidential Probability. Specifically, we show how the use of an acceptance rule naturally leads to the use of intervals to represent probabilities, how change of opinion due to experience can be facilitated, and how probabilities concerning compound experiments or events can be computed given the proper knowledge of the underlying distributions.",
"neighbors": [
693
],
"mask": "Train"
},
{
"node_id": 649,
"label": 0,
"text": "Title: Concept Learning and Heuristic Classification in Weak-Theory Domains 1 \nAbstract: We use a simple and illustrative example to expose some of the main ideas of Evidential Probability. Specifically, we show how the use of an acceptance rule naturally leads to the use of intervals to represent probabilities, how change of opinion due to experience can be facilitated, and how probabilities concerning compound experiments or events can be computed given the proper knowledge of the underlying distributions.",
"neighbors": [
96,
166,
457,
479,
752,
1643,
2123,
2294,
2403,
2560,
2581
],
"mask": "Train"
},
{
"node_id": 650,
"label": 4,
"text": "Title: Learning to Use Selective Attention and Short-Term Memory in Sequential Tasks \nAbstract: This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or memory-based) learning and work with robust statistical tests for separating noise from task structure, the method learns quickly, creates only task-relevant state distinctions, and handles noise well. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees [ Ron et al., 1994 ] , Parti-game [ Moore, 1993 ] , G-algorithm [ Chap-man and Kaelbling, 1991 ] , and Variable Resolution Dynamic Programming [ Moore, 1991 ] . It builds on Utile Suffix Memory [ McCallum, 1995c ] , which only used short-term memory, not selective perception. The algorithm is demonstrated solving a highway driving task in which the agent weaves around slower and faster traffic. The agent uses active perception with simulated eye movements. The environment has hidden state, time pressure, stochasticity, over 21,000 world states and over 2,500 percepts. From this environment and sensory system, the agent uses a utile distinction test to build a tree that represents depth-three memory where necessary, and has just 143 internal statesfar fewer than the 2500 3 states that would have resulted from a fixed-sized history-window ap proach.",
"neighbors": [
472,
483,
656,
1006,
1557
],
"mask": "Test"
},
{
"node_id": 651,
"label": 2,
"text": "Title: A Monotonic Measure for Optimal Feature Selection \nAbstract: Feature selection is a problem of choosing a subset of relevant features. In general, only exhaustive search can bring about the optimal subset. With a monotonic measure, exhaustive search can be avoided without sacrificing optimality. Unfortunately, most error- or distance-based measures are not monotonic. A new measure is employed in this work that is monotonic and fast to compute. The search for relevant features according to this measure is guaranteed to be complete but not exhaustive. Experiments are conducted for verification.",
"neighbors": [
430,
635
],
"mask": "Train"
},
{
"node_id": 652,
"label": 5,
"text": "Title: ARB: A Hardware Mechanism for Dynamic Reordering of Memory References* \nAbstract: Feature selection is a problem of choosing a subset of relevant features. In general, only exhaustive search can bring about the optimal subset. With a monotonic measure, exhaustive search can be avoided without sacrificing optimality. Unfortunately, most error- or distance-based measures are not monotonic. A new measure is employed in this work that is monotonic and fast to compute. The search for relevant features according to this measure is guaranteed to be complete but not exhaustive. Experiments are conducted for verification.",
"neighbors": [
86,
249,
373
],
"mask": "Train"
},
{
"node_id": 653,
"label": 4,
"text": "Title: Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email \nAbstract: This paper describes a novel method by which a dialogue agent can learn to choose an optimal dialogue strategy. While it is widely agreed that dialogue strategies should be formulated in terms of communicative intentions, there has been little work on automatically optimizing an agent's choices when there are multiple ways to realize a communicative intention. Our method is based on a combination of learning algorithms and empirical evaluation techniques. The learning component of our method is based on algorithms for reinforcement learning, such as dynamic programming and Q-learning. The empirical component uses the PARADISE evaluation framework (Walker et al., 1997) to identify the important performance factors and to provide the performance function needed by the learning algorithm. We illustrate our method with a dialogue agent named ELVIS (EmaiL Voice Interactive System), that supports access to email over the phone. We show how ELVIS can learn to choose among alternate strategies for agent initiative, for reading messages, and for summarizing email folders. ",
"neighbors": [
552,
2480
],
"mask": "Train"
},
{
"node_id": 654,
"label": 5,
"text": "Title: Incremental Reduced Error Pruning \nAbstract: This paper outlines some problems that may occur with Reduced Error Pruning in Inductive Logic Programming, most notably efficiency. Thereafter a new method, Incremental Reduced Error Pruning, is proposed that attempts to address all of these problems. Experiments show that in many noisy domains this method is much more efficient than alternative algorithms, along with a slight gain in accuracy. However, the experiments show as well that the use of this algorithm cannot be recommended for domains with a very specific concept description. ",
"neighbors": [
135,
1260
],
"mask": "Train"
},
{
"node_id": 655,
"label": 2,
"text": "Title: Determining Mental State from EEG Signals Using Parallel Implementations of Neural Networks \nAbstract: EEG analysis has played a key role in the modeling of the brain's cortical dynamics, but relatively little effort has been devoted to developing EEG as a limited means of communication. If several mental states can be reliably distinguished by recognizing patterns in EEG, then a paralyzed person could communicate to a device like a wheelchair by composing sequences of these mental states. EEG pattern recognition is a difficult problem and hinges on the success of finding representations of the EEG signals in which the patterns can be distinguished. In this article, we report on a study comparing three EEG representations, the unprocessed signals, a reduced-dimensional representation using the Karhunen-Loeve transform, and a frequency-based representation. Classification is performed with a two-layer neural network implemented on a CNAPS server (128 processor, SIMD architecture) by Adaptive Solutions, Inc.. Execution time comparisons show over a hundred-fold speed up over a Sun Sparc 10. The best classification accuracy on untrained samples is 73% using the frequency-based representation. ",
"neighbors": [
198,
747
],
"mask": "Train"
},
{
"node_id": 656,
"label": 4,
"text": "Title: Reinforcement Learning: A Survey \nAbstract: This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word \"reinforcement.\" The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning.",
"neighbors": [
148,
650,
657,
773
],
"mask": "Validation"
},
{
"node_id": 657,
"label": 4,
"text": "Title: Adding Memory to XCS \nAbstract: We add internal memory to the XCS classifier system. We then test XCS with internal memory, named XCSM, in non-Markovian environments with two and four aliasing states. Experimental results show that XCSM can easily converge to optimal solutions in simple environments; moreover, XCSM's performance is very stable with respect to the size of the internal memory involved in learning. However, the results we present evidence that in more complex non-Markovian environments, XCSM may fail to evolve an optimal solution. Our results suggest that this happens because, the exploration strategies currently employed with XCS, are not adequate to guarantee the convergence to an optimal policy with XCSM, in complex non-Markovian environments. ",
"neighbors": [
656,
1581
],
"mask": "Train"
},
{
"node_id": 658,
"label": 1,
"text": "Title: Hill Climbing with Learning (An Abstraction of Genetic Algorithm) \nAbstract: Simple modification of standard hill climbing optimization algorithm by taking into account learning features is discussed. Basic concept of this approach is the socalled probability vector, its single entries determine probabilities of appearance of '1' entries in n-bit vectors. This vector is used for the random generation of n-bit vectors that form a neighborhood (specified by the given probability vector). Within the neighborhood a few best solutions (with smallest functional values of a minimized function) are recorded. The feature of learning is introduced here so that the probability vector is updated by a formal analogue of Hebbian learning rule, well-known in the theory of artificial neural networks. The process is repeated until the probability vector entries are close either to zero or to one. The resulting probability vector unambiguously determines an n-bit vector which may be interpreted as an optimal solution of the given optimization task. Resemblance with genetic algorithms is discussed. Effectiveness of the proposed method is illustrated by an example of looking for global minima of a highly multimodal function. ",
"neighbors": [
163,
427,
1577
],
"mask": "Train"
},
{
"node_id": 659,
"label": 2,
"text": "Title: Trading Spaces: Computation, Representation and the Limits of Uninformed Learning \nAbstract: fl Research on this paper was partly supported by a Senior Research Leave fellowship granted by the Joint Council (SERC/MRC/ESRC) Cognitive Science Human Computer Interaction Initiative to one of the authors (Clark). Thanks to the Initiative for that support. y The order of names is arbitrary. ",
"neighbors": [
11,
163,
397,
624,
625,
695
],
"mask": "Test"
},
{
"node_id": 660,
"label": 6,
"text": "Title: Efficient Algorithms for Identifying Relevant Features \nAbstract: This paper describes efficient methods for exact and approximate implementation of the MIN-FEATURES bias, which prefers consistent hypotheses definable over as few features as possible. This bias is useful for learning domains where many irrelevant features are present in the training data. We first introduce FOCUS-2, a new algorithm that exactly implements the MIN-FEATURES bias. This algorithm is empirically shown to be substantially faster than the FOCUS algorithm previously given in [ Al-muallim and Dietterich, 1991 ] . We then introduce the Mutual-Information-Greedy, Simple-Greedy and Weighted-Greedy algorithms, which apply efficient heuristics for approximating the MIN-FEATURES bias. These algorithms employ greedy heuristics that trade optimality for computational efficiency. Experimental studies show that the learning performance of ID3 is greatly improved when these algorithms are used to preprocess the training data by eliminating the irrelevant features from ID3's consideration. In particular, the Weighted-Greedy algorithm provides an excellent and efficient approximation of the MIN ",
"neighbors": [
635
],
"mask": "Validation"
},
{
"node_id": 661,
"label": 3,
"text": "Title: A Statistical Approach to Decision Tree Modeling \nAbstract: A statistical approach to decision tree modeling is described. In this approach, each decision in the tree is modeled parametrically as is the process by which an output is generated from an input and a sequence of decisions. The resulting model yields a likelihood measure of goodness of fit, allowing ML and MAP estimation techniques to be utilized. An efficient algorithm is presented to estimate the parameters in the tree. The model selection problem is presented and several alternative proposals are considered. A hidden Markov version of the tree is described for data sequences that have temporal dependencies.",
"neighbors": [
71,
74,
76,
1191
],
"mask": "Train"
},
{
"node_id": 662,
"label": 2,
"text": "Title: Free Energy Minimization Algorithm for Decoding and Cryptanalysis three binary vectors: s of length N\nAbstract: where A is a binary matrix. Our task is to infer s given z and A, and given assumptions about the statistical properties of s and n. This problem arises in the decoding of a noisy signal transmitted using a linear code A, and in the inference of the sequence of a linear feedback shift register (LFSR) from noisy observations [1, 2]. ",
"neighbors": [
181,
518
],
"mask": "Validation"
},
{
"node_id": 663,
"label": 2,
"text": "Title: Perceptual Development and Learning: From Behavioral, Neurophysiological, and Morphological Evidence To Computational Models \nAbstract: An intelligent system has to be capable of adapting to a constantly changing environment. It therefore, ought to be capable of learning from its perceptual interactions with its surroundings. This requires a certain amount of plasticity in its structure. Any attempt to model the perceptual capabilities of a living system or, for that matter, to construct a synthetic system of comparable abilities, must therefore, account for such plasticity through a variety of developmental and learning mechanisms. This paper examines some results from neuroanatomical, morphological, as well as behavioral studies of the development of visual perception; integrates them into a computational framework; and suggests several interesting experiments with computational models that can yield insights into the development of visual perception. In order to understand the development of information processing structures in the brain, one needs knowledge of changes it undergoes from birth to maturity in the context of a normal environment. However, knowledge of its development in aberrant settings is also extremely useful, because it reveals the extent to which the development is a function of environmental experience (as opposed to genetically determined pre-wiring). Accordingly, we consider development of the visual system under both normal and restricted rearing conditions. The role of experience in the early development of the sensory systems in general, and the visual system in particular, has been widely studied through a variety of experiments involving carefully controlled manipulation of the environment presented to an animal. Extensive reviews of such results can be found in (Mitchell, 1984; Movshon, 1981; Hirsch, 1986; Boothe, 1986; Singer, 1986). Some examples of manipulation of visual experience are total pattern deprivation (e.g., dark rearing), selective deprivation of a certain class of patterns (e.g., vertical lines), monocular deprivation in animals with binocular vision, etc. Extensive studies involving behavioral deficits resulting from total visual pattern deprivation indicate that the deficits arise primarily as a result of impairment of visual information processing in the brain. The results of these experiments suggest specific developmental or learning mechanisms that may be operating at various stages of development, and at different levels in the system. We will discuss some of these hhhhhhhhhhhhhhh This is a working draft. All comments, especially constructive criticism and suggestions for improvement, will be appreciated. I am indebted to Prof. James Dannemiller for introducing me to some of the literature in infant development; to Prof. Leonard Uhr for his helpful comments on an initial draft of the paper; and to numerous researchers whose experimental work has provided the basis for the model outlined in this paper. This research was partially supported by grants from the National Science Foundation and the University of Wisconsin Graduate School. ",
"neighbors": [
496,
501,
2393
],
"mask": "Train"
},
{
"node_id": 664,
"label": 2,
"text": "Title: Diffusion of Context and Credit Information in Markovian Models \nAbstract: This paper studies the problem of ergodicity of transition probability matrices in Marko-vian models, such as hidden Markov models (HMMs), and how it makes very difficult the task of learning to represent long-term context for sequential data. This phenomenon hurts the forward propagation of long-term context information, as well as learning a hidden state representation to represent long-term context, which depends on propagating credit information backwards in time. Using results from Markov chain theory, we show that this problem of diffusion of context and credit is reduced when the transition probabilities approach 0 or 1, i.e., the transition probability matrices are sparse and the model essentially deterministic. The results found in this paper apply to learning approaches based on continuous optimization, such as gradient descent and the Baum-Welch algorithm. ",
"neighbors": [
350,
978
],
"mask": "Test"
},
{
"node_id": 665,
"label": 2,
"text": "Title: Requirements and use of neural networks for industrial appli- \nAbstract: Modern industry of today needs flexible, adaptive and fault-tolerant methods for information processing. Several applications have shown that neural networks fulfill these requirements. In this paper application areas, in which neural networks have been successfully used, are presented. Then a kind of check list is described, which mentioned the different steps, when applying neural networks. The paper finished with a discussion of some neural networks projects done in the research group Interactive Planning at the Research Center for Computer Science (FZI). ",
"neighbors": [
294,
427,
747
],
"mask": "Train"
},
{
"node_id": 666,
"label": 2,
"text": "Title: NeuroPipe a neural network based system for pipeline inspec- \nAbstract: Gas, oil and other pipelines need to be inspected for corrosion and other defects at regular intervals. For this application Pipetronix GmbH (PTX) Karlsruhe has developed a special ultrasonic based probe. Based on the recorded wall thicknesses of this so called pipe pig the Research center for computer science (FZI) has developed in cooperation with PTX an automatic inspection system called NeuroPipe. NeuroPipe has the task to detect defects like metal loss. The kernel of this inspection tool is a neural classifier which was trained using manually collected defect examples. The following paper focus on the aspects of successfull use of learning methods in an industrial application. ",
"neighbors": [
745
],
"mask": "Train"
},
{
"node_id": 667,
"label": 2,
"text": "Title: Recognizing Handwritten Digits Using Mixtures of Linear Models \nAbstract: We construct a mixture of locally linear generative models of a collection of pixel-based images of digits, and use them for recognition. Different models of a given digit are used to capture different styles of writing, and new images are classified by evaluating their log-likelihoods under each model. We use an EM-based algorithm in which the M-step is computationally straightforward principal components analysis (PCA). Incorporating tangent-plane information [12] about expected local deformations only requires adding tangent vectors into the sample covariance matrices for the PCA, and it demonstrably improves performance.",
"neighbors": [
257,
480,
867,
1356,
1928,
1974,
2072,
2227,
2270,
2570
],
"mask": "Train"
},
{
"node_id": 668,
"label": 2,
"text": "Title: Nonlinear gated experts for time series: discovering regimes and avoiding overfitting \nAbstract: In: International Journal of Neural Systems 6 (1995) p. 373 - 399. URL of this paper: ftp://ftp.cs.colorado.edu/pub/Time-Series/MyPapers/experts.ps.Z, or http://www.cs.colorado.edu/~andreas/Time-Series/MyPapers/experts.ps.Z University of Colorado Computer Science Technical Report CU-CS-798-95. In the analysis and prediction of real-world systems, two of the key problems are nonstationarity(often in the form of switching between regimes), and overfitting (particularly serious for noisy processes). This article addresses these problems using gated experts, consisting of a (nonlinear) gating network, and several (also nonlinear) competing experts. Each expert learns to predict the conditional mean, and each expert adapts its width to match the noise level in its regime. The gating network learns to predict the probability of each expert, given the input. This article focuses on the case where the gating network bases its decision on information from the inputs. This can be contrasted to hidden Markov models where the decision is based on the previous state(s) (i.e., on the output of the gating network at the previous time step), as well as to averaging over several predictors. In contrast, gated experts soft-partition the input space. This article discusses the underlying statistical assumptions, derives the weight update rules, and compares the performance of gated experts to standard methods on three time series: (1) a computer-generated series, obtained by randomly switching between two nonlinear processes, (2) a time series from the Santa Fe Time Series Competition (the light intensity of a laser in chaotic state), and (3) the daily electricity demand of France, a real-world multivariate problem with structure on several time scales. The main results are (1) the gating network correctly discovers the different regimes of the process, (2) the widths associated with each expert are important for the segmentation task (and they can be used to characterize the sub-processes), and (3) there is less overfitting compared to single networks (homogeneous multi-layer perceptrons), since the experts learn to match their variances to the (local) noise levels. This can be viewed as matching the local complexity of the model to the local complexity of the data. ",
"neighbors": [
263,
310,
604,
975,
1315,
1508,
2413,
2414,
2562,
2683
],
"mask": "Train"
},
{
"node_id": 669,
"label": 5,
"text": "Title: Drug design by machine learning: Modelling drug activity \nAbstract: This paper describes an approach to modelling drug activity using machine learning tools. Some experiments in modelling the quantitative structure-activity relationship (QSAR) using a standard, Hansch, method and a machine learning system Golem were already reported in the literature. The paper describes the results of applying two other machine learning systems, Magnus Assistant and Retis, on the same data. The results achieved by the machine learning systems, are better then the results of the Hansch method; therefore, machine learning tools can be considered as very promising for solving that kind of problems. The given results also illustrate the variations of performance of the different machine learning systems applied to this drug design problem.",
"neighbors": [
509
],
"mask": "Validation"
},
{
"node_id": 670,
"label": 0,
"text": "Title: Rule Based Database Integration in HIPED Heterogeneous Intelligent Processing in Engineering Design \nAbstract: In this paper 1 we describe one aspect of our research in the project called HIPED, which addressed the problem of performing design of engineering devices by accessing heterogeneous databases. The front end of the HIPED system consisted of interactive KRI-TIK, a multimodal reasoning system that combined case based and model based reasoning to solve a design problem. This paper focuses on the backend processing where five types of queries received from the front end are evaluated by mapping them appropriately using the \"facts\" about the schemas of the underlying databases and \"rules\" that establish the correspondance among the data in these databases in terms of relationships such as equivalence, overlap and set containment. The uniqueness of our approach stems from the fact that the mapping process is very forgiving in that the query received from the front end is evaluated with respect to a large number of possibilities. These possibilities are encoded in the form of rules that consider various ways in which the tokens in the given query may match relation names, attrribute names, or values in the underlying tables. The approach has been implemented using CORAL deductive database system as the rule processing engine. ",
"neighbors": [
603
],
"mask": "Train"
},
{
"node_id": 671,
"label": 4,
"text": "Title: Learning to Achieve Goals \nAbstract: Temporal difference methods solve the temporal credit assignment problem for reinforcement learning. An important subproblem of general reinforcement learning is learning to achieve dynamic goals. Although existing temporal difference methods, such as Q learning, can be applied to this problem, they do not take advantage of its special structure. This paper presents the DG-learning algorithm, which learns efficiently to achieve dynamically changing goals and exhibits good knowledge transfer between goals. In addition, this paper shows how traditional relaxation techniques can be applied to the problem. Finally, experimental results are given that demonstrate the superiority of DG learning over Q learning in a moderately large, synthetic, non-deterministic domain.",
"neighbors": [
552,
562,
565,
566,
1459,
1599
],
"mask": "Test"
},
{
"node_id": 672,
"label": 6,
"text": "Title: Cryptographic Limitations on Learning Boolean Formulae and Finite Automata \nAbstract: In this paper we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent, in that they hold regardless of the syntactic form in which the learner chooses to represent its hypotheses. Our methods reduce the problems of cracking a number of well-known public-key cryptosys- tems to the learning problems. We prove that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory: in particular, such an algorithm could be used to break the RSA cryptosystem, factor Blum integers (composite numbers equivalent to 3 modulo 4), and detect quadratic residues. The results hold even if the learning algorithm is only required to obtain a slight advantage in prediction over random guessing. The techniques used demonstrate an interesting duality between learning and cryptography. We also apply our results to obtain strong intractability results for approximating a gener - alization of graph coloring. fl This research was conducted while the author was at Harvard University and supported by an A.T.& T. Bell Laboratories scholarship. y Supported by grants ONR-N00014-85-K-0445, NSF-DCR-8606366 and NSF-CCR-89-02500, DAAL03-86-K-0171, DARPA AFOSR 89-0506, and by SERC. ",
"neighbors": [
109,
130,
456,
535,
549,
574,
591,
924,
1003,
1004,
1293,
1343,
1363,
1386,
1460,
2004,
2040,
2329,
2360,
2653,
2696
],
"mask": "Train"
},
{
"node_id": 673,
"label": 2,
"text": "Title: Increasing Consensus Accuracy in DNA Fragment Assemblies by Incorporating Fluorescent Trace Representations \nAbstract: We present a new method for determining the consensus sequence in DNA fragment assemblies. The new method, Trace-Evidence, directly incorporates aligned ABI trace information into consensus calculations via our previously described representation, TraceData Classifications. The new method extracts and sums evidence indicated by the representation to determine consensus calls. Using the Trace-Evidence method results in automatically produced consensus sequences that are more accurate and less ambiguous than those produced with standard majority- voting methods. Additionally, these improvements are achieved with less coverage than required by the standard methods using Trace-Evidence and a coverage of only three, error rates are as low as those with a coverage of over ten sequences. ",
"neighbors": [
194
],
"mask": "Test"
},
{
"node_id": 674,
"label": 2,
"text": "Title: Transfer in a Connectionist Model of the Acquisition of Morphology \nAbstract: The morphological systems of natural languages are replete with examples of the same devices used for multiple purposes: (1) the same type of morphological process (for example, suffixation for both noun case and verb tense) and (2) identical morphemes (for example, the same suffix for English noun plural and possessive). These sorts of similarity would be expected to convey advantages on language learners in the form of transfer from one morphological category to another. Connectionist models of morphology acquisition have been faulted for their supposed inability to represent phonological similarity across morphological categories and hence to facilitate transfer. This paper describes a connectionist model of the acquisition of morphology which is shown to exhibit transfer of this type. The model treats the morphology acquisition problem as one of learning to map forms onto meanings and vice versa. As the network learns these mappings, it makes phonological generalizations which are embedded in connection weights. Since these weights are shared by different morphological categories, transfer is enabled. In a set of experiments with artificial stimuli, networks were trained first on one morphological task (e.g., tense) and then on a second (e.g., number). It is shown that in the context of suffixation, prefixation, and template rules, the second task is facilitated when the second category either makes use of the same forms or the same general process type (e.g., prefixation) as the first. ",
"neighbors": [
427
],
"mask": "Train"
},
{
"node_id": 675,
"label": 5,
"text": "Title: Combining FOIL and EBG to Speed-up Logic Programs \nAbstract: This paper presents an algorithm that combines traditional EBL techniques and recent developments in inductive logic programming to learn effective clause selection rules for Prolog programs. When these control rules are incorporated into the original program, significant speed-up may be achieved. The algorithm is shown to be an improvement over competing EBL approaches in several domains. Additionally, the algorithm is capable of automatically transforming some intractable algorithms into ones that run in polynomial time.",
"neighbors": [
344,
597,
1429,
1442,
1445,
2215,
2339
],
"mask": "Validation"
},
{
"node_id": 676,
"label": 2,
"text": "Title: A Model of Invariant Object Recognition in the Visual System \nAbstract: Neurons in the ventral stream of the primate visual system exhibit responses to the images of objects which are invariant with respect to natural transformations such as translation, size, and view. Anatomical and neurophysiological evidence suggests that this is achieved through a series of hierarchical processing areas. In an attempt to elucidate the manner in which such representations are established, we have constructed a model of cortical visual processing which seeks to parallel many features of this system, specifically the multi-stage hierarchy with its topologically constrained convergent connectivity. Each stage is constructed as a competitive network utilising a modified Hebb-like learning rule, called the trace rule, which incorporates previous as well as current neuronal activity. The trace rule enables neurons to learn about whatever is invariant over short time periods (e.g. 0.5 s) in the representation of objects as the objects transform in the real world. The trace rule enables neurons to learn the statistical invariances about objects during their transformations, by associating together representations which occur close together in time. We show that by using the trace rule training algorithm the model can indeed learn to produce transformation invariant responses to natural stimuli such as faces.",
"neighbors": [
605,
1014,
1056,
1091
],
"mask": "Train"
},
{
"node_id": 677,
"label": 2,
"text": "Title: Recurrent Neural Networks for Missing or Asynchronous Data \nAbstract: In this paper we propose recurrent neural networks with feedback into the input units for handling two types of data analysis problems. On the one hand, this scheme can be used for static data when some of the input variables are missing. On the other hand, it can also be used for sequential data, when some of the input variables are missing or are available at different frequencies. Unlike in the case of probabilistic models (e.g. Gaussian) of the missing variables, the network does not attempt to model the distribution of the missing variables given the observed variables. Instead it is a more \"discriminant\" approach that fills in the missing variables for the sole purpose of minimizing a learning criterion (e.g., to minimize an output error).",
"neighbors": [
71
],
"mask": "Train"
},
{
"node_id": 678,
"label": 2,
"text": "Title: Algebraic Transformations of Objective Functions \nAbstract: Many neural networks can be derived as optimization dynamics for suitable objective functions. We show that such networks can be designed by repeated transformations of one objective into another with the same fixpoints. We exhibit a collection of algebraic transformations which reduce network cost and increase the set of objective functions that are neurally implementable. The transformations include simplification of products of expressions, functions of one or two expressions, and sparse matrix products (all of which may be interpreted as Legendre transformations); also the minimum and maximum of a set of expressions. These transformations introduce new interneurons which force the network to seek a saddle point rather than a minimum. Other transformations allow control of the network dynamics, by reconciling the Lagrangian formalism with the need for fixpoints. We apply the transformations to simplify a number of structured neural networks, beginning with the standard reduction of the winner-take-all network from O(N 2 ) connections to O(N ). Also susceptible are inexact graph-matching, random dot matching, convolutions and coordinate transformations, and sorting. Simulations show that fixpoint-preserving transformations may be applied repeatedly and elaborately, and the example networks still robustly converge. ",
"neighbors": [
528
],
"mask": "Validation"
},
{
"node_id": 679,
"label": 0,
"text": "Title: Developing Case-Based Reasoning for Structural Design \nAbstract: The use of case-based reasoning as a process model of design involves the subtasks of recalling previously known designs from memory and adapting these design cases or subcases to fit the current design context. The development of this process model for a particular design domain proceeds in parallel with the development of a representation for the cases, the case memory organisation, and the design knowledge needed in addition to specific designs. The selection of a particular representational paradigm for these types of information, and the details of its use for a particular problemsolving domain, depend on the intended use of the information to be represented and the project information available, as well as the nature of the domain. In this paper we describe the development and implementation of four case-based design systems: CASECAD, CADSYN, WIN, and DEMEX. Each system is described in terms of the content, organisation, and source of case memory, and the implementation of case recall and case adaptation. A comparison of these systems considers the relative advantages and disadvantages of the implementations. ",
"neighbors": [
30,
32
],
"mask": "Train"
},
{
"node_id": 680,
"label": 2,
"text": "Title: Cortical Mechanisms of Visual Recognition and Learning: A Hierarchical Kalman Filter Model \nAbstract: We describe a biologically plausible model of dynamic recognition and learning in the visual cortex based on the statistical theory of Kalman filtering from optimal control theory. The model utilizes a hierarchical network whose successive levels implement Kalman filters operating over successively larger spatial and temporal scales. Each hierarchical level in the network predicts the current visual recognition state at a lower level and adapts its own recognition state using the residual error between the prediction and the actual lower-level state. Simultaneously, the network also learns an internal model of the spatiotemporal dynamics of the input stream by adapting the synaptic weights at each hierarchical level in order to minimize prediction errors. The Kalman filter model respects key neuroanatomical data such as the reciprocity of connections between visual cortical areas, and assigns specific computational roles to the inter-laminar connections known to exist between neurons in the visual cortex. Previous work elucidated the usefulness of this model in explaining neurophysiological phenomena such as endstopping and other related extra-classical receptive field effects. In this paper, in addition to providing a more detailed exposition of the model, we present a variety of experimental results demonstrating the ability of this model to perform robust spatiotemporal segmentation and recognition of objects and image sequences in the presence of varying amounts of occlusion, background clutter, and noise. ",
"neighbors": [
74,
90,
608,
747
],
"mask": "Test"
},
{
"node_id": 681,
"label": 1,
"text": "Title: Guiding or Hiding: Explorations into the Effects of Learning on the Rate of Evolution. \nAbstract: Individual lifetime learning can `guide' an evolving population to areas of high fitness in genotype space through an evolutionary phenomenon known as the Baldwin effect (Baldwin, 1896; Hin-ton & Nowlan, 1987). It is the accepted wisdom that this guiding speeds up the rate of evolution. By highlighting another interaction between learning and evolution, that will be termed the Hiding effect, it will be argued here that this depends on the measure of evolutionary speed one adopts. The Hiding effect shows that learning can reduce the selection pressure between individuals by `hiding' their genetic differences. There is thus a trade-off between the Baldwin effect and the Hiding effect to determine learning's influence on evolution and two factors that contribute to this trade-off, the cost of learning and landscape epis tasis, are investigated experimentally.",
"neighbors": [
402,
712,
2309
],
"mask": "Validation"
},
{
"node_id": 682,
"label": 4,
"text": "Title: Memory Based Stochastic Optimization for Validation and Tuning of Function Approximators \nAbstract: This paper focuses on the optimization of hyper-parameters for function approximators. We describe a kind of racing algorithm for continuous optimization problems that spends less time evaluating poor parameter settings and more time honing its estimates in the most promising regions of the parameter space. The algorithm is able to automatically optimize the parameters of a function approximator with less computation time. We demonstrate the algorithm on the problem of finding good parameters for a memory based learner and show the tradeoffs involved in choosing the right amount of computation to spend on each evaluation.",
"neighbors": [
88,
1791,
2430
],
"mask": "Train"
},
{
"node_id": 683,
"label": 0,
"text": "Title: On the Greediness of Feature Selection Algorithms \nAbstract: Based on our analysis and experiments using real-world datasets, we find that the greediness of forward feature selection algorithms does not severely corrupt the accuracy of function approximation using the selected input features, but improves the efficiency significantly. Hence, we propose three greedier algorithms in order to further enhance the efficiency of the feature selection processing. We provide empirical results for linear regression, locally weighted regression and k-nearest-neighbor models. We also propose to use these algorithms to develop an offline Chinese and Japanese handwriting recognition system with auto matically configured, local models. ",
"neighbors": [
430,
632,
635,
686,
1860
],
"mask": "Train"
},
{
"node_id": 684,
"label": 3,
"text": "Title: Finding Overlapping Distributions with MML \nAbstract: This paper considers an aspect of mixture modelling. Significantly overlapping distributions require more data for their parameters to be accurately estimated than well separated distributions. For example, two Gaussian distributions are considered to significantly overlap when their means are within three standard deviations of each other. If insufficient data is available, only a single component distribution will be estimated, although the data originates from two component distributions. We consider how much data is required to distinguish two component distributions from one distribution in mixture modelling using the minimum message length (MML) criterion. First, we perform experiments which show the MML criterion performs well relative to other Bayesian criteria. Second, we make two improvements to the existing MML estimates, that improve its performance with overlapping distributions. ",
"neighbors": [
161,
525,
779,
1425,
1550
],
"mask": "Train"
},
{
"node_id": 685,
"label": 2,
"text": "Title: Performance of the GCel-512 and PowerXPlorer for parallel neural network simulations \nAbstract: This report presents new results from work performed in the framework of the IC 3 A pro-gramme. Using the GCel-512 and the PowerXPlorer made available by the UvA, a performance prediction model for several neural network simulations could be validated quantitatively both for a larger processor grid and for a different target parallel processor configuration. The performance prediction model and its application on a popular neural network model | backpropagation | decomposed via network decomposition will be discussed here. Using the model, the suitability of the GCel-512 and PowerXPlorer are discussed in terms of performance, speedup, efficiency and scalability.",
"neighbors": [
217,
747
],
"mask": "Validation"
},
{
"node_id": 686,
"label": 0,
"text": "Title: Prototype and Feature Selection by Sampling and Random Mutation Hill Climbing Algorithms \nAbstract: With the goal of reducing computational costs without sacrificing accuracy, we describe two algorithms to find sets of prototypes for nearest neighbor classification. Here, the term prototypes refers to the reference instances used in a nearest neighbor computation the instances with respect to which similarity is assessed in order to assign a class to a new data item. Both algorithms rely on stochastic techniques to search the space of sets of prototypes and are simple to implement. The first is a Monte Carlo sampling algorithm; the second applies random mutation hill climbing. On four datasets we show that only three or four prototypes sufficed to give predictive accuracy equal or superior to a basic nearest neighbor algorithm whose run-time storage costs were approximately 10 to 200 times greater. We briefly investigate how random mutation hill climbing may be applied to select features and prototypes simultaneously. Finally, we explain the performance of the sampling algorithm on these datasets in terms of a statistical measure of the extent of clustering displayed by the target classes.",
"neighbors": [
44,
116,
119,
223,
225,
318,
381,
430,
634,
635,
683,
760,
1273,
1698,
2541,
2557
],
"mask": "Train"
},
{
"node_id": 687,
"label": 2,
"text": "Title: Growing Cell Structures A Self-organizing Network for Unsupervised and Supervised Learning \nAbstract: We present a new self-organizing neural network model having two variants. The first variant performs unsupervised learning and can be used for data visualization, clustering, and vector quantization. The main advantage over existing approaches, e.g., the Kohonen feature map, is the ability of the model to automatically find a suitable network structure and size. This is achieved through a controlled growth process which also includes occasional removal of units. The second variant of the model is a supervised learning method which results from the combination of the abovementioned self-organizing network with the radial basis function (RBF) approach. In this model it is possible in contrast to earlier approaches toperform the positioning of the RBF units and the supervised training of the weights in parallel. Therefore, the current classification error can be used to determine where to insert new RBF units. This leads to small networks which generalize very well. Results on the two-spirals benchmark and a vowel classification problem are presented which are better than any results previously published. fl submitted for publication",
"neighbors": [
110,
626,
741,
745,
1157,
1564,
1700,
1704,
2381,
2683
],
"mask": "Train"
},
{
"node_id": 688,
"label": 4,
"text": "Title: Exploration and Model Building in Mobile Robot Domains \nAbstract: I present first results on COLUMBUS, an autonomous mobile robot. COLUMBUS operates in initially unknown, structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments the robot is assumed to face. Once trained, these networks allow for knowledge transfer across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations. Exploration is achieved by navigating to low confidence regions. An efficient dynamic programming method is employed in background to find minimal-cost paths that, executed by the robot, maximize exploration. COLUMBUS operates in real-time. It has been operating successfully in an office building environment for periods up to hours.",
"neighbors": [
60,
207,
252,
412,
552,
566,
835,
1248
],
"mask": "Train"
},
{
"node_id": 689,
"label": 1,
"text": "Title: Where Genetic Algorithms Excel \nAbstract: We analyze the performance of a Genetic Algorithm (GA) we call Culling and a variety of other algorithms on a problem we refer to as Additive Search Problem (ASP). ASP is closely related to several previously well studied problems, such as the game of Mastermind and additive fitness functions. We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Culling is efficient on ASP, highly noise tolerant, and the best known approach in some regimes. Noisy ASP is the first problem we are aware of where a Genetic Type Algorithm bests all known competitors. Standard GA's, by contrast, perform much more poorly on ASP than hillclimbing and other approaches even though the Schema theorem holds for ASP. We generalize ASP to k-ASP to study whether GA's will achieve `implicit parallelism' in a problem with many more schemata. GA's fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a Mean Field Theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GA's can beat competing methods. ",
"neighbors": [
163,
967,
1580,
1826
],
"mask": "Train"
},
{
"node_id": 690,
"label": 0,
"text": "Title: Instance Pruning Techniques \nAbstract: The nearest neighbor algorithm and its derivatives are often quite successful at learning a concept from a training set and providing good generalization on subsequent input vectors. However, these techniques often retain the entire training set in memory, resulting in large memory requirements and slow execution speed, as well as a sensitivity to noise. This paper provides a discussion of issues related to reducing the number of instances retained in memory while maintaining (and sometimes improving) generalization accuracy, and mentions algorithms other researchers have used to address this problem. It presents three intuitive noise-tolerant algorithms that can be used to prune instances from the training set. In experiments on 29 applications, the algorithm that achieves the highest reduction in storage also results in the highest generalization accuracy of the three methods.",
"neighbors": [
445,
2457
],
"mask": "Train"
},
{
"node_id": 691,
"label": 4,
"text": "Title: Reinforcement Learning in the Multi-Robot Domain \nAbstract: This paper describes a formulation of reinforcement learning that enables learning in noisy, dynamic environemnts such as in the complex concurrent multi-robot learning domain. The methodology involves minimizing the learning space through the use behaviors and conditions, and dealing with the credit assignment problem through shaped reinforcement in the form of heterogeneous reinforcement functions and progress estimators. We experimentally validate the ap proach on a group of four mobile robots learning a foraging task.",
"neighbors": [
552,
565,
1557,
1649,
2658
],
"mask": "Test"
},
{
"node_id": 692,
"label": 6,
"text": "Title: Decision Tree Induction: How Effective is the Greedy Heuristic? \nAbstract: Most existing decision tree systems use a greedy approach to induce trees | locally optimal splits are induced at every node of the tree. Although the greedy approach is suboptimal, it is believed to produce reasonably good trees. In the current work, we attempt to verify this belief. We quantify the goodness of greedy tree induction empirically, using the popular decision tree algorithms, C4.5 and CART. We induce decision trees on thousands of synthetic data sets and compare them to the corresponding optimal trees, which in turn are found using a novel map coloring idea. We measure the effect on greedy induction of variables such as the underlying concept complexity, training set size, noise and dimensionality. Our experiments show, among other things, that the expected classification cost of a greedily induced tree is consistently very close to that of the optimal tree. ",
"neighbors": [
296,
378,
438,
1191
],
"mask": "Train"
},
{
"node_id": 693,
"label": 2,
"text": "Title: FURTHER FACTS ABOUT INPUT TO STATE STABILIZATION \"Further facts about input to state stabilization\", IEEE\nAbstract: Report SYCON-88-15 ABSTRACT Previous results about input to state stabilizability are shown to hold even for systems which are not linear in controls, provided that a more general type of feedback be allowed. Applications to certain stabilization problems and coprime factorizations, as well as comparisons to other results on input to state stability, are also briefly discussed. ",
"neighbors": [
531,
648,
1471,
1501
],
"mask": "Train"
},
{
"node_id": 694,
"label": 2,
"text": "Title: Localist Attractor Networks \nAbstract: Attractor networks, which map a continuous input space to a discrete output space, are useful for pattern completion, cleaning up noisy or missing features in an input. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor nets have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters. We present simulation experiments that explore the behavior of lo-calist attractor nets, showing that they produce a gang effectthe presence of an attractor enhances the attractor basins of neighboring attractorsand that spurious attractors occur only at points of symmetry in state space.",
"neighbors": [
76,
250
],
"mask": "Test"
},
{
"node_id": 695,
"label": 6,
"text": "Title: There is No Free Lunch but the Starter is Cheap: Generalisation from First Principles \nAbstract: According to Wolpert's no-free-lunch (NFL) theorems [1, 2], gener-alisation in the absence of domain knowledge is necessarily a zero-sum enterprise. Good generalisation performance in one situation is always offset by bad performance in another. Wolpert notes that the theorems do not demonstrate that effective generalisation is a logical impossibility but merely that a learner's bias (or assumption set) is of key importance ",
"neighbors": [
659,
747,
1967
],
"mask": "Validation"
},
{
"node_id": 696,
"label": 2,
"text": "Title: GAL: Networks that grow when they learn and shrink when they forget \nAbstract: Learning when limited to modification of some parameters has a limited scope; the capability to modify the system structure is also needed to get a wider range of the learnable. In the case of artificial neural networks, learning by iterative adjustment of synaptic weights can only succeed if the network designer predefines an appropriate network structure, i.e., number of hidden layers, units, and the size and shape of their receptive and projective fields. This paper advocates the view that the network structure should not, as usually done, be determined by trial-and-error but should be computed by the learning algorithm. Incremental learning algorithms can modify the network structure by addition and/or removal of units and/or links. A survey of current connectionist literature is given on this line of thought. \"Grow and Learn\" (GAL) is a new algorithm that learns an association at one-shot due to being incremental and using a local representation. During the so-called \"sleep\" phase, units that were previously stored but which are no longer necessary due to recent modifications are removed to minimize network complexity. The incrementally constructed network can later be finetuned off-line to improve performance. Another method proposed that greatly increases recognition accuracy is to train a number of networks and vote over their responses. The algorithm and its variants are tested on recognition of handwritten numerals and seem promising especially in terms of learning speed. This makes the algorithm attractive for on-line learning tasks, e.g., in robotics. The biological plausibility of incremental learning is also discussed briefly. Earlier part of this work was realized at the Laboratoire de Microinformatique of Ecole Polytechnique Federale de Lausanne and was supported by the Fonds National Suisse de la Recherche Scientifique. Later part was realized at and supported by the International Computer Science Institute. A number of people helped by guiding, stimulating discussions or questions: Subutai Ahmad, Peter Clarke, Jerry Feldman, Christian Jutten, Pierre Marchal, Jean Daniel Nicoud, Steve Omohondro and Leon Personnaz. ",
"neighbors": [
215,
427,
579,
747,
1672
],
"mask": "Test"
},
{
"node_id": 697,
"label": 3,
"text": "Title: Estimating Dependency Structure as a Hidden Variable \nAbstract: This paper introduces a probability model, the mixture of trees that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. This report describes research done at the Dept. of Electrical Engineering and Computer Science, the Dept. of Brain and Cognitive Sciences, the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense and by the Office of Naval Research. Michael I. Jordan is a NSF Presidential Young Investigator. The authors can be reached at M.I.T., Center for Biological and Computational Learning, 45 Carleton St., Cambridge MA 02142, USA. E-mail: mmp@ai.mit.edu, jordan@psyche.mit.edu, quaid@ai.mit.edu. ",
"neighbors": [
642
],
"mask": "Validation"
},
{
"node_id": 698,
"label": 2,
"text": "Title: Learning to Predict Reading Frames in E. coli DNA Sequences \nAbstract: Two fundamental problems in analyzing DNA sequences are (1) locating the regions of a DNA sequence that encode proteins, and (2) determining the reading frame for each region. We investigate using artificial neural networks (ANNs) to find coding regions, determine reading frames, and detect frameshift errors in E. coli DNA sequences. We describe our adaptation of the approach used by Uberbacher and Mural to identify coding regions in human DNA, and we compare the performance of ANNs to several conventional methods for predicting reading frames. Our experiments demonstrate that ANNs can outperform these conventional approaches. ",
"neighbors": [
360,
427,
474,
1431
],
"mask": "Test"
},
{
"node_id": 699,
"label": 4,
"text": "Title: Adaptive state space quantisation for reinforcement learning of collision-free navigation \nAbstract: The paper describes a self-learning control system for a mobile robot. Based on sensor information the control system has to provide a steering signal in such a way that collisions are avoided. Since in our case no `examples' are available, the system learns on the basis of an external reinforcement signal which is negative in case of a collision and zero otherwise. Rules from Temporal Difference learning are used to find the correct mapping between the (discrete) sensor input space and the steering signal. We describe the algorithm for learning the correct mapping from the input (state) vector to the output (steering) signal, and the algorithm which is used for a discrete coding of the input state space. ",
"neighbors": [
294,
566,
588,
747
],
"mask": "Test"
},
{
"node_id": 700,
"label": 6,
"text": "Title: Is Transfer Inductive? \nAbstract: Work is currently underway to devise learning methods which are better able to transfer knowledge from one task to another. The process of knowledge transfer is usually viewed as logically separate from the inductive procedures of ordinary learning. However, this paper argues that this `seperatist' view leads to a number of conceptual difficulties. It offers a task analysis which situates the transfer process inside a generalised inductive protocol. It argues that transfer should be viewed as a subprocess within induction and not as an independent procedure for transporting knowledge between learning trials.",
"neighbors": [
624,
625,
747,
879
],
"mask": "Train"
},
{
"node_id": 701,
"label": 2,
"text": "Title: Experiments on the Transfer of Knowledge between Neural Networks Reprinted from: Computational Learning Theory and\nAbstract: This chapter describes three studies which address the question of how neural network learning can be improved via the incorporation of information extracted from other networks. This general problem, which we call network transfer, encompasses many types of relationships between source and target networks. Our focus is on the utilization of weights from source networks which solve a subproblem of the target network task, with the goal of speeding up learning on the target task. We demonstrate how the approach described here can improve learning speed by up to ten times over learning starting with random weights. ",
"neighbors": [
15,
638,
1644
],
"mask": "Train"
},
{
"node_id": 702,
"label": 2,
"text": "Title: Automated Highway System \nAbstract: ALVINN (Autonomous Land Vehicle in a Neural Net) is a Backpropagation trained neural network which is capable of autonomously steering a vehicle in road and highway environments. Although ALVINN is fairly robust, one of the problems with it has been the time it takes to train. As the vehicle is capable of on-line learning, the driver has to drive the car for about 2 minutes before the network is capable of autonomous operation. One reason for this is the use of Backprop. In this report, we describe the original ALVINN system, and then look at three alternative training methods - Quickprop, Cascade Correlation, and Cascade 2. We then run a series of trials using Quickprop, Cascade Correlation and Cascade2, and compare them to a BackProp baseline. Finally, a hidden unit analysis is performed to determine what the network is learning. Applying Advanced Learning Algorithms to ALVINN ",
"neighbors": [
504
],
"mask": "Train"
},
{
"node_id": 703,
"label": 2,
"text": "Title: VECTOR ASSOCIATIVE MAPS: UNSUPERVISED REAL-TIME ERROR-BASED LEARNING AND CONTROL OF MOVEMENT TRAJECTORIES \nAbstract: ALVINN (Autonomous Land Vehicle in a Neural Net) is a Backpropagation trained neural network which is capable of autonomously steering a vehicle in road and highway environments. Although ALVINN is fairly robust, one of the problems with it has been the time it takes to train. As the vehicle is capable of on-line learning, the driver has to drive the car for about 2 minutes before the network is capable of autonomous operation. One reason for this is the use of Backprop. In this report, we describe the original ALVINN system, and then look at three alternative training methods - Quickprop, Cascade Correlation, and Cascade 2. We then run a series of trials using Quickprop, Cascade Correlation and Cascade2, and compare them to a BackProp baseline. Finally, a hidden unit analysis is performed to determine what the network is learning. Applying Advanced Learning Algorithms to ALVINN ",
"neighbors": [
747,
2233
],
"mask": "Train"
},
{
"node_id": 704,
"label": 3,
"text": "Title: EXPERIMENTING WITH THE CHEESEMAN-STUTZ EVIDENCE APPROXIMATION FOR PREDICTIVE MODELING AND DATA MINING \nAbstract: The work discussed in this paper is motivated by the need of building decision support systems for real-world problem domains. Our goal is to use these systems as a tool for supporting Bayes optimal decision making, where the action maximizing the expected utility, with respect to predicted probabilities of the possible outcomes, should be selected. For this reason, the models used need to be probabilistic in nature | the output of a model has to be a probability distribution, not just a set of numbers. For the model family, we have chosen the set of simple discrete finite mixture models which have the advantage of being computationally very efficient. In this work, we describe a Bayesian approach for constructing finite mixture models from sample data. Our approach is based on a two-phase unsupervised learning process which can be used both for exploratory analysis and model construction. In the first phase, the selection of a model class, i.e., the number of parameters, is performed by calculating the Cheeseman-Stutz approximation for the model class evidence. In the second phase, the MAP parameters in the selected class are estimated by the EM algorithm. In this framework, the overfitting problem common to many traditional learning approaches can be avoided, as the learning process automatically regulates the complexity of the model. This paper focuses on the model class selection phase and the approach is validated by presenting empirical results with both natural and synthetic data. ",
"neighbors": [
376
],
"mask": "Train"
},
{
"node_id": 705,
"label": 2,
"text": "Title: EXPERIMENTING WITH THE CHEESEMAN-STUTZ EVIDENCE APPROXIMATION FOR PREDICTIVE MODELING AND DATA MINING \nAbstract: TECHNICAL REPORT NO. 947 June 5, 1995 ",
"neighbors": [
192,
519,
1910
],
"mask": "Test"
},
{
"node_id": 706,
"label": 6,
"text": "Title: Large Margin Classification Using the Perceptron Algorithm \nAbstract: We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large margins. Compared to Vapnik's algorithm, however, ours is much simpler to implement, and much more efficient in terms of computation time. We also show that our algorithm can be efficiently used in very high dimensional spaces using kernel functions. We performed some experiments using our algorithm, and some variants of it, for classifying images of handwritten digits. The performance of our algorithm is close to, but not as good as, the performance of maximal-margin classifiers on the same problem.",
"neighbors": [
453
],
"mask": "Train"
},
{
"node_id": 707,
"label": 5,
"text": "Title: Converting Thread-Level Parallelism to Instruction-Level Parallelism via Simultaneous Multithreading \nAbstract: A version of this paper will appear in ACM Transactions on Computer Systems, August 1997. Permission to make digital copies of part or all of this work for personal or classroom use is grantedwithout fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Abstract To achieve high performance, contemporary computer systems rely on two forms of parallelism: instruction-level parallelism (ILP) and thread-level parallelism (TLP). Wide-issue superscalar processors exploit ILP by executing multiple instructions from a single program in a single cycle. Multiprocessors (MP) exploit TLP by executing different threads in parallel on different processors. Unfortunately, both parallel-processing styles statically partition processor resources, thus preventing them from adapting to dynamically-changing levels of ILP and TLP in a program. With insufficient TLP, processors in an MP will be idle; with insufficient ILP, multiple-issue hardware on a superscalar is wasted. This paper explores parallel processing on an alternative architecture, simultaneous multithreading (SMT), which allows multiple threads to compete for and share all of the processors resources every cycle. The most compelling reason for running parallel applications on an SMT processor is its ability to use thread-level parallelism and instruction-level parallelism interchangeably. By permitting multiple threads to share the processors functional units simultaneously, the processor can use both ILP and TLP to accommodate variations in parallelism. When a program has only a single thread, all of the SMT processors resources can be dedicated to that thread; when more TLP exists, this parallelism can compensate for a lack of ",
"neighbors": [
158,
195,
196
],
"mask": "Validation"
},
{
"node_id": 708,
"label": 2,
"text": "Title: GIBBS-MARKOV MODELS \nAbstract: In this paper we present a framework for building probabilistic automata parameterized by context-dependent probabilities. Gibbs distributions are used to model state transitions and output generation, and parameter estimation is carried out using an EM algorithm where the M-step uses a generalized iterative scaling procedure. We discuss relations with certain classes of stochastic feedforward neural networks, a geometric interpretation for parameter estimation, and a simple example of a statistical language model constructed using this methodology. ",
"neighbors": [
14,
250,
1116
],
"mask": "Train"
},
{
"node_id": 709,
"label": 2,
"text": "Title: Convergence and new operations in SDM new method for converging in the SDM memory, utilizing\nAbstract: Report R95:13 ISRN : SICS-R--95/13-SE ISSN : 0283-3638 Abstract ",
"neighbors": [
340,
341,
529
],
"mask": "Test"
},
{
"node_id": 710,
"label": 6,
"text": "Title: Improving Regressors using Boosting Techniques \nAbstract: In the regression context, boosting and bagging are techniques to build a committee of regressors that may be superior to a single regressor. We use regression trees as fundamental building blocks in bagging committee machines and boosting committee machines. Performance is analyzed on three non-linear functions and the Boston housing database. In all cases, boosting is at least equivalent, and in most cases better than bagging in terms of prediction error.",
"neighbors": [
438,
569,
847
],
"mask": "Train"
},
{
"node_id": 711,
"label": 3,
"text": "Title: NEULA: A hybrid neural-symbolic expert system shell \nAbstract: Current expert systems cannot properly handle imprecise and incomplete information. On the other hand, neural networks can perform pattern recognition operations even in noisy environments. Against this background, we have implemented a neural expert system shell NEULA, whose computational mechanism processes imprecisely or incompletely given information by means of approximate probabilistic reasoning. ",
"neighbors": [
485
],
"mask": "Train"
},
{
"node_id": 712,
"label": 1,
"text": "Title: Tracking the red queen: Measurements of adaptive progress in co-evolution ary simulations. In Third European\nAbstract: Current expert systems cannot properly handle imprecise and incomplete information. On the other hand, neural networks can perform pattern recognition operations even in noisy environments. Against this background, we have implemented a neural expert system shell NEULA, whose computational mechanism processes imprecisely or incompletely given information by means of approximate probabilistic reasoning. ",
"neighbors": [
54,
219,
415,
681,
1036,
1965,
2664
],
"mask": "Train"
},
{
"node_id": 713,
"label": 3,
"text": "Title: FLEXIBLE PARAMETRIC MEASUREMENT ERROR MODELS \nAbstract: Inferences in measurement error models can be sensitive to modeling assumptions. Specifically, if the model is incorrect then the estimates can be inconsistent. To reduce sensitivity to modeling assumptions and yet still retain the efficiency of parametric inference we propose to use flexible parametric models which can accommodate departures from standard parametric models. We use mixtures of normals for this purpose. We study two cases in detail: a linear errors-in-variables model and a change-point Berkson model. fl Raymond J. Carroll is Professor of Statistics, Nutrition and Toxicology, Department of Statistics, Texas A&M University, College Station, TX 77843-3143. Kathryn Roeder is Associate Professor, and Larry Wasser-man is Professor, Department of Statistics, Carnegie-Mellon University, Pittsburgh PA 15213-3890. Carroll's research was supported by a grant from the National Cancer Institute (CA-57030). Roeder's research was supported by NSF grant DMS-9496219. Wasserman's research was supported by NIH grant RO1-CA54852 and NSF grants DMS-9303557 and DMS-9357646. ",
"neighbors": [
161,
347
],
"mask": "Train"
},
{
"node_id": 714,
"label": 1,
"text": "Title: Orgy in the Computer: Multi-Parent Reproduction in Genetic Algorithms \nAbstract: In this paper we investigate the phenomenon of multi-parent reproduction, i.e. we study recombination mechanisms where an arbitrary n > 1 number of parents participate in creating children. In particular, we discuss scanning crossover that generalizes the standard uniform crossover and diagonal crossover that generalizes 1-point crossover, and study the effects of different number of parents on the GA behavior. We conduct experiments on tough function optimization problems and observe that by multi-parent operators the performance of GAs can be enhanced significantly. We also give a theoretical foundation by showing how these operators work on distributions.",
"neighbors": [
163,
833,
1035,
1218,
1299,
2089
],
"mask": "Test"
},
{
"node_id": 715,
"label": 3,
"text": "Title: Covariate Selection in Hierarchical Models of Hospital Admission Counts: A Bayes Factor Approach 1 \nAbstract: TECHNICAL REPORT No. 268 Department of Statistics, GN-22 University of Washington Seattle, Washington 98195 USA 1 Susan L. Rosenkranz is Pew Health Policy Postdoctoral Fellow at the Institute for Health Policy Studies, Box 0936, University of California at San Francisco, San Francisco, CA 94143, and Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, GN-22, University of Washington, Seattle, WA 98195. Rosenkranz's research was supported by the National Research Service Award 5T32CA 09168-17 from the National Cancer Institute. The authors are grateful to Paula Diehr and Kevin Cain for helpful discussions. ",
"neighbors": [
84
],
"mask": "Train"
},
{
"node_id": 716,
"label": 2,
"text": "Title: Covariate Selection in Hierarchical Models of Hospital Admission Counts: A Bayes Factor Approach 1 \nAbstract: Draft A Brief Introduction to Neural Networks Richard D. De Veaux Lyle H. Ungar Williams College University of Pennsylvania Abstract Artificial neural networks are being used with increasing frequency for high dimensional problems of regression or classification. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques. KEYWORDS: nonparametric regression; function approximation; backpropagation. 1 Introduction Networks that mimic the way the brain works; computer programs that actually LEARN patterns; forecasting without having to know statistics. These are just some of the many claims and attractions of artificial neural networks. Neural networks (we will henceforth drop the term artificial, unless we need to distinguish them from biological neural networks) seem to be everywhere these days, and at least in their advertising, are able to do all that statistics can do without all the fuss and bother of having to do anything except buy a piece of software. Neural networks have been successfully used for many different applications including robotics, chemical process control, speech recognition, optical character recognition, credit card fraud detection, interpretation of chemical spectra and vision for autonomous navigation of vehicles. (Pointers to the literature are given at the end of this article.) In this article we will attempt to explain how one particular type of neural network, feedforward networks with sigmoidal activation functions (\"backpropagation networks\") actually works, how it is \"trained\", and how it compares with some more well known statistical techniques. As an example of why someone would want to use a neural network, consider the problem of recognizing hand written ZIP codes on letters. This is a classification problem, where the 1 ",
"neighbors": [
157,
611
],
"mask": "Train"
},
{
"node_id": 717,
"label": 1,
"text": "Title: Information filtering: Selection mechanisms in learning systems. Machine Learning, 10:113-151, 1993. Generalization as search. Artificial\nAbstract: Draft A Brief Introduction to Neural Networks Richard D. De Veaux Lyle H. Ungar Williams College University of Pennsylvania Abstract Artificial neural networks are being used with increasing frequency for high dimensional problems of regression or classification. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques. KEYWORDS: nonparametric regression; function approximation; backpropagation. 1 Introduction Networks that mimic the way the brain works; computer programs that actually LEARN patterns; forecasting without having to know statistics. These are just some of the many claims and attractions of artificial neural networks. Neural networks (we will henceforth drop the term artificial, unless we need to distinguish them from biological neural networks) seem to be everywhere these days, and at least in their advertising, are able to do all that statistics can do without all the fuss and bother of having to do anything except buy a piece of software. Neural networks have been successfully used for many different applications including robotics, chemical process control, speech recognition, optical character recognition, credit card fraud detection, interpretation of chemical spectra and vision for autonomous navigation of vehicles. (Pointers to the literature are given at the end of this article.) In this article we will attempt to explain how one particular type of neural network, feedforward networks with sigmoidal activation functions (\"backpropagation networks\") actually works, how it is \"trained\", and how it compares with some more well known statistical techniques. As an example of why someone would want to use a neural network, consider the problem of recognizing hand written ZIP codes on letters. This is a classification problem, where the 1 ",
"neighbors": [
163,
188,
523,
1122,
1877,
2502,
2568
],
"mask": "Train"
},
{
"node_id": 718,
"label": 2,
"text": "Title: Gaussian Regression and Optimal Finite Dimensional Linear Models \nAbstract: Draft A Brief Introduction to Neural Networks Richard D. De Veaux Lyle H. Ungar Williams College University of Pennsylvania Abstract Artificial neural networks are being used with increasing frequency for high dimensional problems of regression or classification. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques. KEYWORDS: nonparametric regression; function approximation; backpropagation. 1 Introduction Networks that mimic the way the brain works; computer programs that actually LEARN patterns; forecasting without having to know statistics. These are just some of the many claims and attractions of artificial neural networks. Neural networks (we will henceforth drop the term artificial, unless we need to distinguish them from biological neural networks) seem to be everywhere these days, and at least in their advertising, are able to do all that statistics can do without all the fuss and bother of having to do anything except buy a piece of software. Neural networks have been successfully used for many different applications including robotics, chemical process control, speech recognition, optical character recognition, credit card fraud detection, interpretation of chemical spectra and vision for autonomous navigation of vehicles. (Pointers to the literature are given at the end of this article.) In this article we will attempt to explain how one particular type of neural network, feedforward networks with sigmoidal activation functions (\"backpropagation networks\") actually works, how it is \"trained\", and how it compares with some more well known statistical techniques. As an example of why someone would want to use a neural network, consider the problem of recognizing hand written ZIP codes on letters. This is a classification problem, where the 1 ",
"neighbors": [
271,
611
],
"mask": "Train"
},
{
"node_id": 719,
"label": 2,
"text": "Title: Parzen. On estimation of a probability density function and mode. Annual Mathematical Statistics, 33:1065-1076, 1962.\nAbstract: To apply the algorithm for classification we assign each class a separate set of codebook Gaussians. Each set is only trained with patterns from a single class. After having trained the codebook Gaussians, each set provides an estimate of the probability function of one class; just as with Parzen window estimation, we take as the estimate of the pattern distribution the average of all Gaussians in the set. Classification of a pattern may now be done by calculating the probability of each class at the respective sample point, and assigning to the pattern the class with the highest probability. Hence the whole codebook plays a role in the classification of patterns. This is not the case with regular classification schemes using codebooks. We have tested the classification scheme on several classification tasks including the two spiral problem. We compared our algorithm to various other classification algorithms and it came out second; the best algorithm for the applications is the Parzen window estimation. However, the computing time and memory for Parzen window estimation are excessive when compared to our algorithm, and hence, in practical situations, our algorithm is to be preferred. We have developed a fast algorithm which combines attractive properties of both Parzen window estimation and vector quantization. The scale parameter is tuned adaptively and, therefore, is not set in an ad hoc manner. It allows a classification strategy in which all the codebook vectors are taken into account. This yields better results than the standard vector quantization techniques. An interesting topic for further research is to use radially non-symmetric Gaussians. ",
"neighbors": [
87,
520,
609,
611,
747,
1112,
1133,
1666,
1838,
2561
],
"mask": "Train"
},
{
"node_id": 720,
"label": 2,
"text": "Title: Predicting Lifetimes in Dynamically Allocated Memory \nAbstract: Predictions of lifetimes of dynamically allocated objects can be used to improve time and space efficiency of dynamic memory management in computer programs. Barrett and Zorn [1993] used a simple lifetime predictor and demonstrated this improvement on a variety of computer programs. In this paper, we use decision trees to do lifetime prediction on the same programs and show significantly better prediction. Our method also has the advantage that during training we can use a large number of features and let the decision tree automatically choose the relevant subset.",
"neighbors": [
438
],
"mask": "Train"
},
{
"node_id": 721,
"label": 1,
"text": "Title: Learning Representations for Evolutionary Computation an example from the domain of two-dimensional shape designs. In\nAbstract: Evolutionary systems have been used in a variety of applications, from turbine design to scheduling problems. The basic algorithms are similar in all these applications, but the representation is always problem specific. Unfortunately, the search time for evolutionary systems very much depends on efficient codings, using problem specific domain knowledge to reduce the size of the search space. This paper describes an approach, where the user only specifies a very general, basic coding that can be used in a larger variety of problems. The system then learns a more efficient, problem specific coding. To do this, an evolutionary system with variable length coding is used. While the system optimizes an example problem, a meta process identifies successful combinations of genes in the population and combines them into higher level evolved genes. The extraction is repeated iteratively, allowing genes to evolve that have a high level complexity and encode a high number of the original, basic genes. This results in a continuous restructuring of the search space, allowing potentially successful solutions to be found in much shorter search time. The evolved coding can then be used to solve other, related problems. While not excluding any potentially desirable solutions, the evolved coding makes knowledge from the example problem available for the new problem. ",
"neighbors": [
188
],
"mask": "Train"
},
{
"node_id": 722,
"label": 6,
"text": "Title: A Study of Maximal-Coverage Learning Algorithms \nAbstract: The coverage of a learning algorithm is the number of concepts that can be learned by that algorithm from samples of a given size. This paper asks whether good learning algorithms can be designed by maximizing their coverage. The paper extends a previous upper bound on the coverage of any Boolean concept learning algorithm and describes two algorithms|Multi-Balls and Large-Ball|whose coverage approaches this upper bound. Experimental measurement of the coverage of the ID3 and FRINGE algorithms shows that their coverage is far below this bound. Further analysis of Large-Ball shows that although it learns many concepts, these do not seem to be very interesting concepts. Hence, coverage maximization alone does not appear to yield practically-useful learning algorithms. The paper concludes with a definition of coverage within a bias, which suggests a way that coverage maximization could be applied to strengthen weak preference biases.",
"neighbors": [
635,
638
],
"mask": "Train"
},
{
"node_id": 723,
"label": 3,
"text": "Title: Exploiting Structure in Policy Construction \nAbstract: Markov decision processes (MDPs) have recently been applied to the problem of modeling decision-theoretic planning. While traditional methods for solving MDPs are often practical for small states spaces, their effectiveness for large AI planning problems is questionable. We present an algorithm, called structured policy iteration (SPI), that constructs optimal policies without explicit enumeration of the state space. The algorithm retains the fundamental computational steps of the commonly used modified policy iteration algorithm, but exploits the variable and propositional independencies reflected in a temporal Bayesian network representation of MDPs. The principles behind SPI can be applied to any structured representation of stochastic actions, policies and value functions, and the algorithm itself can be used in conjunction with re cent approximation methods.",
"neighbors": [
552
],
"mask": "Train"
},
{
"node_id": 724,
"label": 2,
"text": "Title: ASOCS: A Multilayered Connectionist Network with Guaranteed Learning of Arbitrary Mappings \nAbstract: This paper reviews features of a new class of multilayer connectionist architectures known as ASOCS (Adaptive Self-Organizing Concurrent Systems). ASOCS is similar to most decision-making neural network models in that it attempts to learn an adaptive set of arbitrary vector mappings. However, it differs dramatically in its mechanisms. ASOCS is based on networks of adaptive digital elements which self-modify using local information. Function specification is entered incrementally by use of rules, rather than complete input-output vectors, such that a processing network is able to extract critical features from a large environment and give output in a parallel fashion. Learning also uses parallelism and self-organization such that a new rule is completely learned in time linear with the depth of the network. The model guarantees learning of any arbitrary mapping of boolean input-output vectors. The model is also stable in that learning does not erase any previously learned mappings except those explicitly contradicted. ",
"neighbors": [
747,
2612
],
"mask": "Train"
},
{
"node_id": 725,
"label": 3,
"text": "Title: Maximum Working Likelihood Inference with Markov Chain Monte Carlo \nAbstract: Maximum working likelihood (MWL) inference in the presence of missing data can be quite challenging because of the intractability of the associated marginal likelihood. This problem can be further exacerbated when the number of parameters involved is large. We propose using Markov chain Monte Carlo (MCMC) to first obtain both the MWL estimator and the working Fisher information matrix and, second, using Monte Carlo quadrature to obtain the remaining components of the correct asymptotic MWL variance. Evaluation of the marginal likelihood is not needed. We demonstrate consistency and asymptotic normality when the number of independent and identically distributed data clusters is large but the likelihood may be incorrectly specified. An analysis of longitudinal ordinal data is given for an example. KEY WORDS: Convergence of posterior distributions, Maximum likelihood, Metropolis ",
"neighbors": [
41,
48,
93
],
"mask": "Train"
},
{
"node_id": 726,
"label": 2,
"text": "Title: Natural image statistics and efficient coding \nAbstract: Natural images contain characteristic statistical regularities that set them apart from purely random images. Understanding what these regularities are can enable natural images to be coded more efficiently. In this paper, we describe some of the forms of structure that are contained in natural images, and we show how these are related to the response properties of neurons at early stages of the visual system. Many of the important forms of structure require higher-order (i.e., more than linear, pairwise) statistics to characterize, which makes models based on linear Hebbian learning, or principal components analysis, inappropriate for finding efficient codes for natural images. We suggest that a good objective for an efficient coding of natural scenes is to maximize the sparseness of the representation, and we show that a network that learns sparse codes of natural scenes succeeds in developing localized, oriented, bandpass receptive fields similar to those in the primate striate cortex. ",
"neighbors": [
576,
1068,
1418
],
"mask": "Train"
},
{
"node_id": 727,
"label": 1,
"text": "Title: Using Problem Generators to Explore the Effects of Epistasis \nAbstract: In this paper we develop an empirical methodology for studying the behavior of evolutionary algorithms based on problem generators. We then describe three generators that can be used to study the effects of epistasis on the performance of EAs. Finally, we illustrate the use of these ideas in a preliminary exploration of the effects of epistasis on simple GAs.",
"neighbors": [
163,
1016,
1136,
1799
],
"mask": "Train"
},
{
"node_id": 728,
"label": 1,
"text": "Title: On the Virtues of Parameterized Uniform Crossover \nAbstract: Traditionally, genetic algorithms have relied upon 1 and 2-point crossover operators. Many recent empirical studies, however, have shown the benefits of higher numbers of crossover points. Some of the most intriguing recent work has focused on uniform crossover, which involves on the average L/2 crossover points for strings of length L. Theoretical results suggest that, from the view of hyperplane sampling disruption, uniform crossover has few redeeming features. However, a growing body of experimental evidence suggests otherwise. In this paper, we attempt to reconcile these opposing views of uniform crossover and present a framework for understanding its virtues.",
"neighbors": [
243,
943,
1016,
1127,
1305,
1466
],
"mask": "Train"
},
{
"node_id": 729,
"label": 3,
"text": "Title: On the Complexity of Conditional Logics \nAbstract: Conditional logics, introduced by Lewis and Stalnaker, have been utilized in artificial intelligence to capture a broad range of phenomena. In this paper we examine the complexity of several variants discussed in the literature. We show that, in general, deciding satisfiability is PSPACE-complete for formulas with arbitrary conditional nesting and NP-complete for formulas with bounded nesting of conditionals. However, we provide several exceptions to this rule. Of particular note are results showing that (a) when assuming uniformity (i.e., that all worlds agree on what worlds are possible), the decision problem becomes EXPTIME-complete even for formulas with bounded nesting, and (b) when assuming absoluteness (i.e., that all worlds agree on all conditional statements), the decision problem is NP-complete for for mulas with arbitrary nesting.",
"neighbors": [
67,
342,
467,
1945,
1993
],
"mask": "Validation"
},
{
"node_id": 730,
"label": 2,
"text": "Title: Learning Sequential Tasks by Incrementally Adding Higher Orders \nAbstract: An incremental, higher-order, non-recurrent network combines two properties found to be useful for learning sequential tasks: higher-order connections and incremental introduction of new units. The network adds higher orders when needed by adding new units that dynamically modify connection weights. Since the new units modify the weights at the next time-step with information from the previous step, temporal tasks can be learned without the use of feedback, thereby greatly simplifying training. Furthermore, a theoretically unlimited number of units can be added to reach into the arbitrarily distant past. Experiments with the Reber grammar have demonstrated speedups of two orders of magnitude over recurrent networks.",
"neighbors": [
350,
351,
770,
1889
],
"mask": "Test"
},
{
"node_id": 731,
"label": 2,
"text": "Title: LEARNING FACTORIAL CODES BY PREDICTABILITY MINIMIZATION (Neural Computation, 4(6):863-879, 1992) \nAbstract: I propose a novel general principle for unsupervised learning of distributed non-redundant internal representations of input patterns. The principle is based on two opposing forces. For each representational unit there is an adaptive predictor which tries to predict the unit from the remaining units. In turn, each unit tries to react to the environment such that it minimizes its predictability. This encourages each unit to filter `abstract concepts' out of the environmental input such that these concepts are statistically independent of those upon which the other units focus. I discuss various simple yet potentially powerful implementations of the principle which aim at finding binary factorial codes (Bar-low et al., 1989), i.e. codes where the probability of the occurrence of a particular input is simply the product of the probabilities of the corresponding code symbols. Such codes are potentially relevant for (1) segmentation tasks, (2) speeding up supervised learning, (3) novelty detection. Methods for finding factorial codes automatically implement Occam's razor for finding codes using a minimal number of units. Unlike previous methods the novel principle has a potential for removing not only linear but also non-linear output redundancy. Illustrative experiments show that algorithms based on the principle of predictability minimization are practically feasible. The final part of this paper describes an entirely local algorithm that has a potential for learning unique representations of extended input sequences.",
"neighbors": [
121,
330,
576,
808,
1450,
1656,
1778
],
"mask": "Train"
},
{
"node_id": 732,
"label": 6,
"text": "Title: Statistical Queries and Faulty PAC Oracles \nAbstract: In this paper we study learning in the PAC model of Valiant [18] in which the example oracle used for learning may be faulty in one of two ways: either by misclassifying the example or by distorting the distribution of examples. We first consider models in which examples are misclassified. Kearns [12] recently showed that efficient learning in a new model using statistical queries is a sufficient condition for PAC learning with classification noise. We show that efficient learning with statistical queries is sufficient for learning in the PAC model with malicious error rate proportional to the required statistical query accuracy. One application of this result is a new lower bound for tolerable malicious error in learning monomials of k literals. This is the first such bound which is independent of the number of irrelevant attributes n. We also use the statistical query model to give sufficient conditions for using distribution specific algorithms on distributions outside their prescribed domains. A corollary of this result expands the class of distributions on which we can weakly learn monotone Boolean formulae. We also consider new models of learning in which examples are not chosen according to the distribution on which the learner will be tested. We examine three variations of distribution noise and give necessary and sufficient conditions for polynomial time learning with such noise. We show containments and separations between the various models of faulty oracles. Finally, we examine hypothesis boosting algorithms in the context of learning with distribution noise, and show that Schapire's result regarding the strength of weak learnabil-ity [17] is in some sense tight in requiring the weak learner to be nearly distribution free. ",
"neighbors": [
20,
267,
640,
1897
],
"mask": "Test"
},
{
"node_id": 733,
"label": 4,
"text": "Title: Evolutionary Differentiation of Learning Abilities a case study on optimizing parameter values in Q-learning by\nAbstract: This paper describes the first stage of our study on evolution of learning abilities. We use a simple maze exploration problem designed by R. Sut-ton as the task of each individual, and encode the inherent learning parameters on the genome. The learning architecture we use is a one step Q-learning using look-up table, where the inherent parameters are initial Q-values, learning rate, discount rate of rewards, and exploration rate. Under the fitness measure proportioning to the number of times it achieves at the goal in the later half of life, learners evolve through a genetic algorithm. The results of computer simulation indicated that learning ability emerge when the environment changes every generation, and that the inherent map for the optimal path can be acquired when the environment doesn't change. These results suggest that emergence of learning ability needs environmental change faster than alternate generation. ",
"neighbors": [
566
],
"mask": "Train"
},
{
"node_id": 734,
"label": 4,
"text": "Title: Efficient dynamic-programming updates in partially observable Markov decision processes \nAbstract: We examine the problem of performing exact dynamic-programming updates in partially observable Markov decision processes (pomdps) from a computational complexity viewpoint. Dynamic-programming updates are a crucial operation in a wide range of pomdp solution methods and we find that it is intractable to perform these updates on piecewise-linear convex value functions for general pomdps. We offer a new algorithm, called the witness algorithm, which can compute updated value functions efficiently on a restricted class of pomdps in which the number of linear facets is not too great. We compare the witness algorithm to existing algorithms analytically and empirically and find that it is the fastest algorithm over a wide range of pomdp sizes.",
"neighbors": [
213,
490,
492
],
"mask": "Validation"
},
{
"node_id": 735,
"label": 5,
"text": "Title: The Limits of Instruction Level Parallelism in SPEC95 Applications \nAbstract: This paper examines the limits to instruction level parallelism that can be found in programs, in particular the SPEC95 benchmark suite. It differs from earlier studies in removing non-essential true dependencies that occur as a result of the compiler employing a stack for subroutine linkage. This is a subtle limitation to parallelism that is not readily evident as it appears as a true dependency on the stack pointer. In this paper we show that its removal exposes far more parallelism than has been seen previously. We refer to this type of parallelism as \"parallelism at a distance\" because it requires impossibly large instruction windows for detection. We conclude with two observations: 1) that a single instruction window characteristic of superscalar machines is inadequate for detecting parallelism at a distance; and 2) in order to take advantage of this parallelism the compiler must be involved, or separate threads must be explicitly programmed. ",
"neighbors": [
86,
195,
216,
307,
1956,
2106,
2527,
2649
],
"mask": "Train"
},
{
"node_id": 736,
"label": 2,
"text": "Title: GIBBS-MARKOV MODELS \nAbstract: In this paper we present a framework for building probabilistic automata parameterized by context-dependent probabilities. Gibbs distributions are used to model state transitions and output generation, and parameter estimation is carried out using an EM algorithm where the M-step uses a generalized iterative scaling procedure. We discuss relations with certain classes of stochastic feedforward neural networks, a geometric interpretation for parameter estimation, and a simple example of a statistical language model constructed using this methodology. ",
"neighbors": [
14,
250,
1116
],
"mask": "Test"
},
{
"node_id": 737,
"label": 2,
"text": "Title: The Role of Constraints in Hebbian Learning \nAbstract: Models of unsupervised correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamical effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, typically leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a \"graded\" receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is \"sharpened\" to a subset of maximally-correlated inputs. If two equivalent input populations (e.g. two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated; whereas subtractive enforcement allows segregation under these circumstances. These results may be used to understand constraints both over output cells and over input cells. A variety of rules that can implement constrained dynamics are discussed.",
"neighbors": [
747,
2024
],
"mask": "Train"
},
{
"node_id": 738,
"label": 4,
"text": "Title: On the Convergence of Stochastic Iterative Dynamic Programming Algorithms \nAbstract: Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD() algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD() and Q-learning belong. This report describes research done at the Dept. of Brain and Cognitive Sciences, the Center for Biological and Computational Learning, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for CBCL is provided in part by a grant from the NSF (ASC-9217041). Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense. The authors were supported by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by by grant IRI-9013991 from the National Science Foundation, by grant N00014-90-J-1942 from the Office of Naval Research, and by NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is a NSF Presidential Young Investigator. ",
"neighbors": [
210,
552,
564,
565,
575,
621,
1183,
1213,
1376,
1585,
1727,
1741,
2628,
2629
],
"mask": "Test"
},
{
"node_id": 739,
"label": 4,
"text": "Title: On the Convergence of Stochastic Iterative Dynamic Programming Algorithms \nAbstract: Empirical Comparison of Gradient Descent and Exponentiated Gradient Descent in Supervised and Reinforcement Learning Technical Report 96-70 ",
"neighbors": [
63
],
"mask": "Test"
},
{
"node_id": 740,
"label": 6,
"text": "Title: Information-based objective functions for active data selection \nAbstract: Learning can be made more efficient if we can actively select particularly salient data points. Within a Bayesian learning framework, objective functions are discussed which measure the expected informativeness of candidate measurements. Three alternative specifications of what we want to gain information about lead to three different criteria for data selection. All these criteria depend on the assumption that the hypothesis space is correct, which may prove to be their main weakness. ",
"neighbors": [
157,
164,
418,
560,
929,
1559,
1664,
1667,
1683,
1703
],
"mask": "Train"
},
{
"node_id": 741,
"label": 2,
"text": "Title: Incremental Grid Growing: Encoding High-Dimensional Structure into a Two-Dimensional Feature Map \nAbstract: Knowledge of clusters and their relations is important in understanding high-dimensional input data with unknown distribution. Ordinary feature maps with fully connected, fixed grid topology cannot properly reflect the structure of clusters in the input space|there are no cluster boundaries on the map. Incremental feature map algorithms, where nodes and connections are added to or deleted from the map according to the input distribution, can overcome this problem. However, so far such algorithms have been limited to maps that can be drawn in 2-D only in the case of 2-dimensional input space. In the approach proposed in this paper, nodes are added incrementally to a regular, 2-dimensional grid, which is drawable at all times, irrespective of the dimensionality of the input space. The process results in a map that explicitly represents the cluster structure of the high-dimensional input. ",
"neighbors": [
204,
687,
747,
1157
],
"mask": "Test"
},
{
"node_id": 742,
"label": 3,
"text": "Title: A Graphical Characterization of Lattice Conditional Independence Models \nAbstract: Lattice conditional independence (LCI) models for multivariate normal data recently have been introduced for the analysis of non-monotone missing data patterns and of nonnested dependent linear regression models ( seemingly unrelated regressions). It is shown here that the class of LCI models coincides with a subclass of the class of graphical Markov models determined by acyclic digraphs (ADGs), namely, the subclass of transitive ADG models. An explicit graph - theoretic characterization of those ADGs that are Markov equivalent to some transitive ADG is obtained. This characterization allows one to determine whether a specific ADG D is Markov equivalent to some transitive ADG, hence to some LCI model, in polynomial time, without an exhaustive search of the (exponentially large) equivalence class [D ]. These results do not require the existence or positivity of joint densities.",
"neighbors": [
645,
772,
1240
],
"mask": "Validation"
},
{
"node_id": 743,
"label": 1,
"text": "Title: Learning to be Selective in Genetic-Algorithm-Based Design Optimization \nAbstract: Lattice conditional independence (LCI) models for multivariate normal data recently have been introduced for the analysis of non-monotone missing data patterns and of nonnested dependent linear regression models ( seemingly unrelated regressions). It is shown here that the class of LCI models coincides with a subclass of the class of graphical Markov models determined by acyclic digraphs (ADGs), namely, the subclass of transitive ADG models. An explicit graph - theoretic characterization of those ADGs that are Markov equivalent to some transitive ADG is obtained. This characterization allows one to determine whether a specific ADG D is Markov equivalent to some transitive ADG, hence to some LCI model, in polynomial time, without an exhaustive search of the (exponentially large) equivalence class [D ]. These results do not require the existence or positivity of joint densities.",
"neighbors": [
65,
163,
744,
2030,
2077,
2316,
2659
],
"mask": "Train"
},
{
"node_id": 744,
"label": 1,
"text": "Title: A Genetic Algorithm for Continuous Design Space Search \nAbstract: Genetic algorithms (GAs) have been extensively used as a means for performing global optimization in a simple yet reliable manner. However, in some realistic engineering design optimization domains the simple, classical implementation of a GA based on binary encoding and bit mutation and crossover is often inefficient and unable to reach the global optimum. In this paper we describe a GA for continuous design-space optimization that uses new GA operators and strategies tailored to the structure and properties of engineering design domains. Empirical results in the domains of supersonic transport aircraft and supersonic missile inlets demonstrate that the newly formulated GA can be significantly better than the classical GA in both efficiency and reliability. ",
"neighbors": [
65,
163,
743,
2030,
2077,
2316
],
"mask": "Train"
},
{
"node_id": 745,
"label": 2,
"text": "Title: References \"Using Neural Networks to Identify Jets\", Kohonen, \"Self Organized Formation of Topologically Correct Feature\nAbstract: 2] D. E. Rumelhart, G. E. Hinton and R. J. Williams, \"Learning Internal Representations by Error Propagation\", in D. E. Rumelhart and J. L. McClelland (eds.) Parallel Distributed Processing: Explorations in the Microstructure of Cognition (Vol. 1), MIT Press (1986). ",
"neighbors": [
18,
36,
72,
91,
110,
124,
127,
205,
355,
386,
458,
477,
561,
600,
666,
687,
771,
962,
1157,
1564,
1700,
1704,
1756,
1763,
1884,
1885,
1886,
1932,
2162,
2165,
2325,
2336,
2437
],
"mask": "Train"
},
{
"node_id": 746,
"label": 2,
"text": "Title: A New Look at Tree Models for Multiple Sequence Alignment \nAbstract: Evolutionary trees are frequently used as the underlying model in the design of algorithms, optimization criteria and software packages for multiple sequence alignment (MSA). In this paper, we reexamine the suitability of trees as a universal model for MSA in light of the broad range of biological questions that MSA's are used to address. A tree model consists of a tree topology and a model of accepted mutations along the branches. After surveying the major applications of MSA, examples from the molecular biology literature are used to illustrate situations in which this tree model fails. This occurs when the relationship between residues in a column cannot be described by a tree; for example, in some structural and functional applications of MSA. It also occurs in situations, such as lateral gene transfer, where an entire gene cannot be modeled by a unique tree. In cases of nonparsimonous data or convergent evolution, it may be difficult to find a consistent mutational model. We hope that this survey will promote dialogue between biologists and computer scientists, leading to more biologically realistic research on MSA.",
"neighbors": [
14,
299
],
"mask": "Train"
},
{
"node_id": 747,
"label": 2,
"text": "Title: Cholinergic suppression of transmission may allow combined associative memory function and self-organization in the neocortex. \nAbstract: Selective suppression of transmission at feedback synapses during learning is proposed as a mechanism for combining associative feedback with self-organization of feedforward synapses. Experimental data demonstrates cholinergic suppression of synaptic transmission in layer I (feedback synapses), and a lack of suppression in layer IV (feed-forward synapses). A network with this feature uses local rules to learn mappings which are not linearly separable. During learning, sensory stimuli and desired response are simultaneously presented as input. Feedforward connections form self-organized representations of input, while suppressed feedback connections learn the transpose of feedfor-ward connectivity. During recall, suppression is removed, sensory input activates the self-organized representation, and activity generates the learned response.",
"neighbors": [
18,
26,
43,
72,
73,
83,
91,
104,
110,
112,
113,
124,
127,
146,
153,
175,
186,
198,
202,
203,
204,
205,
207,
217,
229,
234,
235,
283,
310,
315,
328,
330,
336,
349,
353,
362,
368,
369,
386,
397,
458,
489,
494,
502,
526,
527,
542,
545,
561,
572,
579,
588,
599,
600,
609,
610,
620,
628,
655,
665,
680,
685,
695,
696,
699,
700,
703,
719,
724,
737,
741,
763,
771,
779
],
"mask": "Validation"
},
{
"node_id": 748,
"label": 3,
"text": "Title: Markov Chain Monte Carlo Methods Based on `Slicing' the Density Function \nAbstract: Technical Report No. 9722, Department of Statistics, University of Toronto Abstract. One way to sample from a distribution is to sample uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal `slice' defined by the current vertical position. Variations on such `slice sampling' methods can easily be implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and may be more efficient than easily-constructed versions of the Metropolis algorithm. Slice sampling is therefore attractive in routine Markov chain Monte Carlo applications, and for use by software that automatically generates a Markov chain sampler from a model specification. One can also easily devise overrelaxed versions of slice sampling, which sometimes greatly improve sampling efficiency by suppressing random walk behaviour. Random walks can also be avoided in some slice sampling schemes that simultaneously update all variables. ",
"neighbors": [
137,
138,
1926,
1933,
1941
],
"mask": "Train"
},
{
"node_id": 749,
"label": 4,
"text": "Title: On the Complexity of Solving Markov Decision Problems \nAbstract: Markov decision problems (MDPs) provide the foundations for a number of problems of interest to AI researchers studying automated planning and reinforcement learning. In this paper, we summarize results regarding the complexity of solving MDPs and the running time of MDP solution algorithms. We argue that, although MDPs can be solved efficiently in theory, more study is needed to reveal practical algorithms for solving large problems quickly. To encourage future research, we sketch some alternative methods of analysis that rely on the struc ture of MDPs.",
"neighbors": [
197,
483,
552,
1459,
2485
],
"mask": "Test"
},
{
"node_id": 750,
"label": 4,
"text": "Title: Machine Learning, Creating Advice-Taking Reinforcement Learners \nAbstract: Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present and evaluate a design that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple imperative programming language. Based on techniques from knowledge-based neural networks, we insert these programs directly into the agent's utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence that investigates several aspects of our approach and show that, given good advice, a learner can achieve statistically significant gains in expected reward. A second experiment shows that advice improves the expected reward regardless of the stage of training at which it is given, while another study demonstrates that subsequent advice can result in further gains in reward. Finally, we present experimental results that indicate our method is more powerful than a naive technique for making use of advice. ",
"neighbors": [
244
],
"mask": "Train"
},
{
"node_id": 751,
"label": 2,
"text": "Title: Dirichlet Mixtures: A Method for Improving Detection of Weak but Significant Protein Sequence Homology \nAbstract: UCSC Technical Report UCSC-CRL-96-09 Abstract This paper presents the mathematical foundations of Dirichlet mixtures, which have been used to improve database search results for homologous sequences, when a variable number of sequences from a protein family or domain are known. We present a method for condensing the information in a protein database into a mixture of Dirichlet densities. These mixtures are designed to be combined with observed amino acid frequencies, to form estimates of expected amino acid probabilities at each position in a profile, hidden Markov model, or other statistical model. These estimates give a statistical model greater generalization capacity, such that remotely related family members can be more reliably recognized by the model. Dirichlet mixtures have been shown to outperform substitution matrices and other methods for computing these expected amino acid distributions in database search, resulting in fewer false positives and false negatives for the families tested. This paper corrects a previously published formula for estimating these expected probabilities, and contains complete derivations of the Dirichlet mixture formulas, methods for optimizing the mixtures to match particular databases, and suggestions for efficient implementation. ",
"neighbors": [
8,
14,
258,
435,
544
],
"mask": "Validation"
},
{
"node_id": 752,
"label": 0,
"text": "Title: Analysis and Empirical Studies of Derivational Analogy \nAbstract: Derivational analogy is a technique for reusing problem solving experience to improve problem solving performance. This research addresses an issue common to all problem solvers that use derivational analogy: overcoming the mismatches between past experiences and new problems that impede reuse. First, this research describes the variety of mismatches that can arise and proposes a new approach to derivational analogy that uses appropriate adaptation strategies for each. Second, it compares this approach with seven others in a common domain. This empirical study shows that derivational analogy is almost always more efficient than problem solving from scratch, but the amount it contributes depends on its ability to overcome mismatches ",
"neighbors": [
649,
1621
],
"mask": "Validation"
},
{
"node_id": 753,
"label": 2,
"text": "Title: Analysis of Dynamical Recognizers \nAbstract: Pollack (1991) demonstrated that second-order recurrent neural networks can act as dynamical recognizers for formal languages when trained on positive and negative examples, and observed both phase transitions in learning and IFS-like fractal state sets. Follow-on work focused mainly on the extraction and minimization of a finite state automaton (FSA) from the trained network. However, such networks are capable of inducing languages which are not regular, and therefore not equivalent to any FSA. Indeed, it may be simpler for a small network to fit its training data by inducing such a non-regular language. But when is the network's language not regular? In this paper, using a low dimensional network capable of learning all the Tomita data sets, we present an empirical method for testing whether the language induced by the network is regular or not. We also provide a detailed \"-machine analysis of trained networks for both regular and non-regular languages. ",
"neighbors": [
405,
409,
444
],
"mask": "Test"
},
{
"node_id": 754,
"label": 2,
"text": "Title: Linear Machine Decision Trees \nAbstract: COINS Technical Report 91-10 January 1991 Abstract This article presents an algorithm for inducing multiclass decision trees with multivariate tests at internal decision nodes. Each test is constructed by training a linear machine and eliminating variables in a controlled manner. Empirical results demonstrate that the algorithm builds small accurate trees across a variety of tasks. ",
"neighbors": [
478
],
"mask": "Test"
},
{
"node_id": 755,
"label": 1,
"text": "Title: of a simulator for evolving morphology are: Universal the simulator should cover an infinite gen\nAbstract: Funes, P. and Pollack, J. (1997) Computer Evolution of Buildable Objects. Fourth European Conference on Artificial Life. P. Husbands and I. Harvey, eds., MIT Press. pp 358-367. knowledge into the program, which would result in familiar structures, we provided the algorithm with a model of the physical reality and a purely utilitarian fitness function, thus supplying measures of feasibility and functionality. In this way the evolutionary process runs in an environment that has not been unnecessarily constrained. We added, however, a requirement of computability to reject overly complex structures when they took too long for our simulations to evaluate. The results are encouraging. The evolved structures had a surprisingly alien look: they are not based in common knowledge on how to build with brick toys; instead, the computer found ways of its own through the evolutionary search process. We were able to assemble the final designs manually and confirm that they accomplish the objectives introduced with our fitness functions. After some background on related problems, we describe our physical simulation model for two-dimensional Lego structures, and the representation for encoding them and applying evolution. We demonstrate the feasibility of our work with photos of actual objects which were the result of particular optimizations. Finally, we discuss future work and draw some conclusions. In order to evolve both the morphology and behavior of autonomous mechanical devices which can be manufactured, one must have a simulator which operates under several constraints, and a resultant controller which is adaptive enough to cover the gap between simulated and real world. eral space of mechanisms. Conservative - because simulation is never perfect, it should preserve a margin of safety. Efficient - it should be quicker to test in simulation than through physical production and test. Buildable - results should be convertible from a simula tion to a real object Computer Evolution of Buildable Objects Abstract The idea of co-evolution of bodies and brains is becoming popular, but little work has been done in evolution of physical structure because of the lack of a general framework for doing it. Evolution of creatures in simulation has been constrained by the reality gap which implies that resultant objects are usually not buildable. The work we present takes a step in the problem of body evolution by applying evolutionary techniques to the design of structures assembled out of parts. Evolution takes place in a simulator we designed, which computes forces and stresses and predicts failure for 2-dimensional Lego structures. The final printout of our program is a schematic assembly, which can then be built physically. We demonstrate its functionality in several different evolved entities.",
"neighbors": [
188,
757,
1404,
2058
],
"mask": "Test"
},
{
"node_id": 756,
"label": 5,
"text": "Title: Knowledge Acquisition via Knowledge Integration \nAbstract: In this paper we are concerned with the problem of acquiring knowledge by integration. Our aim is to construct an integrated knowledge base from several separate sources. The need to merge knowledge bases can arise, for example, when knowledge bases are acquired independently from interactions with several domain experts. As opinions of different domain experts may differ, the knowledge bases constructed in this way will normally differ too. A similar problem can also arise whenever separate knowledge bases are generated by learning algorithms. The objective of integration is to construct one system that exploits all the knowledge that is available and has a good performance. The aim of this paper is to discuss the methodology of knowledge integration, describe the implemented system (INTEG.3), and present some concrete results which demonstrate the advantages of this method. ",
"neighbors": [
176,
379
],
"mask": "Test"
},
{
"node_id": 757,
"label": 1,
"text": "Title: Evolving Self-Supporting Structures Page 18 References Evolution of Visual Control Systems for Robots. To appear\nAbstract: In this paper we are concerned with the problem of acquiring knowledge by integration. Our aim is to construct an integrated knowledge base from several separate sources. The need to merge knowledge bases can arise, for example, when knowledge bases are acquired independently from interactions with several domain experts. As opinions of different domain experts may differ, the knowledge bases constructed in this way will normally differ too. A similar problem can also arise whenever separate knowledge bases are generated by learning algorithms. The objective of integration is to construct one system that exploits all the knowledge that is available and has a good performance. The aim of this paper is to discuss the methodology of knowledge integration, describe the implemented system (INTEG.3), and present some concrete results which demonstrate the advantages of this method. ",
"neighbors": [
163,
188,
219,
755,
846,
1965,
2058
],
"mask": "Train"
},
{
"node_id": 758,
"label": 1,
"text": "Title: A COMPRESSION ALGORITHM FOR PROBABILITY TRANSITION MATRICES \nAbstract: This paper describes a compression algorithm for probability transition matrices. The compressed matrix is itself a probability transition matrix. In general the compression is not error-free, but the error appears to be small even for high levels of compression. ",
"neighbors": [
100,
265,
1980
],
"mask": "Train"
},
{
"node_id": 759,
"label": 3,
"text": "Title: BAYESIAN STATISTICS 6, pp. 000--000 Exact sampling for Bayesian inference: towards general purpose algorithms \nAbstract: There are now methods for organising a Markov chain Monte Carlo simulation so that it can be guaranteed that the state of the process at a given time is exactly drawn from the target distribution. The question of assessing convergence totally vanishes. Such methods are known as exact or perfect sampling. The approach that has received most attention uses the protocol of coupling from the past devised by Propp and Wilson (Random Structures and Algorithms,1996), in which multiple dependent paths of the chain are run from different initial states at a sequence of initial times going backwards into the past, until they satisfy the condition of coalescence by time 0. When this is achieved the state at time 0 is distributed according to the required target. This process must be implemented very carefully to assure its validity (including appropriate re-use of random number streams), and also requires one of various tricks to enable us to follow infinitely many sample paths with a finite amount of work. With the ultimate objective of Bayesian MCMC with guaranteed convergence, the purpose of this paper is to describe recent efforts to construct exact sampling methods for continuous-state Markov chains. We review existing methods based on gamma-coupling and rejection sampling (Murdoch and Green, Scandinavian Journal of Statistics, 1998), that are quite straightforward to understand, but require a closed form for the transition kernel and entail cumbersome algebraic manipulation. We then introduce two new methods based on random walk Metropolis, that offer the prospect of more automatic use, not least because the difficult, continuous, part of the transition mechanism can be coupled in a generic way, using a proposal distribution of convenience. One of the methods is based on a neat decomposition of any unimodal (multivariate) symmetric density into pieces that may be re-assembled to construct any translated copy of itself: that allows coupling of a continuum of Metropolis proposals to a finite set, at least for a compact state space. We discuss methods for economically coupling the subsequent accept/reject decisions. Our second new method deals with unbounded state spaces, using a trick due to W. S. Kendall of running a coupled dominating process in parallel with the sample paths of interest. The random subset of the state space below the dominating path is compact, allowing efficient coupling and coalescence. We look towards the possibility that application of such methods could become sufficiently convenient that they could become the basis for routine Bayesian computation in the foreseeable future. ",
"neighbors": [
23,
93,
95,
99,
161,
292
],
"mask": "Train"
},
{
"node_id": 760,
"label": 0,
"text": "Title: BAYESIAN STATISTICS 6, pp. 000--000 Exact sampling for Bayesian inference: towards general purpose algorithms \nAbstract: Instance-based learning methods explicitly remember all the data that they receive. They usually have no training phase, and only at prediction time do they perform computation. Then, they take a query, search the database for similar datapoints and build an on-line local model (such as a local average or local regression) with which to predict an output value. In this paper we review the advantages of instance based methods for autonomous systems, but we also note the ensuing cost: hopelessly slow computation as the database grows large. We present and evaluate a new way of structuring a database and a new algorithm for accessing it that maintains the advantages of instance-based learning. Earlier attempts to combat the cost of instance-based learning have sacrificed the explicit retention of all data, or been applicable only to instance-based predictions based on a small number of near neighbors or have had to re-introduce an explicit training phase in the form of an interpolative data structure. Our approach builds a multiresolution data structure to summarize the database of experiences at all resolutions of interest simultaneously. This permits us to query the database with the same exibility as a conventional linear search, but at greatly reduced computational cost.",
"neighbors": [
88,
686,
2428
],
"mask": "Train"
},
{
"node_id": 761,
"label": 6,
"text": "Title: Apple Tasting and Nearly One-Sided Learning \nAbstract: In the standard on-line model the learning algorithm tries to minimize the total number of mistakes made in a series of trials. On each trial the learner sees an instance, either accepts or rejects that instance, and then is told the appropriate response. We define a natural variant of this model (\"apple tasting\") where the learner gets feedback only when the instance is accepted. We use two transformations to relate the apple tasting model to an enhanced standard model where false acceptances are counted separately from false rejections. We present a strategy for trading between false acceptances and false rejections in the standard model. From one perspective this strategy is exactly optimal, including constants. We apply our results to obtain a good general purpose apple tasting algorithm as well as nearly optimal apple tasting algorithms for a variety of standard classes, such as conjunctions and disjunctions of n boolean variables. We also present and analyze a simpler transformation useful when the instances are drawn at random rather than selected by an adversary. ",
"neighbors": [
9,
40,
535
],
"mask": "Validation"
},
{
"node_id": 762,
"label": 6,
"text": "Title: Using Errors to Create Piecewise Learnable Partitions \nAbstract: In this paper we describe an algorithm which exploits the error distribution generated by a learning algorithm in order to break up the domain which is being approximated into piecewise learnable partitions. Traditionally, the error distribution has been neglected in favor of a lump error measure such as RMS. By doing this, however, we lose a lot of important information. The error distribution tells us where the algorithm is doing badly, and if there exists a \"ridge\" of errors, also tells us how to partition the space so that one part of the space will not interfere with the learning of another. The algorithm builds a variable arity k-d tree whose leaves contain the partitions. Using this tree, new points can be predicted using the correct partition by traversing the tree. We instantiate this algorithm using memory based learners and cross-validation. ",
"neighbors": [
88
],
"mask": "Train"
},
{
"node_id": 763,
"label": 2,
"text": "Title: PREENS, a Parallel Research Execution Environment for Neural Systems \nAbstract: PREENS a Parallel Research Execution Environment for Neural Systems is a distributed neurosimulator, targeted on networks of workstations and transputer systems. As current applications of neural networks often contain large amounts of data and as the neural networks involved in tasks such as vision are very large, high requirements on memory and computational resources are imposed on the target execution platforms. PREENS can be executed in a distributed environment, i.e. tools and neural network simulation programs can be running on any machine connectable via TCP/IP. Using this approach, larger tasks and more data can be examined using an efficient coarse grained parallelism. Furthermore, the design of PREENS allows for neural networks to be running on any high performance MIMD machine such as a trans-puter system. In this paper, the different features and design concepts of PREENS are discussed. These can also be used for other applications, like image processing.",
"neighbors": [
241,
747,
1879,
2355
],
"mask": "Test"
},
{
"node_id": 764,
"label": 1,
"text": "Title: GENETIC AND NON GENETIC OPERATORS IN ALECSYS \nAbstract: It is well known that standard learning classifier systems, when applied to many different domains, exhibit a number of problems: payoff oscillation, difficult to regulate interplay between the reward system and the background genetic algorithm (GA), rule chains instability, default hierarchies instability, are only a few. ALECSYS is a parallel version of a standard learning classifier system (CS), and as such suffers of these same problems. In this paper we propose some innovative solutions to some of these problems. We introduce the following original features. Mutespec, a new genetic operator used to specialize potentially useful classifiers. Energy, a quantity introduced to measure global convergence in order to apply the genetic algorithm only when the system is close to a steady state. Dynamical adjustment of the classifiers set cardinality, in order to speed up the performance phase of the algorithm. We present simulation results of experiments run in a simulated two-dimensional world in which a simple agent learns to follow a light source. ",
"neighbors": [
636,
769,
910,
1311,
1573,
1581,
2174,
2687
],
"mask": "Train"
},
{
"node_id": 765,
"label": 1,
"text": "Title: A Classifier System plays a simple board game Getting down to the Basics of Machine Learning? \nAbstract: It is well known that standard learning classifier systems, when applied to many different domains, exhibit a number of problems: payoff oscillation, difficult to regulate interplay between the reward system and the background genetic algorithm (GA), rule chains instability, default hierarchies instability, are only a few. ALECSYS is a parallel version of a standard learning classifier system (CS), and as such suffers of these same problems. In this paper we propose some innovative solutions to some of these problems. We introduce the following original features. Mutespec, a new genetic operator used to specialize potentially useful classifiers. Energy, a quantity introduced to measure global convergence in order to apply the genetic algorithm only when the system is close to a steady state. Dynamical adjustment of the classifiers set cardinality, in order to speed up the performance phase of the algorithm. We present simulation results of experiments run in a simulated two-dimensional world in which a simple agent learns to follow a light source. ",
"neighbors": [
163
],
"mask": "Validation"
},
{
"node_id": 766,
"label": 2,
"text": "Title: Keeping Neural Networks Simple by Minimizing the Description Length of the Weights \nAbstract: Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases. So during learning, it is important to keep the weights simple by penalizing the amount of information they contain. The amount of information in a weight can be controlled by adding Gaussian noise and the noise level can be adapted during learning to optimize the trade-off between the expected squared error of the network and the amount of information in the weights. We describe a method of computing the derivatives of the expected squared error and of the amount of information in the noisy weights in a network that contains a layer of non-linear hidden units. Provided the output units are linear, the exact derivatives can be computed efficiently without time-consuming Monte Carlo simulations. The idea of minimizing the amount of information that is required to communicate the weights of a neural network leads to a number of interesting schemes for encoding the weights.",
"neighbors": [
78,
157,
181,
518,
979,
2532
],
"mask": "Train"
},
{
"node_id": 767,
"label": 6,
"text": "Title: Learning to Order Things \nAbstract: There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order, given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a preference function, of the form PREF(u; v), which indicates whether it is advisable to rank u before v. New instances are then ordered so as to maximize agreements with the learned preference function. We show that the problem of finding the ordering that agrees best with a preference function is NP-complete, even under very restrictive assumptions. Nevertheless, we describe a simple greedy algorithm that is guaranteed to find a good approximation. We then discuss an on-line learning algorithm, based on the \"Hedge\" algorithm, for finding a good linear combination of ranking \"experts.\" We use the ordering algorithm combined with the on-line learning algorithm to find a combination of \"search experts,\" each of which is a domain-specific query expansion strategy for a WWW search engine, and present experimental results that demonstrate the merits of our approach. ",
"neighbors": [
255,
569
],
"mask": "Test"
},
{
"node_id": 768,
"label": 3,
"text": "Title: DYNAMIC CONDITIONAL INDEPENDENCE MODELS AND MARKOV CHAIN MONTE CARLO METHODS \nAbstract: There are many applications in which it is desirable to order rather than classify instances. Here we consider the problem of learning how to order, given feedback in the form of preference judgments, i.e., statements to the effect that one instance should be ranked ahead of another. We outline a two-stage approach in which one first learns by conventional means a preference function, of the form PREF(u; v), which indicates whether it is advisable to rank u before v. New instances are then ordered so as to maximize agreements with the learned preference function. We show that the problem of finding the ordering that agrees best with a preference function is NP-complete, even under very restrictive assumptions. Nevertheless, we describe a simple greedy algorithm that is guaranteed to find a good approximation. We then discuss an on-line learning algorithm, based on the \"Hedge\" algorithm, for finding a good linear combination of ranking \"experts.\" We use the ordering algorithm combined with the on-line learning algorithm to find a combination of \"search experts,\" each of which is a domain-specific query expansion strategy for a WWW search engine, and present experimental results that demonstrate the merits of our approach. ",
"neighbors": [
772
],
"mask": "Train"
},
{
"node_id": 769,
"label": 1,
"text": "Title: On the Relations Between Search and Evolutionary Algorithms \nAbstract: Technical Report: CSRP-96-7 March 1996 Abstract Evolutionary algorithms are powerful techniques for optimisation whose operation principles are inspired by natural selection and genetics. In this paper we discuss the relation between evolutionary techniques, numerical and classical search methods and we show that all these methods are instances of a single more general search strategy, which we call the `evolutionary computation cookbook'. By combining the features of classical and evolutionary methods in different ways new instances of this general strategy can be generated, i.e. new evolutionary (or classical) algorithms can be designed. One such algorithm, GA fl , is described.",
"neighbors": [
163,
764
],
"mask": "Train"
},
{
"node_id": 770,
"label": 2,
"text": "Title: A Connectionist Symbol Manipulator That Discovers the Structure of Context-Free Languages \nAbstract: We present a neural net architecture that can discover hierarchical and recursive structure in symbol strings. To detect structure at multiple levels, the architecture has the capability of reducing symbols substrings to single symbols, and makes use of an external stack memory. In terms of formal languages, the architecture can learn to parse strings in an LR(0) context-free grammar. Given training sets of positive and negative exemplars, the architecture has been trained to recognize many different grammars. The architecture has only one layer of modifiable weights, allowing for a Many cognitive domains involve complex sequences that contain hierarchical or recursive structure, e.g., music, natural language parsing, event perception. To illustrate, \"the spider that ate the hairy fly\" is a noun phrase containing the embedded noun phrase \"the hairy fly.\" Understanding such multilevel structures requires forming reduced descriptions (Hinton, 1988) in which a string of symbols or states (\"the hairy fly\") is reduced to a single symbolic entity (a noun phrase). We present a neural net architecture that learns to encode the structure of symbol strings via such reduction transformations. The difficult problem of extracting multilevel structure from complex, extended sequences has been studied by Mozer (1992), Ring (1993), Rohwer (1990), and Schmidhuber (1992), among others. While these previous efforts have made some straightforward interpretation of its behavior.",
"neighbors": [
350,
730,
1285
],
"mask": "Test"
},
{
"node_id": 771,
"label": 2,
"text": "Title: SELF-ORGANIZING PROCESS BASED ON LATERAL INHIBITION AND SYNAPTIC RESOURCE REDISTRIBUTION \nAbstract: Self-organizing feature maps are usually implemented by abstracting the low-level neural and parallel distributed processes. An external supervisor finds the unit whose weight vector is closest in Euclidian distance to the input vector and determines the neighborhood for weight adaptation. The weights are changed proportional to the Euclidian distance. In a biologically more plausible implementation, similarity is measured by a scalar product, neighborhood is selected through lateral inhibition and weights are changed by redistributing synaptic resources. The resulting self-organizing process is quite similar to the abstract case. However, the process is somewhat hampered by boundary effects and the parameters need to be carefully evolved. It is also necessary to add a redundant dimension to the input vectors.",
"neighbors": [
72,
104,
202,
745,
747
],
"mask": "Validation"
},
{
"node_id": 772,
"label": 3,
"text": "Title: [12] J. Whittaker. Graphical Models in Applied Mathematical Multivariate Statis- \nAbstract: Self-organizing feature maps are usually implemented by abstracting the low-level neural and parallel distributed processes. An external supervisor finds the unit whose weight vector is closest in Euclidian distance to the input vector and determines the neighborhood for weight adaptation. The weights are changed proportional to the Euclidian distance. In a biologically more plausible implementation, similarity is measured by a scalar product, neighborhood is selected through lateral inhibition and weights are changed by redistributing synaptic resources. The resulting self-organizing process is quite similar to the abstract case. However, the process is somewhat hampered by boundary effects and the parameters need to be carefully evolved. It is also necessary to add a redundant dimension to the input vectors.",
"neighbors": [
51,
312,
742,
768,
1147,
1240,
1241,
1502,
2076,
2166,
2167
],
"mask": "Train"
},
{
"node_id": 773,
"label": 4,
"text": "Title: Reinforcement Learning with Imitation in Heterogeneous Multi-Agent Systems \nAbstract: The application of decision making and learning algorithms to multi-agent systems presents many interestingresearch challenges and opportunities. Among these is the ability for agents to learn how to act by observing or imitating other agents. We describe an algorithm, the IQ-algorithm, that integrates imitation with Q-learning. Roughly, a Q-learner uses the observations it has made of an expert agent to bias its exploration in promising directions. This algorithm goes beyond previous work in this direction by relaxing the oft-made assumptions that the learner (observer) and the expert (observed agent) share the same objectives and abilities. Our preliminary experiments demonstrate significant transfer between agents using the IQ-model and in many cases reductions in training time. ",
"neighbors": [
565,
656,
1643,
1687
],
"mask": "Train"
},
{
"node_id": 774,
"label": 2,
"text": "Title: Face Recognition: A Hybrid Neural Network Approach \nAbstract: Faces represent complex, multidimensional, meaningful visual stimuli and developing a computational model for face recognition is difficult (Turk and Pentland, 1991). We present a hybrid neural network solution which compares favorably with other methods. The system combines local image sampling, a self-organizing map neural network, and a convolutional neural network. The self-organizing map provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides for partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the self-organizing map, and a multilayer perceptron in place of the convolutional network. The Karhunen-Loeve transform performs almost as well (5.3% error versus 3.8%). The multilayer perceptron performs very poorly (40% error versus 3.8%). The method is capable of rapid classification, requires only fast, approximate normalization and preprocessing, and consistently exhibits better classification performance than the eigenfaces approach (Turk and Pentland, 1991) on the database considered as the number of images per person in the training database is varied from 1 to 5. With 5 images per person the proposed method and eigenfaces result in 3.8% and 10.5% error respectively. The recognizer provides a measure of confidence in its output and classification error approaches zero when rejecting as few as 10% of the examples. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze computational complexity and discuss how new classes could be added to the trained recognizer. ",
"neighbors": [
331,
1493,
2707
],
"mask": "Test"
},
{
"node_id": 775,
"label": 4,
"text": "Title: Asynchronous Modified Policy Iteration with Single-sided Updates \nAbstract: We present a new algorithm for solving Markov decision problems that extends the modified policy iteration algorithm of Puterman and Shin [6] in two important ways: 1) The new algorithm is asynchronous in that it allows the values of states to be updated in arbitrary order, and it does not need to consider all actions in each state while updating the policy. 2) The new algorithm converges under more general initial conditions than those required by modified policy iteration. Specifically, the set of initial policy-value function pairs for which our algorithm guarantees convergence is a strict superset of the set for which modified policy iteration converges. This generalization was obtained by making a simple and easily implementable change to the policy evaluation operator used in updating the value function. Both the asynchronous nature of our algorithm and its convergence under more general conditions expand the range of problems to which our algorithm can be applied. ",
"neighbors": [
162,
875,
1459
],
"mask": "Train"
},
{
"node_id": 776,
"label": 3,
"text": "Title: CAUSATION, ACTION, AND COUNTERFACTUALS \nAbstract: We present a new algorithm for solving Markov decision problems that extends the modified policy iteration algorithm of Puterman and Shin [6] in two important ways: 1) The new algorithm is asynchronous in that it allows the values of states to be updated in arbitrary order, and it does not need to consider all actions in each state while updating the policy. 2) The new algorithm converges under more general initial conditions than those required by modified policy iteration. Specifically, the set of initial policy-value function pairs for which our algorithm guarantees convergence is a strict superset of the set for which modified policy iteration converges. This generalization was obtained by making a simple and easily implementable change to the policy evaluation operator used in updating the value function. Both the asynchronous nature of our algorithm and its convergence under more general conditions expand the range of problems to which our algorithm can be applied. ",
"neighbors": [
342,
398,
1326,
2088,
2166
],
"mask": "Train"
},
{
"node_id": 777,
"label": 3,
"text": "Title: MARKOV CHAIN MONTE CARLO SAMPLING FOR EVALUATING MULTIDIMENSIONAL INTEGRALS WITH APPLICATION TO BAYESIAN COMPUTATION \nAbstract: Recently, Markov chain Monte Carlo (MCMC) sampling methods have become widely used for determining properties of a posterior distribution. Alternative to the Gibbs sampler, we elaborate on the Hit-and-Run sampler and its generalization, a black-box sampling scheme, to generate a time-reversible Markov chain from a posterior distribution. The proof of convergence and its applications to Bayesian computation with constrained parameter spaces are provided and comparisons with the other MCMC samplers are made. In addition, we propose an importance weighted marginal density estimation (IWMDE) method. An IWMDE is obtained by averaging many dependent observations of the ratio of the full joint posterior densities multiplied by a weighting conditional density w. The asymptotic properties for the IWMDE and the guidelines for choosing a weighting conditional density w are also considered. The generalized version of IWMDE for estimating marginal posterior densities when the full joint posterior density contains analytically intractable normalizing constants is developed. Furthermore, we develop Monte Carlo methods based on Kullback-Leibler divergences for comparing marginal posterior density estimators. This article is a summary of the author's Ph.D. thesis and it was presented in the Savage Award session. ",
"neighbors": [
533
],
"mask": "Validation"
},
{
"node_id": 778,
"label": 6,
"text": "Title: On the Sample Complexity of Noise-Tolerant Learning \nAbstract: In this paper, we further characterize the complexity of noise-tolerant learning in the PAC model. Specifically, we show a general lower bound of log(1=ffi) on the number of examples required for PAC learning in the presence of classification noise. Combined with a result of Simon, we effectively show that the sample complexity of PAC learning in the presence of classification noise is VC(F) \"(12) 2 : Furthermore, we demonstrate the optimality of the general lower bound by providing a noise-tolerant learning algorithm for the class of symmetric Boolean functions which uses a sample size within a constant factor of this bound. Finally, we note that our general lower bound compares favorably with various general upper bounds for PAC learning in the presence of classification noise. ",
"neighbors": [
25,
56,
109
],
"mask": "Train"
},
{
"node_id": 779,
"label": 2,
"text": "Title: Monte Carlo Comparison of Non-hierarchical Unsupervised Classifiers \nAbstract: In this paper, we further characterize the complexity of noise-tolerant learning in the PAC model. Specifically, we show a general lower bound of log(1=ffi) on the number of examples required for PAC learning in the presence of classification noise. Combined with a result of Simon, we effectively show that the sample complexity of PAC learning in the presence of classification noise is VC(F) \"(12) 2 : Furthermore, we demonstrate the optimality of the general lower bound by providing a noise-tolerant learning algorithm for the class of symmetric Boolean functions which uses a sample size within a constant factor of this bound. Finally, we note that our general lower bound compares favorably with various general upper bounds for PAC learning in the presence of classification noise. ",
"neighbors": [
542,
638,
684,
747,
1203
],
"mask": "Validation"
},
{
"node_id": 780,
"label": 1,
"text": "Title: Between-host evolution of mutation-rate and within-host evolution of virulence. \nAbstract: It has been recently realized that parasite virulence (the harm caused by parasites to their hosts) can be an adaptive trait. Selection for a particular level of virulence can happen either at at the level of between-host tradeoffs or as a result of short-sighted within-host competition. This paper describes some simulations which study the effect that modifier genes for changes in mutation rate have on suppressing this short-sighted development of virulence, and investigates the interaction between this and a simplified model of im mune clearance.",
"neighbors": [
1139,
1598
],
"mask": "Test"
},
{
"node_id": 781,
"label": 1,
"text": "Title: Evolving Visual Routines Architecture and Planning, \nAbstract: It has been recently realized that parasite virulence (the harm caused by parasites to their hosts) can be an adaptive trait. Selection for a particular level of virulence can happen either at at the level of between-host tradeoffs or as a result of short-sighted within-host competition. This paper describes some simulations which study the effect that modifier genes for changes in mutation rate have on suppressing this short-sighted development of virulence, and investigates the interaction between this and a simplified model of im mune clearance.",
"neighbors": [
163,
846,
1184,
1533
],
"mask": "Train"
},
{
"node_id": 782,
"label": 2,
"text": "Title: on Qualitative Reasoning about Physical Systems Deriving Monotonic Function Envelopes from Observations \nAbstract: Much work in qualitative physics involves constructing models of physical systems using functional descriptions such as \"flow monotonically increases with pressure.\" Semiquantitative methods improve model precision by adding numerical envelopes to these monotonic functions. Ad hoc methods are normally used to determine these envelopes. This paper describes a systematic method for computing a bounding envelope of a multivariate monotonic function given a stream of data. The derived envelope is computed by determining a simultaneous confidence band for a special neural network which is guaranteed to produce only monotonic functions. By composing these envelopes, more complex systems can be simulated using semiquantitative methods. ",
"neighbors": [
1532
],
"mask": "Train"
},
{
"node_id": 783,
"label": 0,
"text": "Title: Resolving PP attachment Ambiguities with Memory-Based Learning \nAbstract: In this paper we describe the application of Memory-Based Learning to the problem of Prepositional Phrase attachment disambiguation. We compare Memory-Based Learning, which stores examples in memory and generalizes by using intelligent similarity metrics, with a number of recently proposed statistical methods that are well suited to large numbers of features. We evaluate our methods on a common benchmark dataset and show that our method compares favorably to previous methods, and is well-suited to incorporating various unconventional representations of word patterns such as value difference metrics and Lexical Space.",
"neighbors": [
1155,
1328,
1407,
1812
],
"mask": "Validation"
},
{
"node_id": 784,
"label": 2,
"text": "Title: Studies of Neurological Transmission Analysis using Hierarchical Bayesian Mixture Models \nAbstract: Hierarchically structured mixture models are studied in the context of data analysis and inference on neural synaptic transmission characteristics in mammalian, and other, central nervous systems. Mixture structures arise due to uncertainties about the stochastic mechanisms governing the responses to electro-chemical stimulation of individual neuro-transmitter release sites at nerve junctions. Models attempt to capture scientific features such as the sensitivity of individual synaptic transmission sites to electro-chemical stimuli, and the extent of their electro-chemical responses when stimulated. This is done via suitably structured classes of prior distributions for parameters describing these features. Such priors may be structured to permit assessment of currently topical scientific hypotheses about fundamental neural function. Posterior analysis is implemented via stochastic simulation. Several data analyses are described to illustrate the approach, with resulting neurophysiological insights in some recently generated experimental contexts. Further developments and open questions, both neurophysiological and statistical, are noted. Research partially supported by the NSF under grants DMS-9024793, DMS-9305699 and DMS-9304250. This work represents part of a collaborative project with Dr Dennis A Turner, of Duke University Medical Center and Durham VA. Data was provided by Dr Turner and by Dr Howard V Wheal of Southampton University. A slightly revised version of this paper is published in the Journal of the American Statistical Association (vol 92, pp587-606), under the modified title Hierarchical Mixture Models in Neurological Transmission Analysis. The author is the recipient of the 1997 Mitchell Prize for \"the Bayesian analysis of a substantive and concrete problem\" based on the work reported in this paper. ",
"neighbors": [
845,
917,
1338,
1613
],
"mask": "Validation"
},
{
"node_id": 785,
"label": 0,
"text": "Title: RAPID DEVELOPMENT OF NLP MODULES WITH MEMORY-BASED LEARNING \nAbstract: The need for software modules performing natural language processing (NLP) tasks is growing. These modules should perform efficiently and accurately, while at the same time rapid development is often mandatory. Recent work has indicated that machine learning techniques in general, and memory-based learning (MBL) in particular, offer the tools to meet both ends. We present examples of modules trained with MBL on three NLP tasks: (i) text-to-speech conversion, (ii) part-of-speech tagging, and (iii) phrase chunking. We demonstrate that the three modules display high generalization accuracy, and argue why MBL is applicable similarly well to a large class of other NLP tasks. ",
"neighbors": [
862,
1155,
1328,
1513,
1812
],
"mask": "Train"
},
{
"node_id": 786,
"label": 6,
"text": "Title: Learning Boolean Read-Once Formulas over Generalized Bases \nAbstract: A read-once formula is one in which each variable appears on at most a single input. Angluin, Hellerstein, and Karpinski give a polynomial time algorithm that uses membership and equivalence queries to identify exactly read-once boolean formulas over the basis fAND; OR; NOTg [AHK93]. The goal of this work is to consider natural generalizations of these gates, in order to develop exact identification algorithms for more powerful classes of formulas. We show that read-once formulas over a basis of arbitrary boolean functions of constant fan-in k or less (i.e. any f : f0; 1g 1ck ! f0; 1g) are exactly identifiable in polynomial time using membership and equivalence queries. We show that read-once formulas over the basis of arbitrary symmetric boolean functions are also exactly identifiable in polynomial time in this model. Given standard cryptographic assumptions, there is no polynomial time identification algorithm for read-twice formulas over either of these bases using membership and equivalence queries. We further show that for any basis class B meeting certain technical conditions, any polynomial time identification algorithm for read-once formulas over B can be extended to a polynomial time identification algorithm for read-once formulas over the union of B and the arbitrary functions of fan-in k or less. As a result, read-once formulas over the union of arbitrary symmetric and arbitrary constant fan-in gates are also exactly identifiable in polynomial time using membership and equivalence queries. ",
"neighbors": [
791,
1003,
1004,
1363
],
"mask": "Test"
},
{
"node_id": 787,
"label": 3,
"text": "Title: Hidden Markov decision trees \nAbstract: We study a time series model that can be viewed as a decision tree with Markov temporal structure. The model is intractable for exact calculations, thus we utilize variational approximations. We consider three different distributions for the approximation: one in which the Markov calculations are performed exactly and the layers of the decision tree are decoupled, one in which the decision tree calculations are performed exactly and the time steps of the Markov chain are decoupled, and one in which a Viterbi-like assumption is made to pick out a single most likely state sequence. We present simulation results for artificial data and the Bach chorales. Accepted for oral presentation at NIPS*96. ",
"neighbors": [
74,
1287,
1288,
1437
],
"mask": "Validation"
},
{
"node_id": 788,
"label": 3,
"text": "Title: Stochastic simulation algorithms for dynamic probabilistic networks \nAbstract: Stochastic simulation algorithms such as likelihood weighting often give fast, accurate approximations to posterior probabilities in probabilistic networks, and are the methods of choice for very large networks. Unfortunately, the special characteristics of dynamic probabilistic networks (DPNs), which are used to represent stochastic temporal processes, mean that standard simulation algorithms perform very poorly. In essence, the simulation trials diverge further and further from reality as the process is observed over time. In this paper, we present simulation algorithms that use the evidence observed at each time step to push the set of trials back towards reality. The first algorithm, \"evidence reversal\" (ER) restructures each time slice of the DPN so that the evidence nodes for the slice become ancestors of the state variables. The second algorithm, called \"survival of the fittest\" sampling (SOF), \"repopulates\" the set of trials at each time step using a stochastic reproduction rate weighted by the likelihood of the evidence according to each trial. We compare the performance of each algorithm with likelihood weighting on the original network, and also investigate the benefits of combining the ER and SOF methods. The ER/SOF combination appears to maintain bounded error independent of the number of time steps in the simulation.",
"neighbors": [
945,
1268,
1414,
2323,
2341,
2425
],
"mask": "Train"
},
{
"node_id": 789,
"label": 1,
"text": "Title: Stochastic Random or probabilistic but with some direction. For example the arrival of people at\nAbstract: Simulated Annealing Search technique where a single trial solution is modified at random. An energy is defined which represents how good the solution is. The goal is to find the best solution by minimising the energy. Changes which lead to a lower energy are always accepted; an increase is probabilistically accepted. The probability is given by exp(E=k B T ). Where E is the change in energy, k B is a constant and T is the Temperature. Initially the temperature is high corresponding to a liquid or molten state where large changes are possible and it is progressively reduced using a cooling schedule so allowing smaller changes until the system solidifies at a low energy solution. ",
"neighbors": [
415,
1206,
1409
],
"mask": "Train"
},
{
"node_id": 790,
"label": 6,
"text": "Title: Learning with Rare Cases and Small Disjuncts \nAbstract: Systems that learn from examples often create a disjunctive concept definition. Small disjuncts are those disjuncts which cover only a few training examples. The problem with small disjuncts is that they are more error prone than large disjuncts. This paper investigates the reasons why small disjuncts are more error prone than large disjuncts. It shows that when there are rare cases within a domain, then factors such as attribute noise, missing attributes, class noise and training set size can result in small disjuncts being more error prone than large disjuncts and in rare cases being more error prone than common cases. This paper also assesses the impact that these error prone small disjuncts and rare cases have on inductive learning (i.e., on error rate). One key conclusion is that when low levels of attribute noise are applied only to the training set (the ability to learn the correct concept is being evaluated), rare cases within a domain are primarily responsible for making learning difficult.",
"neighbors": [
1234,
1510,
2057
],
"mask": "Validation"
},
{
"node_id": 791,
"label": 6,
"text": "Title: Asking Questions to Minimize Errors \nAbstract: A number of efficient learning algorithms achieve exact identification of an unknown function from some class using membership and equivalence queries. Using a standard transformation such algorithms can easily be converted to on-line learning algorithms that use membership queries. Under such a transformation the number of equivalence queries made by the query algorithm directly corresponds to the number of mistakes made by the on-line algorithm. In this paper we consider several of the natural classes known to be learnable in this setting, and investigate the minimum number of equivalence queries with accompanying counterexamples (or equivalently the minimum number of mistakes in the on-line model) that can be made by a learning algorithm that makes a polynomial number of membership queries and uses polynomial computation time. We are able both to reduce the number of equivalence queries used by the previous algorithms and often to prove matching lower bounds. As an example, consider the class of DNF formulas over n variables with at most k = O(log n) terms. Previously, the algorithm of Blum and Rudich [BR92] provided the best known upper bound of 2 O(k) log n for the minimum number of equivalence queries needed for exact identification. We greatly improve on this upper bound showing that exactly k counterexamples are needed if the learner knows k a priori and exactly k +1 counterexamples are needed if the learner does not know k a priori. This exactly matches known lower bounds [BC92]. For many of our results we obtain a complete characterization of the tradeoff between the number of membership and equivalence queries needed for exact identification. The classes we consider here are monotone DNF formulas, Horn sentences, O(log n)-term DNF formulas, read-k sat-j DNF formulas, read-once formulas over various bases, and deterministic finite automata. ",
"neighbors": [
786,
1003,
1004,
1560,
1661
],
"mask": "Train"
},
{
"node_id": 792,
"label": 6,
"text": "Title: Learning Unions of Rectangles with Queries \nAbstract: A number of efficient learning algorithms achieve exact identification of an unknown function from some class using membership and equivalence queries. Using a standard transformation such algorithms can easily be converted to on-line learning algorithms that use membership queries. Under such a transformation the number of equivalence queries made by the query algorithm directly corresponds to the number of mistakes made by the on-line algorithm. In this paper we consider several of the natural classes known to be learnable in this setting, and investigate the minimum number of equivalence queries with accompanying counterexamples (or equivalently the minimum number of mistakes in the on-line model) that can be made by a learning algorithm that makes a polynomial number of membership queries and uses polynomial computation time. We are able both to reduce the number of equivalence queries used by the previous algorithms and often to prove matching lower bounds. As an example, consider the class of DNF formulas over n variables with at most k = O(log n) terms. Previously, the algorithm of Blum and Rudich [BR92] provided the best known upper bound of 2 O(k) log n for the minimum number of equivalence queries needed for exact identification. We greatly improve on this upper bound showing that exactly k counterexamples are needed if the learner knows k a priori and exactly k +1 counterexamples are needed if the learner does not know k a priori. This exactly matches known lower bounds [BC92]. For many of our results we obtain a complete characterization of the tradeoff between the number of membership and equivalence queries needed for exact identification. The classes we consider here are monotone DNF formulas, Horn sentences, O(log n)-term DNF formulas, read-k sat-j DNF formulas, read-once formulas over various bases, and deterministic finite automata. ",
"neighbors": [
798,
1095
],
"mask": "Train"
},
{
"node_id": 793,
"label": 1,
"text": "Title: A Survey of Evolution Strategies \nAbstract: ",
"neighbors": [
42,
163,
262,
856,
943,
959,
1069,
1070,
1110,
1127,
1139,
1205,
1249,
1330,
1333,
1334,
1380,
1455,
1467,
1691,
1694,
1715,
1734
],
"mask": "Train"
},
{
"node_id": 794,
"label": 6,
"text": "Title: Learning rules with local exceptions \nAbstract: We present a learning algorithm for rule-based concept representations called ripple-down rule sets. Ripple-down rule sets allow us to deal with the exceptions for each rule separately by introducing exception rules, exception rules for each exception rule etc. up to a constant depth. These local exception rules are in contrast to decision lists, in which the exception rules must be placed into a global ordering of the rules. The localization of exceptions makes it possible to represent concepts that have no decision list representation. On the other hand, decision lists with a constant number of alternations between rules for different classes can be represented by constant depth ripple-down rule sets with only a polynomial increase in size. Our algorithm is an Occam algorithm for constant depth ripple-down rule sets and, hence, a PAC learning algorithm. It is based on repeatedly applying the greedy approximation method for the weighted set cover problem to find good exception rule sets.",
"neighbors": [
795
],
"mask": "Train"
},
{
"node_id": 795,
"label": 6,
"text": "Title: Learning Hierarchical Rule Sets \nAbstract: We present an algorithm for learning sets of rules that are organized into up to k levels. Each level can contain an arbitrary number of rules \"if c then l\" where l is the class associated to the level and c is a concept from a given class of basic concepts. The rules of higher levels have precedence over the rules of lower levels and can be used to represent exceptions. As basic concepts we can use Boolean attributes in the infinite attribute space model, or certain concepts defined in terms of substrings. Given a sample of m examples, the algorithm runs in polynomial time and produces a consistent concept representation of size O((log m) k n k ), where n is the size of the smallest consistent representation with k levels of rules. This implies that the algorithm learns in the PAC model. The algorithm repeatedly applies the greedy heuristics for weighted set cover. The weights are obtained from approximate solutions to previous set cover problems.",
"neighbors": [
794,
865
],
"mask": "Train"
},
{
"node_id": 796,
"label": 2,
"text": "Title: Characterizing Carbon Dynamics in a Northern Forest Using SIR-C/X-SAR Imagery Characterizing Carbon Dynamics in a\nAbstract: 1 ABSTRACT ",
"neighbors": [
1629
],
"mask": "Train"
},
{
"node_id": 797,
"label": 2,
"text": "Title: Regularities in a Random Mapping from Orthography to Semantics \nAbstract: In this paper we investigate representational and methodological issues in a attractor network model of the mapping from orthography to semantics based on [Plaut, 1995]. We find that, contrary to psycholinguistic studies, the response time to concrete words (represented by more 1 bits in the output pattern) is slower than for abstract words. This model also predicts that response times to words in a dense semantic neighborhood will be faster than words which have few semantically similar neighbors in the language. This is conceptually consistent with the neighborhood effect seen in the mapping from orthography to phonology [Seidenberg & McClelland, 1989, Plaut et al., 1996] in that patterns with many neighbors are faster in both pathways, but since there is no regularity in the random mapping used here, it is clear that the cause of this effect is different than that of previous experiments. We also report a rather distressing finding. Reaction time in this model is measured by the time it takes the network to settle after being presented with a new input. When the criterion used to determine when the network is settled is changed to include testing of the hidden units, each of the results reported above change the direction of effect abstract words are now slower, as are words in dense semantic neighborhoods. Since there are independent reasons to exclude hidden units from the stopping criterion, and this is what is done in common practice, we believe this phenomenon to be of interest mostly to neural network practitioners. However, it does provide some insight into the interaction between the hidden and output units during settling. ",
"neighbors": [
1645
],
"mask": "Test"
},
{
"node_id": 798,
"label": 6,
"text": "Title: Composite Geometric Concepts and Polynomial Predictability \nAbstract: In this paper we investigate representational and methodological issues in a attractor network model of the mapping from orthography to semantics based on [Plaut, 1995]. We find that, contrary to psycholinguistic studies, the response time to concrete words (represented by more 1 bits in the output pattern) is slower than for abstract words. This model also predicts that response times to words in a dense semantic neighborhood will be faster than words which have few semantically similar neighbors in the language. This is conceptually consistent with the neighborhood effect seen in the mapping from orthography to phonology [Seidenberg & McClelland, 1989, Plaut et al., 1996] in that patterns with many neighbors are faster in both pathways, but since there is no regularity in the random mapping used here, it is clear that the cause of this effect is different than that of previous experiments. We also report a rather distressing finding. Reaction time in this model is measured by the time it takes the network to settle after being presented with a new input. When the criterion used to determine when the network is settled is changed to include testing of the hidden units, each of the results reported above change the direction of effect abstract words are now slower, as are words in dense semantic neighborhoods. Since there are independent reasons to exclude hidden units from the stopping criterion, and this is what is done in common practice, we believe this phenomenon to be of interest mostly to neural network practitioners. However, it does provide some insight into the interaction between the hidden and output units during settling. ",
"neighbors": [
507,
792,
1095,
1105,
1433
],
"mask": "Validation"
},
{
"node_id": 799,
"label": 0,
"text": "Title: A utility-based approach to learning in a mixed Case-Based and Model-Based Reasoning architecture \nAbstract: Case-based reasoning (CBR) can be used as a form of \"caching\" solved problems to speedup later problem solving. Using \"cached\" cases brings additional costs with it due to retrieval time, case adaptation time and also storage space. Simply storing all cases will result in a situation in which retrieving and trying to adapt old cases will take more time (on average) than not caching at all. This means that caching must be applied selectively to build a case memory that is actually useful. This is a form of the utility problem [4, 2]. The approach taken here is to construct a \"cost model\" of a system that can be used to predict the effect of changes to the system. In this paper we describe the utility problem associated with \"caching\" cases and the construction of a \"cost model\". We present experimental results that demonstrate that the model can be used to predict the effect of certain changes to the case memory.",
"neighbors": [
1122,
1699,
1706,
2656
],
"mask": "Train"
},
{
"node_id": 800,
"label": 1,
"text": "Title: Vector Quantizer Design Using Genetic Algorithms \nAbstract: A Genetic Algorithmic (GA) approach to vector quantizer design that combines the conventional Generalized Lloyd Algorithm (GLA) [6] is presented. We refer to this hybrid as the Genetic Generalized Lloyd Algorithm (GGLA). It works briefly as follows: A finite number of codebooks, called chromosomes, are selected. Each codebook undergoes iterative cycles of reproduction. We perform experiments with various alternative design choices using Gaussian-Markov processes, speech, and image as source data and signal-to-noise ratio (SNR) as the performance measure. In most cases, the GGLA showed performance improvements with respect to the GLA. We also compare our results with the Zador-Gersho formula [2, 9]. ",
"neighbors": [
163,
1136
],
"mask": "Test"
},
{
"node_id": 801,
"label": 0,
"text": "Title: Massively Parallel Support for Case-based Planning \nAbstract: In case-based planning (CBP), previously generated plans are stored as cases in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over planning from scratch (generative planning), thus offering a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory that requires significant domain engineering and complex memory indexing schemes to enable efficient case retrieval. In contrast, our CBP system, CaPER, is based on a massively parallel frame-based AI language and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large casebases can be used; and memory can be probed in numerous alternate ways, allowing more specific retrieval of stored plans that better fit a target problem with less adaptation. fl Preliminary version of an article appearing in IEEE Expert, February 1994, pp. 8-14. This paper is an extended version of [1]. ",
"neighbors": [
313,
1475,
1542
],
"mask": "Train"
},
{
"node_id": 802,
"label": 3,
"text": "Title: Double Censoring: Characterization and Computation of the Nonparametric Maximum Likelihood Estimator \nAbstract: In case-based planning (CBP), previously generated plans are stored as cases in memory and can be reused to solve similar planning problems in the future. CBP can save considerable time over planning from scratch (generative planning), thus offering a potential (heuristic) mechanism for handling intractable problems. One drawback of CBP systems has been the need for a highly structured memory that requires significant domain engineering and complex memory indexing schemes to enable efficient case retrieval. In contrast, our CBP system, CaPER, is based on a massively parallel frame-based AI language and can do extremely fast retrieval of complex cases from a large, unindexed memory. The ability to do fast, frequent retrievals has many advantages: indexing is unnecessary; very large casebases can be used; and memory can be probed in numerous alternate ways, allowing more specific retrieval of stored plans that better fit a target problem with less adaptation. fl Preliminary version of an article appearing in IEEE Expert, February 1994, pp. 8-14. This paper is an extended version of [1]. ",
"neighbors": [
973,
1168
],
"mask": "Test"
},
{
"node_id": 803,
"label": 1,
"text": "Title: Optimal and Asymptotically Optimal Equi-partition of Rectangular Domains via Stripe Decomposition \nAbstract: We present an efficient method for assigning any number of processors to tasks associated with the cells of a rectangular uniform grid. Load balancing equi-partition constraints are observed while approximately minimizing the total perimeter of the partition, which corresponds to the amount of interprocessor communication. This method is based upon decomposition of the grid into stripes of \"optimal\" height. We prove that under some mild assumptions, as the problem size grows large in all parameters, the error bound associated with this feasible solution approaches zero. We also present computational results from a high level parallel Genetic Algorithm that utilizes this method, and make comparisons with other methods. On a network of workstations, our algorithm solves within minutes instances of the problem that would require one billion binary variables in a Quadratic Assignment formulation.",
"neighbors": [
53,
357,
1439,
1563
],
"mask": "Train"
},
{
"node_id": 804,
"label": 4,
"text": "Title: Exploration Bonuses and Dual Control \nAbstract: Finding the Bayesian balance between exploration and exploitation in adaptive optimal control is in general intractable. This paper shows how to compute suboptimal estimates based on a certainty equivalence approximation arising from a form of dual control. This systematizes and extends existing uses of exploration bonuses in reinforcement learning (Sutton, 1990). The approach has two components: a statistical model of uncertainty in the world and a way of turning this into exploratory behaviour. ",
"neighbors": [
1459,
1585,
1664,
1697
],
"mask": "Test"
},
{
"node_id": 805,
"label": 2,
"text": "Title: Critical Points for Least-Squares Problems Involving Certain Analytic Functions, with Applications to Sigmoidal Nets \nAbstract: This paper deals with nonlinear least-squares problems involving the fitting to data of parameterized analytic functions. For generic regression data, a general result establishes the countability, and under stronger assumptions finiteness, of the set of functions giving rise to critical points of the quadratic loss function. In the special case of what are usually called \"single-hidden layer neural networks,\" which are built upon the standard sigmoidal activation tanh(x) (or equivalently (1 + e x ) 1 ), a rough upper bound for this cardinality is provided as well.",
"neighbors": [
930,
990
],
"mask": "Train"
},
{
"node_id": 806,
"label": 0,
"text": "Title: The Role of Generic Models in Conceptual Change \nAbstract: 1 This research was funded in part by NSF Grant No. IRI-92-10925 and in part by ONR Grant No. N00014-92-J-1234. We thank John Clement for the use of his protocol transcript, James Greeno for his contribution to developing our constructive modeling interpretation of it, and Ryan Tweney for his helpful comments Todd W. Griffith, Nancy J. Nersessian, and Ashok Goel Abstract We hypothesize generic models to be central in conceptual change in science. This hypothesis has its origins in two theoretical sources. The first source, constructive modeling, derives from a philosophical theory that synthesizes analyses of historical conceptual changes in science with investigations of reasoning and representation in cognitive psychology. The theory of constructive modeling posits generic mental models as productive in conceptual change. The second source, adaptive modeling, derives from a computational theory of creative design. Both theories posit situation independent domain abstractions, i.e. generic models. Using a constructive modeling interpretation of the reasoning exhibited in protocols collected by John Clement (1989) of a problem solving session involving conceptual change, we employ the resources of the theory of adaptive modeling to develop a new computational model, ToRQUE. Here we describe a piece of our analysis of the protocol to illustrate how our synthesis of the two theories is being used to develop a system for articulating and testing ToRQUE. The results of our research show how generic modeling plays a central role in conceptual change. They also demonstrate how such an interdisciplinary synthesis can provide significant insights into scientific reasoning. ",
"neighbors": [
1121,
1138,
1354
],
"mask": "Train"
},
{
"node_id": 807,
"label": 4,
"text": "Title: Designing Neural Networks for Adaptive Control \nAbstract: This paper discusses the design of neural networks to solve specific problems of adaptive control. In particular, it investigates the influence of typical problems arising in real-world control tasks as well as techniques for their solution that exist in the framework of neurocontrol. Based on this investigation, a systematic design method is developed. The method is exemplified for the development of an adaptive force controller for a robot manipulator. ",
"neighbors": [
294,
1438,
1672
],
"mask": "Test"
},
{
"node_id": 808,
"label": 2,
"text": "Title: Unsupervised Discrimination of Clustered Data via Optimization of Binary Information Gain \nAbstract: We present the information-theoretic derivation of a learning algorithm that clusters unlabelled data with linear discriminants. In contrast to methods that try to preserve information about the input patterns, we maximize the information gained from observing the output of robust binary discriminators implemented with sigmoid nodes. We derive a local weight adaptation rule via gradient ascent in this objective, demonstrate its dynamics on some simple data sets, relate our approach to previous work and suggest directions in which it may be extended.",
"neighbors": [
359,
731,
863,
1320,
1342,
1450
],
"mask": "Validation"
},
{
"node_id": 809,
"label": 2,
"text": "Title: A Self-Adjusting Dynamic Logic Module \nAbstract: This paper presents an ASOCS (Adaptive Self-Organizing Concurrent System) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. This paper focuses on Adaptive Algorithm 2 (AA2) and details its architecture and learning algorithm. AA2 has significant memory and knowledge maintenance advantages over previous ASOCS models. An ASOCS can operate in either a data processing mode or a learning mode. During learning mode, the ASOCS is given a new rule expressed as a boolean conjunction. The AA2 learning algorithm incorporates the new rule in a distributed fashion in a short, bounded time. During data processing mode, the ASOCS acts as a parallel hardware circuit. ",
"neighbors": [
297,
812,
814,
908,
919,
1044,
1080,
1129,
1190,
1222,
1229,
1365,
1615,
1639
],
"mask": "Test"
},
{
"node_id": 810,
"label": 2,
"text": "Title: MIXED MEMORY MARKOV MODELS FOR TIME SERIES ANALYSIS \nAbstract: This paper presents a method for analyzing coupled time series using Markov models in a domain where the state space is immense. To make the parameter estimation tractable, the large state space is represented as the Cartesian product of smaller state spaces, a paradigm known as factorial Markov models. The transition matrix for this model is represented as a mixture of the transition matrices of the underlying dynamical processes. This formulation is know as mixed memory Markov models. Using this framework, we analyze the daily exchange rates for five currencies - British pound, Canadian dollar, Deutsch mark, Japanese yen, and Swiss franc as measured against the U.S. dollar.",
"neighbors": [
1287,
1718
],
"mask": "Train"
},
{
"node_id": 811,
"label": 1,
"text": "Title: ROBOT LEARNING WITH PARALLEL GENETIC ALGORITHMS ON NETWORKED COMPUTERS \nAbstract: This work explores the use of machine learning methods for extracting knowledge from simulations of complex systems. In particular, we use genetic algorithms to learn rule-based strategies used by autonomous robots. The evaluation of a given strategy may require several executions of a simulation to produce a meaningful estimate of the quality of the strategy. As a consequence, the evaluation of a single individual in the genetic algorithm requires a fairly substantial amount of computation. Such a system suggests the sort of large-grained parallelism that is available on a network of workstations. We describe an implementation of a parallel genetic algorithm, and present case studies of the resulting speedup on two robot learning tasks. ",
"neighbors": [
910,
1140,
1253
],
"mask": "Train"
},
{
"node_id": 812,
"label": 2,
"text": "Title: Word Perfect Corp. A TRANSFORMATION FOR IMPLEMENTING EFFICIENT DYNAMIC BACKPROPAGATION NEURAL NETWORKS \nAbstract: Most Artificial Neural Networks (ANNs) have a fixed topology during learning, and often suffer from a number of shortcomings as a result. Variations of ANNs that use dynamic topologies have shown ability to overcome many of these problems. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing distributed feedforward networks that use dynamic topologies (dynamic ANNs) efficiently in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independent of other nodes, using local information. This type of transformation allows efficient sup port for adding and deleting nodes dynamically during learning. In particular, this paper presents an LIT for standard Backpropagation with two layers of weights, and shows how dynamic extensions to Backpropagation can be supported. ",
"neighbors": [
809,
814,
1044,
1080,
1341,
1365
],
"mask": "Train"
},
{
"node_id": 813,
"label": 1,
"text": "Title: The Application of a Parallel Genetic Algorithm to the n=m=P=C max Flowshop Problem \nAbstract: Hard combinatorial problems in sequencing and scheduling led recently into further research of genetic algorithms. Canonical coding of the symmetric TSP can be modified into a coding of the n-job m-machine flowshop problem, which configurates the solution space in a different way. We show that well known genetic operators act intelligently on this coding scheme. They implecitely prefer a subset of solutions which contain the probably best solutions with respect to an objective. We conjecture that every new problem needs a determination of this necessary condition for a genetic algorithm to work, i. e. a proof by experiment. We implemented an asynchronous parallel genetic algorithm on a UNIX-based computer network. Computational results of the new heuristic are discussed. ",
"neighbors": [
163,
1523
],
"mask": "Train"
},
{
"node_id": 814,
"label": 2,
"text": "Title: A VLSI Implementation of a Parallel, Self-Organizing Learning Model \nAbstract: This paper presents a VLSI implementation of the Priority Adaptive Self-Organizing Concurrent System (PASOCS) learning model that is built using a multi-chip module (MCM) substrate. Many current hardware implementations of neural network learning models are direct implementations of classical neural network structures|a large number of simple computing nodes connected by a dense number of weighted links. PASOCS is one of a class of ASOCS (Adaptive Self-Organizing Concurrent System) connectionist models whose overall goal is the same as classical neural networks models, but whose functional mechanisms differ significantly. This model has potential application in areas such as pattern recognition, robotics, logical inference, and dynamic control. ",
"neighbors": [
809,
812,
1044,
1080,
1129,
1321,
1365
],
"mask": "Train"
},
{
"node_id": 815,
"label": 1,
"text": "Title: Genetic Algorithm based Scheduling in a Dynamic Manufacturing Environment \nAbstract: The application of adaptive optimization strategies to scheduling in manufacturing systems has recently become a research topic of broad interest. Population based approaches to scheduling predominantly treat static data models, whereas real-world scheduling tends to be a dynamic problem. This paper briefly outlines the application of a genetic algorithm to the dynamic job shop problem arising in production scheduling. First we sketch a genetic algorithm which can handle release times of jobs. In a second step a preceding simulation method is used to improve the performance of the algorithm. Finally the job shop is regarded as a nondeterministic optimization problem arising from the occurrence of job releases. Temporal Decomposition leads to a scheduling control that interweaves both simulation in time and genetic search.",
"neighbors": [
880,
1523
],
"mask": "Train"
},
{
"node_id": 816,
"label": 2,
"text": "Title: Comparing Adaptive and Non-Adaptive Connection Pruning With Pure Early Stopping \nAbstract: Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization, as is shown in this empirical study. However, an open problem in the pruning methods known today (OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This work presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. Results of statistical significance tests comparing autoprune, lprune, and static networks with early stopping are given, based on extensive experimentation with 14 different problems. The results indicate that training with pruning is often significantly better and rarely significantly worse than training with early stopping without pruning. Furthermore, lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required. ",
"neighbors": [
881,
1203
],
"mask": "Test"
},
{
"node_id": 817,
"label": 0,
"text": "Title: Case-Based Similarity Assessment: Estimating Adaptability from Experience \nAbstract: Case-based problem-solving systems rely on similarity assessment to select stored cases whose solutions are easily adaptable to fit current problems. However, widely-used similarity assessment strategies, such as evaluation of semantic similarity, can be poor predictors of adaptability. As a result, systems may select cases that are difficult or impossible for them to adapt, even when easily adaptable cases are available in memory. This paper presents a new similarity assessment approach which couples similarity judgments directly to a case library containing the system's adaptation knowledge. It examines this approach in the context of a case-based planning system that learns both new plans and new adaptations. Empirical tests of alternative similarity assessment strategies show that this approach enables better case selection and increases the benefits accrued from learned adaptations. ",
"neighbors": [
818,
819,
1125,
1212,
1367
],
"mask": "Train"
},
{
"node_id": 818,
"label": 0,
"text": "Title: Learning to Integrate Multiple Knowledge Sources for Case-Based Reasoning \nAbstract: The case-based reasoning process depends on multiple overlapping knowledge sources, each of which provides an opportunity for learning. Exploiting these opportunities requires not only determining the learning mechanisms to use for each individual knowledge source, but also how the different learning mechanisms interact and their combined utility. This paper presents a case study examining the relative contributions and costs involved in learning processes for three different knowledge sources|cases, case adaptation knowledge, and similarity information|in a case-based planner. It demonstrates the importance of interactions between different learning processes and identifies a promising method for integrating multiple learning methods to improve case-based reasoning.",
"neighbors": [
582,
817,
819,
1125,
1212,
1215
],
"mask": "Train"
},
{
"node_id": 819,
"label": 0,
"text": "Title: A Case Study of Case-Based CBR \nAbstract: Case-based reasoning depends on multiple knowledge sources beyond the case library, including knowledge about case adaptation and criteria for similarity assessment. Because hand coding this knowledge accounts for a large part of the knowledge acquisition burden for developing CBR systems, it is appealing to acquire it by learning, and CBR is a promising learning method to apply. This observation suggests developing case-based CBR systems, CBR systems whose components themselves use CBR. However, despite early interest in case-based approaches to CBR, this method has received comparatively little attention. Open questions include how case-based components of a CBR system should be designed, the amount of knowledge acquisition effort they require, and their effectiveness. This paper investigates these questions through a case study of issues addressed, methods used, and results achieved by a case-based planning system that uses CBR to guide its case adaptation and similarity assessment. The paper discusses design considerations and presents empirical results that support the usefulness of case-based CBR, that point to potential problems and tradeoffs, and that directly demonstrate the overlapping roles of different CBR knowledge sources. The paper closes with general lessons about case-based CBR and areas for future research.",
"neighbors": [
817,
818,
1154,
1212,
1215
],
"mask": "Validation"
},
{
"node_id": 820,
"label": 2,
"text": "Title: NESTED NETWORKS FOR ROBOT CONTROL \nAbstract: Case-based reasoning depends on multiple knowledge sources beyond the case library, including knowledge about case adaptation and criteria for similarity assessment. Because hand coding this knowledge accounts for a large part of the knowledge acquisition burden for developing CBR systems, it is appealing to acquire it by learning, and CBR is a promising learning method to apply. This observation suggests developing case-based CBR systems, CBR systems whose components themselves use CBR. However, despite early interest in case-based approaches to CBR, this method has received comparatively little attention. Open questions include how case-based components of a CBR system should be designed, the amount of knowledge acquisition effort they require, and their effectiveness. This paper investigates these questions through a case study of issues addressed, methods used, and results achieved by a case-based planning system that uses CBR to guide its case adaptation and similarity assessment. The paper discusses design considerations and presents empirical results that support the usefulness of case-based CBR, that point to potential problems and tradeoffs, and that directly demonstrate the overlapping roles of different CBR knowledge sources. The paper closes with general lessons about case-based CBR and areas for future research.",
"neighbors": [
829,
1115,
1146,
1252
],
"mask": "Train"
},
{
"node_id": 821,
"label": 2,
"text": "Title: Massive Data Discrimination via Linear Support Vector Machines \nAbstract: A linear support vector machine formulation is used to generate a fast, finitely-terminating linear-programming algorithm for discriminating between two massive sets in n-dimensional space, where the number of points can be orders of magnitude larger than n. The algorithm creates a succession of sufficiently small linear programs that separate chunks of the data at a time. The key idea is that a small number of support vectors, corresponding to linear programming constraints with positive dual variables, are carried over between the successive small linear programs, each of which containing a chunk of the data. We prove that this procedure is monotonic and terminates in a finite number of steps at an exact solution that leads to a globally optimal separating plane for the entire dataset. Numerical results on fully dense publicly available datasets, numbering 20,000 to 1 million points in 32-dimensional space, confirm the theoretical results and demonstrate the ability to handle very large problems.",
"neighbors": [
607,
1389,
1421
],
"mask": "Test"
},
{
"node_id": 822,
"label": 6,
"text": "Title: Achieving High-Accuracy Text-to-Speech with Machine Learning \nAbstract: In 1987, Sejnowski and Rosenberg developed their famous NETtalk system for English text-to-speech. This chapter describes a machine learning approach to text-to-speech that builds upon and extends the initial NETtalk work. Among the many extensions to the NETtalk system were the following: a different learning algorithm, a wider input \"window\", error-correcting output coding, a right-to-left scan of the word to be pronounced (with the results of each decision influencing subsequent decisions), and the addition of several useful input features. These changes yielded a system that performs much better than the original NETtalk system. After training on 19,002 words, the system achieves 93.7% correct pronunciation of individual phonemes and 64.8% correct pronunciation of whole words (where the pronunciation must exactly match the dictionary pronunciation to be correct). Based on the judgements of three human participants in a blind assessment study, our system was estimated to have a serious error rate of 16.7% (on whole words) compared to an error rate of 26.1% for the DECTalk3.0 rulebase.",
"neighbors": [
1484,
1644
],
"mask": "Train"
},
{
"node_id": 823,
"label": 2,
"text": "Title: Misclassification Minimization \nAbstract: The problem of minimizing the number of misclassified points by a plane, attempting to separate two point sets with intersecting convex hulls in n-dimensional real space, is formulated as a linear program with equilibrium constraints (LPEC). This general LPEC can be converted to an exact penalty problem with a quadratic objective and linear constraints. A Frank-Wolfe-type algorithm is proposed for the penalty problem that terminates at a stationary point or a global solution. Novel aspects of the approach include: (i) A linear complementarity formulation of the step function that \"counts\" misclassifications, (ii) Exact penalty formulation without boundedness, nondegeneracy or constraint qualification assumptions, (iii) An exact solution extraction from the sequence of minimizers of the penalty function for a finite value of the penalty parameter for the general LPEC and an explicitly exact solution for the LPEC with uncoupled constraints, and (iv) A parametric quadratic programming formulation of the LPEC associated with the misclassification minimization problem.",
"neighbors": [
142,
227,
427,
1283
],
"mask": "Train"
},
{
"node_id": 824,
"label": 0,
"text": "Title: Merge Strategies for Multiple Case Plan Replay \nAbstract: Planning by analogical reasoning is a learning method that consists of the storage, retrieval, and replay of planning episodes. Planning performance improves with the accumulation and reuse of a library of planning cases. Retrieval is driven by domain-dependent similarity metrics based on planning goals and scenarios. In complex situations with multiple goals, retrieval may find multiple past planning cases that are jointly similar to the new planning situation. This paper presents the issues and implications involved in the replay of multiple planning cases, as opposed to a single one. Multiple case plan replay involves the adaptation and merging of the annotated derivations of the planning cases. Several merge strategies for replay are introduced that can process with various forms of eagerness the differences between the past and new situations and the annotated justifications at the planning cases. In particular, we introduce an effective merging strategy that considers plan step choices especially appropriate for the interleaving of planning and plan execution. We illustrate and discuss the effectiveness of the merging strategies in specific domains.",
"neighbors": [
1215,
1621,
1707
],
"mask": "Validation"
},
{
"node_id": 825,
"label": 0,
"text": "Title: Towards Mixed-Initiative Rationale-Supported Planning \nAbstract: This paper introduces our work on mixed-initiative, rationale-supported planning. The work centers on the principled reuse and modification of past plans by exploiting their justification structure. The goal is to record as much as possible of the rationale underlying each planning decision in a mixed-initiative framework where human and machine planners interact. This rationale is used to determine which past plans are relevant to a new situation, to focus user's modification and replanning on different relevant steps when external circumstances dictate, and to ensure consistency in multi-user distributed scenarios. We build upon our previous work in Prodigy/Analogy, which incorporates algorithms to capture and reuse the rationale of an automated planner during its plan generation. To support a mixed-initiative environment, we have developed user interactive capabilities in the Prodigy planning and learning system. We are also working towards the integration of the rationale-supported plan reuse in Prodigy/Analogy with the plan retrieval and modification tools of ForMAT. Finally, we have focused on the user's input into the process of plan reuse, in particular when conditional planning is needed. ",
"neighbors": [
1215,
1707
],
"mask": "Train"
},
{
"node_id": 826,
"label": 2,
"text": "Title: Combining the Predictions of Multiple Classifiers: Using Competitive Learning to Initialize Neural Networks \nAbstract: The primary goal of inductive learning is to generalize well that is, induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks will improve generalization. One of their key assumptions is that the individual networks should be independent in the errors they produce. In the standard way of performing backpropagation this assumption may be violated, because the standard procedure is to initialize network weights in the region of weight space near the origin. This means that backpropagation's gradient-descent search may only reach a small subset of the possible local minima. In this paper we present an approach to initializing neural networks that uses competitive learning to intelligently create networks that are originally located far from the origin of weight space, thereby potentially increasing the set of reachable local minima. We report experiments on two real-world datasets where combinations of networks initialized with our method generalize better than combina tions of networks initialized the traditional way.",
"neighbors": [
1237,
1273,
1422,
1457,
1606
],
"mask": "Validation"
},
{
"node_id": 827,
"label": 3,
"text": "Title: Two Algorithms for Inducing Structural Equation Models from Data \nAbstract: We present two algorithms for inducing structural equation models from data. Assuming no latent variables, these models have a causal interpretation and their parameters may be estimated by linear multiple regression. Our algorithms are comparable with PC [15] and IC [12, 11], which rely on conditional independence. We present the algorithms and empirical comparisons with PC and IC. ",
"neighbors": [
909,
913,
1527,
1894
],
"mask": "Test"
},
{
"node_id": 828,
"label": 2,
"text": "Title: AN ANYTIME APPROACH TO CONNECTIONIST THEORY REFINEMENT: REFINING THE TOPOLOGIES OF KNOWLEDGE-BASED NEURAL NETWORKS \nAbstract: We present two algorithms for inducing structural equation models from data. Assuming no latent variables, these models have a causal interpretation and their parameters may be estimated by linear multiple regression. Our algorithms are comparable with PC [15] and IC [12, 11], which rely on conditional independence. We present the algorithms and empirical comparisons with PC and IC. ",
"neighbors": [
1422,
1457
],
"mask": "Train"
},
{
"node_id": 829,
"label": 2,
"text": "Title: Approximation with neural networks: Between local and global approximation \nAbstract: We investigate neural network based approximation methods. These methods depend on the locality of the basis functions. After discussing local and global basis functions, we propose a a multi-resolution hierarchical method. The various resolutions are stored at various levels in a tree. At the root of the tree, a global approximation is kept; the leafs store the learning samples themselves. Intermediate nodes store intermediate representations. In order to find an optimal partitioning of the input space, self-organising maps (SOM's) are used. The proposed method has implementational problems reminiscent of those encountered in many-particle simulations. We will investigate the parallel implementation of this method, using parallel hierarchical meth ods for many-particle simulations as a starting point.",
"neighbors": [
820,
962,
1252
],
"mask": "Train"
},
{
"node_id": 830,
"label": 2,
"text": "Title: Orthogonal incremental learning of a feedforward network \nAbstract: Orthogonal incremental learning (OIL) is a new approach of incremental training for a feedforward network with a single hidden layer. OIL is based on the idea to describe the output weights (but not the hidden nodes) as a set of orthogonal basis functions. Hidden nodes are treated as the orthogonal representation of the network in the output weights domain. We proved that a separate training of hidden nodes does not conflict with previously optimized nodes and is described by a special relationship orthogonal backpropagation (OBP) rule. An advantage of OIL over existing algorithms is extremely fast learning. This approach can be also easily extended to build-up incrementally an arbitrary function as a linear composition of adjustable functions which are not necessarily orthogonal. OIL has been tested on `two-spirals' and `Net Talk' benchmark problems. ",
"neighbors": [
1252
],
"mask": "Test"
},
{
"node_id": 831,
"label": 0,
"text": "Title: Beyond predictive accuracy: what? \nAbstract: Today's potential users of machine learning technology are faced with the non-trivial problem of choosing, from the large, ever-increasing number of available tools, the one most appropriate for their particular task. To assist the often non-initiated users, it is desirable that this model selection process be automated. Using experience from base level learning, researchers have proposed meta-learning as a possible solution. Historically, predictive accuracy has been the de facto criterion, with most work in meta-learning focusing on the discovery of rules that match applications to models based on accuracy only. Although predictive accuracy is clearly an important criterion, it is also the case that there are a number of other criteria that could, and often ought to, be considered when learning about model selection. This paper presents a number of such criteria and discusses the impact they have on meta-level approaches to model selection.",
"neighbors": [
885,
2243,
2466
],
"mask": "Train"
},
{
"node_id": 832,
"label": 2,
"text": "Title: Learning Continuous Attractors in Recurrent Networks \nAbstract: One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn. From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised A classic approach to invariant object recognition is to use a recurrent neural network as an associative memory[1]. In spite of the intuitive appeal and biological plausibility of this approach, it has largely been abandoned in practical applications. This paper introduces two new concepts that could help resurrect it: object representation by continuous attractors, and learning attractors by pattern completion. In most models of associative memory, memories are stored as attractive fixed points at discrete locations in state space[1]. Discrete attractors may not be appropriate for patterns with continuous variability, like the images of a three-dimensional object from different viewpoints. When the instantiations of an object lie on a continuous pattern manifold, it is more appropriate to represent objects by attractive manifolds of fixed points, or continuous attractors. To make this idea practical, it is important to find methods for learning attractors from examples. A naive method is to train the network to retain examples in short-term memory. This method is deficient because it does not prevent the network from storing spurious fixed points that are unrelated to the examples. A superior method is to train the network to restore examples that have been corrupted, so that it learns to complete patterns by filling in missing information. learning in terms of regression rather than density estimation.",
"neighbors": [
1052,
1701
],
"mask": "Train"
},
{
"node_id": 833,
"label": 1,
"text": "Title: Graph Coloring with Adaptive Evolutionary Algorithms \nAbstract: This paper presents the results of an experimental investigation on solving graph coloring problems with Evolutionary Algorithms (EA). After testing different algorithm variants we conclude that the best option is an asexual EA using order-based representation and an adaptation mechanism that periodically changes the fitness function during the evolution. This adaptive EA is general, using no domain specific knowledge, except, of course, from the decoder (fitness function). We compare this adaptive EA to a powerful traditional graph coloring technique DSatur and the Grouping GA on a wide range of problem instances with different size, topology and edge density. The results show that the adaptive EA is superior to the Grouping GA and outperforms DSatur on the hardest problem instances. Furthermore, it scales up better with the problem size than the other two algorithms and indicates a linear computational complexity. ",
"neighbors": [
714,
1218,
1516,
1796,
2001
],
"mask": "Validation"
},
{
"node_id": 834,
"label": 2,
"text": "Title: Simple Neuron Models for Independent Component Analysis \nAbstract: Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.",
"neighbors": [
576,
1067,
1520,
1814,
1821
],
"mask": "Test"
},
{
"node_id": 835,
"label": 0,
"text": "Title: Case-Based Acquisition of Place Knowledge \nAbstract: In this paper we define the task of place learning and describe one approach to this problem. The framework represents distinct places using evidence grids, a probabilistic description of occupancy. Place recognition relies on case-based classification, augmented by a registration process to correct for translations. The learning mechanism is also similar to that in case-based systems, involving the simple storage of inferred evidence grids. Experimental studies with both physical and simulated robots suggest that this approach improves place recognition with experience, that it can handle significant sensor noise, and that it scales well to increasing numbers of places. Previous researchers have studied evidence grids and place learning, but they have not combined these two powerful concepts, nor have they used the experimental methods of machine learning to evaluate their methods' abilities. ",
"neighbors": [
688,
1248
],
"mask": "Test"
},
{
"node_id": 836,
"label": 6,
"text": "Title: Unsupervised Constructive Learning \nAbstract: In constructive induction (CI), the learner's problem representation is modified as a normal part of the learning process. This is useful when the initial representation is inadequate or inappropriate. In this paper, I argue that the distinction between constructive and non-constructive methods is unclear. I propose a theoretical model which allows (a) a clean distinction to be made and (b) the process of CI to be properly motivated. I also show that although constructive induction has been used almost exclusively in the context of supervised learning, there is no reason why it cannot form a part of an unsupervised regime.",
"neighbors": [
375,
426,
1266,
1595
],
"mask": "Train"
},
{
"node_id": 837,
"label": 5,
"text": "Title: Inductive Database Design \nAbstract: When designing a (deductive) database, the designer has to decide for each predicate (or relation) whether it should be defined extensionally or intensionally, and what the definition should look like. An intelligent system is presented to assist the designer in this task. It starts from an example database in which all predicates are defined extensionally. It then tries to compact the database by transforming extensionally defined predicates into intensionally defined ones. The intelligent system employs techniques from the area of inductive logic programming. ",
"neighbors": [
1007,
1489,
1686
],
"mask": "Test"
},
{
"node_id": 838,
"label": 3,
"text": "Title: Possible World Partition Sequences: A Unifying Framework for Uncertain Reasoning \nAbstract: When we work with information from multiple sources, the formalism each employs to handle uncertainty may not be uniform. In order to be able to combine these knowledge bases of different formats, we need to first establish a common basis for characterizing and evaluating the different formalisms, and provide a semantics for the combined mechanism. A common framework can provide an infrastructure for building an integrated system, and is essential if we are to understand its behavior. We present a unifying framework based on an ordered partition of possible worlds called partition sequences, which corresponds to our intuitive notion of biasing towards certain possible scenarios when we are uncertain of the actual situation. We show that some of the existing formalisms, namely, default logic, autoepistemic logic, probabilistic conditioning and thresholding (generalized conditioning), and possibility theory can be incorporated into this general framework.",
"neighbors": [
1458
],
"mask": "Train"
},
{
"node_id": 839,
"label": 2,
"text": "Title: Signal Separation by Nonlinear Hebbian Learning \nAbstract: When we work with information from multiple sources, the formalism each employs to handle uncertainty may not be uniform. In order to be able to combine these knowledge bases of different formats, we need to first establish a common basis for characterizing and evaluating the different formalisms, and provide a semantics for the combined mechanism. A common framework can provide an infrastructure for building an integrated system, and is essential if we are to understand its behavior. We present a unifying framework based on an ordered partition of possible worlds called partition sequences, which corresponds to our intuitive notion of biasing towards certain possible scenarios when we are uncertain of the actual situation. We show that some of the existing formalisms, namely, default logic, autoepistemic logic, probabilistic conditioning and thresholding (generalized conditioning), and possibility theory can be incorporated into this general framework.",
"neighbors": [
59,
354,
576,
874,
1067,
1072,
1520,
1526,
1709
],
"mask": "Train"
},
{
"node_id": 840,
"label": 2,
"text": "Title: Using the Grow-And-Prune Network to Solve Problems of Large Dimensionality \nAbstract: This paper investigates a technique for creating sparsely connected feed-forward neural networks which may be capable of producing networks that have very large input and output layers. The architecture appears to be particularly suited to tasks that involve sparse training data as it is able to take advantage of the sparseness to further reduce training time. Some initial results are presented based on tests on the 16 bit compression problem. ",
"neighbors": [
1196
],
"mask": "Train"
},
{
"node_id": 841,
"label": 3,
"text": "Title: Bayesian inference for nondecomposable graphical Gaussian models \nAbstract: In this paper we propose a method to calculate the posterior probability of a nondecomposable graphical Gaussian model. Our proposal is based on a new device to sample from Wishart distributions, conditional on the graphical constraints. As a result, our methodology allows Bayesian model selection within the whole class of graphical Gaussian models, including nondecomposable ones.",
"neighbors": [
1240,
2559
],
"mask": "Train"
},
{
"node_id": 842,
"label": 4,
"text": "Title: Metrics for Temporal Difference Learning \nAbstract: For an absorbing Markov chain with a reinforcement on each transition, Bertsekas (1995a) gives a simple example where the function learned by TD( ll ) depends on ll . Bertsekas showed that for ll =1 the approximation is optimal with respect to a least-squares error of the value function, and that for ll =0 the approximation obtained by the TD method is poor with respect to the same metric. With respect to the error in the values, TD(1) approximates the function better than TD(0). However, with respect to the error in the differences in the values, TD(0) approximates the function better than TD(1). TD(1) is only better than TD(0) with respect to the former metric rather than the latter. In addition, direct TD( ll ) weights the errors unequally, while residual gradient methods (Baird, 1995, Harmon, Baird, & Klopf, 1995) weight the errors equally. For the case of control, a simple Markov decision process is presented for which direct TD(0) and residual gradient TD(0) both learn the optimal policy, while TD( 11 ) learns a suboptimal policy. These results suggest that, for this example, the differences in state values are more significant than the state values themselves, so TD(0) is preferable to TD(1). ",
"neighbors": [
565,
1540
],
"mask": "Train"
},
{
"node_id": 843,
"label": 4,
"text": "Title: Locally Weighted Learning for Control \nAbstract: Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control. ",
"neighbors": [
568,
575,
1101,
2658
],
"mask": "Test"
},
{
"node_id": 844,
"label": 1,
"text": "Title: Evolving Compact Solutions in Genetic Programming: A Case Study \nAbstract: Genetic programming (GP) is a variant of genetic algorithms where the data structures handled are trees. This makes GP especially useful for evolving functional relationships or computer programs, as both can be represented as trees. Symbolic regression is the determination of a function dependence y = g(x) that approximates a set of data points (x i ; y i ). In this paper the feasibility of symbolic regression with GP is demonstrated on two examples taken from different domains. Furthermore several suggested methods from literature are compared that are intended to improve GP performance and the readability of solutions by taking into account introns or redundancy that occurs in the trees and keeping the size of the trees small. The experiments show that GP is an elegant and useful tool to derive complex functional dependencies on numerical data.",
"neighbors": [
55,
934,
974,
1184,
1784
],
"mask": "Validation"
},
{
"node_id": 845,
"label": 2,
"text": "Title: Mixture Models in the Exploration of Structure-Activity Relationships in Drug Design \nAbstract: We report on a study of mixture modeling problems arising in the assessment of chemical structure-activity relationships in drug design and discovery. Pharmaceutical research laboratories developing test compounds for screening synthesize many related candidate compounds by linking together collections of basic molecular building blocks, known as monomers. These compounds are tested for biological activity, feeding in to screening for further analysis and drug design. The tests also provide data relating compound activity to chemical properties and aspects of the structure of associated monomers, and our focus here is studying such relationships as an aid to future monomer selection. The level of chemical activity of compounds is based on the geometry of chemical binding of test compounds to target binding sites on receptor compounds, but the screening tests are unable to identify binding configurations. Hence potentially critical covari-ate information is missing as a natural latent variable. Resulting statistical models are then mixed with respect to such missing information, so complicating data analysis and inference. This paper reports on a study of a two-monomer, two-binding site framework and associated data. We build structured mixture models that mix linear regression models, predicting chemical effectiveness, with respect to site-binding selection mechanisms. We discuss aspects of modeling and analysis, including problems and pitfalls, and describe results of analyses of a simulated and real data set. In modeling real data, we are led into critical model extensions that introduce hierarchical random effects components to adequately capture heterogeneities in both the site binding mechanisms and in the resulting levels of effectiveness of compounds once bound. Comments on current and potential future directions conclude the report. ",
"neighbors": [
784
],
"mask": "Test"
},
{
"node_id": 846,
"label": 1,
"text": "Title: Evolving Visually Guided Robots \nAbstract: A version of this paper appears in: Proceedings of SAB92, the Second International Conference on Simulation of Adaptive Behaviour J.-A. Meyer, H. Roitblat, and S. Wilson, editors, MIT Press Bradford Books, Cambridge, MA, 1993. ",
"neighbors": [
219,
757,
781,
1533,
2173,
2204
],
"mask": "Validation"
},
{
"node_id": 847,
"label": 6,
"text": "Title: A Bound on the Error of Cross Validation Using the Approximation and Estimation Rates, with\nAbstract: We give an analysis of the generalization error of cross validation in terms of two natural measures of the difficulty of the problem under consideration: the approximation rate (the accuracy to which the target function can be ideally approximated as a function of the number of hypothesis parameters), and the estimation rate (the deviation between the training and generalization errors as a function of the number of hypothesis parameters). The approximation rate captures the complexity of the target function with respect to the hypothesis model, and the estimation rate captures the extent to which the hypothesis model suffers from overfitting. Using these two measures, we give a rigorous and general bound on the error of cross validation. The bound clearly shows the tradeoffs involved with making fl the fraction of data saved for testing too large or too small. By optimizing the bound with respect to fl, we then argue (through a combination of formal analysis, plotting, and controlled experimentation) that the following qualitative properties of cross validation behavior should be quite robust to significant changes in the underlying model selection problem: ",
"neighbors": [
710,
848,
967,
1032,
1402,
1468
],
"mask": "Train"
},
{
"node_id": 848,
"label": 6,
"text": "Title: An Experimental and Theoretical Comparison of Model Selection Methods on simple model selection problems, the\nAbstract: We investigate the problem of model selection in the setting of supervised learning of boolean functions from independent random examples. More precisely, we compare methods for finding a balance between the complexity of the hypothesis chosen and its observed error on a random training sample of limited size, when the goal is that of minimizing the resulting generalization error. We undertake a detailed comparison of three well-known model selection methods | a variation of Vapnik's Guaranteed Risk Minimization (GRM), an instance of Rissanen's Minimum Description Length Principle (MDL), and cross validation (CV). We introduce a general class of model selection methods (called penalty-based methods) that includes both GRM and MDL, and provide general methods for analyzing such rules. We provide both controlled experimental evidence and formal theorems to support the following conclusions: * The class of penalty-based methods is fundamentally handicapped in the sense that there exist two types of model selection problems for which every penalty-based method must incur large generalization error on at least one, while CV enjoys small generalization error Despite the inescapable incomparability of model selection methods under certain circumstances, we conclude with a discussion of our belief that the balance of the evidence provides specific reasons to prefer CV to other methods, unless one is in possession of detailed problem-specific information. on both.",
"neighbors": [
591,
847,
967,
1032,
1223,
1388,
1400,
1468,
1607,
1739
],
"mask": "Test"
},
{
"node_id": 849,
"label": 5,
"text": "Title: Generalization of Clauses under Implication \nAbstract: In the area of inductive learning, generalization is a main operation, and the usual definition of induction is based on logical implication. Recently there has been a rising interest in clausal representation of knowledge in machine learning. Almost all inductive learning systems that perform generalization of clauses use the relation -subsumption instead of implication. The main reason is that there is a well-known and simple technique to compute least general generalizations under -subsumption, but not under implication. However generalization under -subsumption is inappropriate for learning recursive clauses, which is a crucial problem since recursion is the basic program structure of logic programs. We note that implication between clauses is undecidable, and we therefore introduce a stronger form of implication, called T-implication, which is decidable between clauses. We show that for every finite set of clauses there exists a least general generalization under T-implication. We describe a technique to reduce generalizations under implication of a clause to generalizations under -subsumption of what we call an expansion of the original clause. Moreover we show that for every non-tautological clause there exists a T-complete expansion, which means that every generalization under T-implication of the clause is reduced to a generalization under -subsumption of the expansion.",
"neighbors": [
1720
],
"mask": "Validation"
},
{
"node_id": 850,
"label": 3,
"text": "Title: COMPUTING DISTRIBUTIONS OF ORDER STATISTICS \nAbstract: Recurrence relationships among the distribution functions of order statistics of independent, but not identically distributed, random quantities are derived. These results extend known theory and provide computationally practicable algorithms for a variety of problems. ",
"neighbors": [
917
],
"mask": "Train"
},
{
"node_id": 851,
"label": 3,
"text": "Title: Bayesian Probability Theory A General Method for Machine Learning \nAbstract: This paper argues that Bayesian probability theory is a general method for machine learning. From two well-founded axioms, the theory is capable of accomplishing learning tasks that are incremental or non-incremental, supervised or unsupervised. It can learn from different types of data, regardless of whether they are noisy or perfect, independent facts or behaviors of an unknown machine. These capabilities are (partially) demonstrated in the paper through the uniform application of the theory to two typical types of machine learning: incremental concept learning and unsupervised data classification. The generality of the theory suggests that the process of learning may not have so many different \"types\" as currently held, and the method that is the oldest may be the best after all. ",
"neighbors": [
1491
],
"mask": "Test"
},
{
"node_id": 852,
"label": 3,
"text": "Title: Bayesian Models for Non-Linear Autoregressions \nAbstract: This paper argues that Bayesian probability theory is a general method for machine learning. From two well-founded axioms, the theory is capable of accomplishing learning tasks that are incremental or non-incremental, supervised or unsupervised. It can learn from different types of data, regardless of whether they are noisy or perfect, independent facts or behaviors of an unknown machine. These capabilities are (partially) demonstrated in the paper through the uniform application of the theory to two typical types of machine learning: incremental concept learning and unsupervised data classification. The generality of the theory suggests that the process of learning may not have so many different \"types\" as currently held, and the method that is the oldest may be the best after all. ",
"neighbors": [
1015,
1654
],
"mask": "Train"
},
{
"node_id": 853,
"label": 6,
"text": "Title: Error-Correcting Output Codes for Local Learners \nAbstract: Error-correcting output codes (ECOCs) represent classes with a set of output bits, where each bit encodes a binary classification task corresponding to a unique partition of the classes. Algorithms that use ECOCs learn the function corresponding to each bit, and combine them to generate class predictions. ECOCs can reduce both variance and bias errors for multiclass classification tasks when the errors made at the output bits are not correlated. They work well with algorithms that eagerly induce global classifiers (e.g., C4.5) but do not assist simple local classifiers (e.g., nearest neighbor), which yield correlated predictions across the output bits. We show that the output bit predictions of local learners can be decorrelated by selecting different features for each bit. We present promising empirical results for this combination of ECOCs, near est neighbor, and feature selection.",
"neighbors": [
1019,
1053,
1732,
2423
],
"mask": "Validation"
},
{
"node_id": 854,
"label": 1,
"text": "Title: A Comparison of Random Search versus Genetic Programming as Engines for Collective Adaptation \nAbstract: We have integrated the distributed search of genetic programming (GP) based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. Since the pure GP approach does not scale well with problem complexity, a natural question is which of the two components is actually contributing to the search process. We investigate a collective memory search which utilizes a random search engine and find that it significantly outperforms the GP based search engine. We examine the solution space and show that as problem complexity and search space grow, a collective adaptive system will perform better than a collective memory search employing random search as an engine.",
"neighbors": [
1178,
1231,
2211,
2598
],
"mask": "Test"
},
{
"node_id": 855,
"label": 3,
"text": "Title: Hierarchical priors and mixture models, with application in regression and density estimation \nAbstract: We have integrated the distributed search of genetic programming (GP) based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. Since the pure GP approach does not scale well with problem complexity, a natural question is which of the two components is actually contributing to the search process. We investigate a collective memory search which utilizes a random search engine and find that it significantly outperforms the GP based search engine. We examine the solution space and show that as problem complexity and search space grow, a collective adaptive system will perform better than a collective memory search employing random search as an engine.",
"neighbors": [
1338,
1654
],
"mask": "Train"
},
{
"node_id": 856,
"label": 1,
"text": "Title: Hierarchical priors and mixture models, with application in regression and density estimation \nAbstract: A Genetic Algorithm Tutorial Darrell Whitley Technical Report CS-93-103 (Revised) November 10, 1993 ",
"neighbors": [
163,
793,
1016,
1153
],
"mask": "Train"
},
{
"node_id": 857,
"label": 0,
"text": "Title: How to Retrieve Relevant Information? \nAbstract: The document presents an approach to judging relevance of retrieved information based on a novel approach to similarity assessment. Contrary to other systems, we define relevance measures (context in similarity) at query time. This is necessary if since without a context in similarity one cannot guarantee that similar items will also be relevant.",
"neighbors": [
1125,
1483,
2060
],
"mask": "Validation"
},
{
"node_id": 858,
"label": 0,
"text": "Title: MULTISTRATEGY LEARNING IN REACTIVE CONTROL SYSTEMS FOR AUTONOMOUS ROBOTIC NAVIGATION \nAbstract: This paper presents a self-improving reactive control system for autonomous robotic navigation. The navigation module uses a schema-based reactive control system to perform the navigation task. The learning module combines case-based reasoning and reinforcement learning to continuously tune the navigation system through experience. The case-based reasoning component perceives and characterizes the system's environment, retrieves an appropriate case, and uses the recommendations of the case to tune the parameters of the reactive control system. The reinforcement learning component refines the content of the cases based on the current experience. Together, the learning components perform on-line adaptation, resulting in improved performance as the reactive control system tunes itself to the environment, as well as on-line case learning, resulting in an improved library of cases that capture environmental regularities necessary to perform on-line adaptation. The system is extensively evaluated through simulation studies using several performance metrics and system configurations.",
"neighbors": [
281,
566,
991,
1084,
2035,
2303,
2556
],
"mask": "Validation"
},
{
"node_id": 859,
"label": 6,
"text": "Title: On-Site Learning \nAbstract: A model for on-site learning is presented. The system learns by querying \"hard\" patterns while classifying \"easy\" ones. This model is related to query-based filtering methods, but takes into account that in addition to labelling, filtering through the data has a cost. A few simple policies are introduced and analyzed for a simple problem (1D high low game). In addition the Query-by-Committee algorithm (Seung et. al) is suggested as a good approximator of the model space for real-world domains. Results using this algorithm on a synthesized problem and a real-world OCR task using both a backpropagation network and a nearest neighbor classifier show that an on-site learner can perform as well as a classifier trained off-site, while achieving significant cost reduction. ",
"neighbors": [
1198
],
"mask": "Test"
},
{
"node_id": 860,
"label": 1,
"text": "Title: A Study in Program Response and the Negative Effects of Introns in Genetic Programming \nAbstract: The standard method of obtaining a response in tree-based genetic programming is to take the value returned by the root node. In non-tree representations, alternate methods have been explored. One alternative is to treat a specific location in indexed memory as the response value when the program terminates. The purpose of this paper is to explore the applicability of this technique to tree-structured programs and to explore the intron effects that these studies bring to light. This paper's experimental results support the finding that this memory-based program response technique is an improvement for some, but not all, problems. In addition, this paper's experimental results support the finding that, contrary to past research and speculation, the addition or even facilitation of introns can seriously degrade the search performance of genetic programming.",
"neighbors": [
934,
1184,
1911,
1940
],
"mask": "Test"
},
{
"node_id": 861,
"label": 6,
"text": "Title: In Defense of C4.5: Notes on Learning One-Level Decision Trees \nAbstract: We discuss the implications of Holte's recently-published article, which demonstrated that on the most commonly used data very simple classification rules are almost as accurate as decision trees produced by Quinlan's C4.5. We consider, in particular, what is the significance of Holte's results for the future of top-down induction of decision trees. To an extent, Holte questioned the sense of further research on multilevel decision tree learning. We go in detail through all the parts of Holte's study. We try to put the results into perspective. We argue that the (in absolute terms) small difference in accuracy between 1R and C4.5 that was witnessed by Holte is still significant. We claim that C4.5 possesses additional accuracy-related advantages over 1R. In addition we discuss the representativeness of the databases used by Holte. We compare empirically the optimal accuracies of multilevel and one-level decision trees and observe some significant differences. We point out several deficien cies of limited-complexity classifiers.",
"neighbors": [
1236,
1431,
1678
],
"mask": "Train"
},
{
"node_id": 862,
"label": 0,
"text": "Title: Language-Independent Data-Oriented Grapheme-to-Phoneme Conversion \nAbstract: We describe an approach to grapheme-to-phoneme conversion which is both language-independent and data-oriented. Given a set of examples (spelling words with their associated phonetic representation) in a language, a grapheme-to-phoneme conversion system is automatically produced for that language which takes as its input the spelling of words, and produces as its output the phonetic transcription according to the rules implicit in the training data. We describe the design of the system, and compare its performance to knowledge-based and alternative data-oriented approaches.",
"neighbors": [
638,
785,
986,
1019,
1155,
1407,
1513,
1812,
2364,
2423
],
"mask": "Train"
},
{
"node_id": 863,
"label": 2,
"text": "Title: Empirical Entropy Manipulation for Real-World Problems \nAbstract: No finite sample is sufficient to determine the density, and therefore the entropy, of a signal directly. Some assumption about either the functional form of the density or about its smoothness is necessary. Both amount to a prior over the space of possible density functions. By far the most common approach is to assume that the density has a parametric form. By contrast we derive a differential learning rule called EMMA that optimizes entropy by way of kernel density estimation. Entropy and its derivative can then be calculated by sampling from this density estimate. The resulting parameter update rule is surprisingly simple and efficient. We will show how EMMA can be used to detect and correct corruption in magnetic resonance images (MRI). This application is beyond the scope of existing parametric entropy models.",
"neighbors": [
576,
808,
2499
],
"mask": "Train"
},
{
"node_id": 864,
"label": 2,
"text": "Title: A Sparse Representation for Function Approximation \nAbstract: We derive a new general representation for a function as a linear combination of local correlation kernels at optimal sparse locations (and scales) and characterize its relation to PCA, regularization, sparsity principles and Support Vector Machines. ",
"neighbors": [
611,
1079
],
"mask": "Validation"
},
{
"node_id": 865,
"label": 6,
"text": "Title: ON THE SAMPLE COMPLEXITY OF FINDING GOOD SEARCH STRATEGIES 2n trials of each undetermined experiment\nAbstract: A satisficing search problem consists of a set of probabilistic experiments to be performed in some order, without repetitions, until a satisfying configuration of successes and failures has been reached. The cost of performing the experiments depends on the order chosen. Earlier work has concentrated on finding optimal search strategies in special cases of this model, such as search trees and and-or graphs, when the cost function and the success probabilities for the experiments are given. In contrast, we study the complexity of \"learning\" an approximately optimal search strategy when some of the success probabilities are not known at the outset. Working in the fully general model, we show that if n is the number of unknown probabilities, and C is the maximum cost of performing all the experiments, then ",
"neighbors": [
251,
795,
932,
1505,
2560
],
"mask": "Test"
},
{
"node_id": 866,
"label": 2,
"text": "Title: Analyzing Phase Transitions in High-Dimensional Self-Organizing Maps \nAbstract: The Self-Organizing Map (SOM), a widely used algorithm for the unsupervised learning of neural maps, can be formulated in a low-dimensional \"feature map\" variant which requires prespecified parameters (\"features\") for the description of receptive fields, or in a more general high-dimensional variant which allows to self-organize the structure of individual receptive fields as well as their arrangement in a map. We present here a new analytical method to derive conditions for the emergence of structure in SOMs which is particularly suited for the as yet inaccessible high-dimensional SOM variant. Our approach is based on an evaluation of a map distortion function. It involves only an ansatz for the way stimuli are distributed among map neurons; the receptive fields of the map need not be known explicitely. Using this method we first calculate regions of stability for four possible states of SOMs projecting from a rectangular input space to a ring of neurons. We then analyze the transition from non-oriented to oriented receptive fields in a SOM-based model for the development of orientation maps. In both cases, the analytical results are well corroborated by the results of computer simulations. submitted to Biological Cybernetics, December 14, 1995 revised version, July 14, 1996",
"neighbors": [
18,
890
],
"mask": "Train"
},
{
"node_id": 867,
"label": 2,
"text": "Title: Comparison of Neural and Statistical Classifiers| Theory and Practice \nAbstract: Research Reports A13 January 1996 ",
"neighbors": [
46,
74,
667,
1493
],
"mask": "Validation"
},
{
"node_id": 868,
"label": 4,
"text": "Title: Adaptive Load Balancing: A Study in Multi-Agent Learning \nAbstract: We study the process of multi-agent reinforcement learning in the context of load balancing in a distributed system, without use of either central coordination or explicit communication. We first define a precise framework in which to study adaptive load balancing, important features of which are its stochastic nature and the purely local information available to individual agents. Given this framework, we show illuminating results on the interplay between basic adaptive behavior parameters and their effect on system efficiency. We then investigate the properties of adaptive load balancing in heterogeneous populations, and address the issue of exploration vs. exploitation in that context. Finally, we show that naive use of communication may not improve, and might even harm system efficiency.",
"neighbors": [
1643,
1649
],
"mask": "Train"
},
{
"node_id": 869,
"label": 3,
"text": "Title: Efficient Stochastic Source Coding and an Application to a Bayesian Network Source Model \nAbstract: Brendan J. Frey and Geoffrey E. Hinton 1997. Efficient stochastic source coding and an application to a Bayesian network source model. The Computer Journal 40, 157-165. In this paper, we introduce a new algorithm called \"bits-back coding\" that makes stochastic source codes efficient. For a given one-to-many source code, we show that this algorithm can actually be more efficient than the algorithm that always picks the shortest codeword. Optimal efficiency is achieved when codewords are chosen according to the Boltzmann distribution based on the codeword lengths. It turns out that a commonly used technique for determining parameters | maximum likelihood estimation | actually minimizes the bits-back coding cost when codewords are chosen according to the Boltzmann distribution. A tractable approximation to maximum likelihood estimation | the generalized expectation maximization algorithm | minimizes the bits-back coding cost. After presenting a binary Bayesian network model that assigns exponentially many codewords to each symbol, we show how a tractable approximation to the Boltzmann distribution can be used for bits-back coding. We illustrate the performance of bits-back coding using using nonsynthetic data with a binary Bayesian network source model that produces 2 60 possible codewords for each input symbol. The rate for bits-back coding is nearly one half of that obtained by picking the shortest codeword for each symbol. ",
"neighbors": [
76,
1374
],
"mask": "Test"
},
{
"node_id": 870,
"label": 4,
"text": "Title: Learning to take risks \nAbstract: Agents that learn about other agents and can exploit this information possess a distinct advantage in competitive situations. Games provide stylized adversarial environments to study agent learning strategies. Researchers have developed game playing programs that learn to play better from experience. We have developed a learning program that does not learn to play better, but learns to identify and exploit the weaknesses of a particular opponent by repeatedly playing it over several games. We propose a scheme for learning opponent action probabilities and a utility maximization framework that exploits this learned opponent model. We show that the proposed expected utility maximization strategy generalizes the traditional maximin strategy, and allows players to benefit by taking calculated risks that are avoided by the max-imin strategy. Experiments in the popular board game of Connect-4 show that a learning player consistently outperforms a non-learning player when pitted against another automated player using a weaker heuristic. Though our proposed mechanism does not improve the skill level of a computer player, it does improve its ability to play more effectively against a weaker opponent. ",
"neighbors": [
523,
1687
],
"mask": "Validation"
},
{
"node_id": 871,
"label": 3,
"text": "Title: Factorial Learning and the EM Algorithm \nAbstract: Many real world learning problems are best characterized by an interaction of multiple independent causes or factors. Discovering such causal structure from the data is the focus of this paper. Based on Zemel and Hinton's cooperative vector quantizer (CVQ) architecture, an unsupervised learning algorithm is derived from the Expectation-Maximization (EM) framework. Due to the combinatorial nature of the data generation process, the exact E-step is computationally intractable. Two alternative methods for computing the E-step are proposed: Gibbs sampling and mean-field approximation, and some promising empirical results are presented. ",
"neighbors": [
74,
1591
],
"mask": "Validation"
},
{
"node_id": 872,
"label": 2,
"text": "Title: A Blind Identification and Separation Technique via Multi-layer Neural Networks \nAbstract: This paper deals with the problem of blind identification and source separation which consists of estimation of the mixing matrix and/or the separation of a mixture of stochastically independent sources without a priori knowledge on the mixing matrix . The method we propose here estimates the mixture matrix by a recurrent Input-Output (IO) Identification using as inputs a nonlinear transformation of the estimated sources. Herein, the nonlinear transformation (distortion) consists in constraining the modulus of the inputs of the IO-Identification device to be a constant. In contrast to other existing approaches, the covariance of the additive noise do not need to be modeled and can be estimated as a regular parameter if needed. The proposed approach is implemented using multi-layer neural networks in order to improve performance of separation. New associated on-line un-supervised adaptive learning rules are also developed. The effectiveness of the proposed method is illustrated by some computer simulations. ",
"neighbors": [
59,
570,
1520
],
"mask": "Test"
},
{
"node_id": 873,
"label": 2,
"text": "Title: On the performance of orthogonal source separation algorithms \nAbstract: Source separation consists in recovering a set of n independent signals from m n observed instantaneous mixtures of these signals, possibly corrupted by additive noise. Many source separation algorithms use second order information in a whitening operation which reduces the non trivial part of the separation to determining a unitary matrix. Most of them further show a kind of invariance property which can be exploited to predict some general results about their performance. Our first contribution is to exhibit a lower bound to the performance in terms of accuracy of the separation. This bound is independent of the algorithm and, in the i.i.d. case, of the distribution of the source signals. Second, we show that the performance of invariant algorithms depends on the mixing matrix and on the noise level in a specific way. A consequence is that at low noise levels, the performance does not depend on the mixture but only on the distribution of the sources, via a function which is characteristic of the given source separation algorithm.",
"neighbors": [
920,
1520
],
"mask": "Test"
},
{
"node_id": 874,
"label": 2,
"text": "Title: LOCAL ADAPTIVE LEARNING ALGORITHMS FOR BLIND SEPARATION OF NATURAL IMAGES \nAbstract: In this paper a neural network approach for reconstruction of natural highly correlated images from linear (additive) mixture of them is proposed. A multi-layer architecture with local on-line learning rules is developed to solve the problem of blind separation of sources. The main motivation for using a multi-layer network instead of a single-layer one is to improve the performance and robustness of separation, while applying a very simple local learning rule, which is biologically plausible. Moreover such architecture with on-chip learning is relatively easy implementable using VLSI electronic circuits. Furthermore it enables the extraction of source signals sequentially one after the other, starting from the strongest signal and finishing with the weakest one. The experimental part focuses on separating highly correlated human faces from mixture of them, with additive noise and under unknown number of sources. ",
"neighbors": [
59,
570,
576,
839,
1520
],
"mask": "Validation"
},
{
"node_id": 875,
"label": 4,
"text": "Title: LEARNING TO SOLVE MARKOVIAN DECISION PROCESSES \nAbstract: In this paper a neural network approach for reconstruction of natural highly correlated images from linear (additive) mixture of them is proposed. A multi-layer architecture with local on-line learning rules is developed to solve the problem of blind separation of sources. The main motivation for using a multi-layer network instead of a single-layer one is to improve the performance and robustness of separation, while applying a very simple local learning rule, which is biologically plausible. Moreover such architecture with on-chip learning is relatively easy implementable using VLSI electronic circuits. Furthermore it enables the extraction of source signals sequentially one after the other, starting from the strongest signal and finishing with the weakest one. The experimental part focuses on separating highly correlated human faces from mixture of them, with additive noise and under unknown number of sources. ",
"neighbors": [
370,
554,
775,
933,
1192
],
"mask": "Train"
},
{
"node_id": 876,
"label": 6,
"text": "Title: Using and combining predictors that specialize \nAbstract: We study online learning algorithms that predict by combining the predictions of several subordinate prediction algorithms, sometimes called experts. These simple algorithms belong to the multiplicative weights family of algorithms. The performance of these algorithms degrades only logarithmically with the number of experts, making them particularly useful in applications where the number of experts is very large. However, in applications such as text categorization, it is often natural for some of the experts to abstain from making predictions on some of the instances. We show how to transform algorithms that assume that all experts are always awake to algorithms that do not require this assumption. We also show how to derive corresponding loss bounds. Our method is very general, and can be applied to a large family of online learning algorithms. We also give applications to various prediction models including decision graphs and switching experts. ",
"neighbors": [
453,
1006,
1025,
1269
],
"mask": "Train"
},
{
"node_id": 877,
"label": 5,
"text": "Title: Naive Bayesian classifier within ILP-R \nAbstract: When dealing with the classification problems, current ILP systems often lag behind state-of-the-art attributional learners. Part of the blame can be ascribed to a much larger hypothesis space which, therefore, cannot be as thoroughly explored. However, sometimes it is due to the fact that ILP systems do not take into account the probabilistic aspects of hypotheses when classifying unseen examples. This paper proposes just that. We developed a naive Bayesian classifier within our ILP-R first order learner. The learner itself uses a clever RELIEF based heuristic which is able to detect strong dependencies within the literal space when such dependencies exist. We conducted a series of experiments on artificial and real-world data sets. The results show that the combination of ILP-R together with the naive Bayesian classifier sometimes significantly improves the classification of unseen instances as measured by both classification accuracy and average information score.",
"neighbors": [
1010,
1569,
1578,
1651
],
"mask": "Validation"
},
{
"node_id": 878,
"label": 2,
"text": "Title: Nonsmooth Dynamic Simulation With Linear Programming Based Methods \nAbstract: Process simulation has emerged as a valuable tool for process design, analysis and operation. In this work, we extend the capabilities of iterated linear programming (LP) for dealing with problems encountered in dynamic nonsmooth process simulation. A previously developed LP method is refined with the addition of a new descent strategy which combines line search with a trust region approach. This adds more stability and efficiency to the method. The LP method has the advantage of naturally dealing with profile bounds as well. This is demonstrated to avoid the computational difficulties which arise from the iterates going into physically unrealistic regions. A new method for the treatment of discontinuities occurring in dynamic simulation problems is also presented in this paper. The method ensures that any event which has occurred within the time interval in consideration is detected and if more than one event occurs, the detected one is indeed the earliest one. A specific class of implicitly discontinuous process simulation problems, phase equilibrium calculations is also looked at. A new formulation is introduced to solve multiphase problems. fl To whom all correspondence should be addressed. email:biegler@cmu.edu",
"neighbors": [
1023,
1472
],
"mask": "Train"
},
{
"node_id": 879,
"label": 4,
"text": "Title: A GENERAL METHOD FOR INCREMENTAL SELF-IMPROVEMENT AND MULTI-AGENT LEARNING \nAbstract: Process simulation has emerged as a valuable tool for process design, analysis and operation. In this work, we extend the capabilities of iterated linear programming (LP) for dealing with problems encountered in dynamic nonsmooth process simulation. A previously developed LP method is refined with the addition of a new descent strategy which combines line search with a trust region approach. This adds more stability and efficiency to the method. The LP method has the advantage of naturally dealing with profile bounds as well. This is demonstrated to avoid the computational difficulties which arise from the iterates going into physically unrealistic regions. A new method for the treatment of discontinuities occurring in dynamic simulation problems is also presented in this paper. The method ensures that any event which has occurred within the time interval in consideration is detected and if more than one event occurs, the detected one is indeed the earliest one. A specific class of implicitly discontinuous process simulation problems, phase equilibrium calculations is also looked at. A new formulation is introduced to solve multiphase problems. fl To whom all correspondence should be addressed. email:biegler@cmu.edu",
"neighbors": [
700,
979
],
"mask": "Validation"
},
{
"node_id": 880,
"label": 1,
"text": "Title: Control of Parallel Population Dynamics by Social-Like Behavior of GA-Individuals \nAbstract: A frequently observed difficulty in the application of genetic algorithms to the domain of optimization arises from premature convergence. In order to preserve genotype diversity we develop a new model of auto-adaptive behavior for individuals. In this model a population member is an active individual that assumes social-like behavior patterns. Different individuals living in the same population can assume different patterns. By moving in a hierarchy of \"social states\" individuals change their behavior. Changes of social state are controlled by arguments of plausibility. These arguments are implemented as a rule set for a massively-parallel genetic algorithm. Computational experiments on 12 large-scale job shop benchmark problems show that the results of the new approach dominate the ordinary genetic algorithm significantly.",
"neighbors": [
815,
1523
],
"mask": "Train"
},
{
"node_id": 881,
"label": 2,
"text": "Title: Proben1 A Set of Neural Network Benchmark Problems and Benchmarking Rules \nAbstract: Proben1 is a collection of problems for neural network learning in the realm of pattern classification and function approximation plus a set of rules and conventions for carrying out benchmark tests with these or similar problems. Proben1 contains 15 data sets from 12 different domains. All datasets represent realistic problems which could be called diagnosis tasks and all but one consist of real world data. The datasets are all presented in the same simple format, using an attribute representation that can directly be used for neural network training. Along with the datasets, Proben1 defines a set of rules for how to conduct and how to document neural network benchmarking. The purpose of the problem and rule collection is to give researchers easy access to data for the evaluation of their algorithms and networks and to make direct comparison of the published results feasible. This report describes the datasets and the benchmarking rules. It also gives some basic performance measures indicating the difficulty of the various problems. These measures can be used as baselines for comparison. ",
"neighbors": [
74,
112,
572,
816,
1119,
1203,
1256,
1411,
2203,
2295,
2405,
2423,
2454,
2702
],
"mask": "Train"
},
{
"node_id": 882,
"label": 4,
"text": "Title: Learning To Play the Game of Chess \nAbstract: This paper presents NeuroChess, a program which learns to play chess from the final outcome of games. NeuroChess learns chess board evaluation functions, represented by artificial neural networks. It integrates inductive neural network learning, temporal differencing, and a variant of explanation-based learning. Performance results illustrate some of the strengths and weaknesses of this approach.",
"neighbors": [
523,
565,
575,
1012,
1316,
1378,
1529,
1616
],
"mask": "Train"
},
{
"node_id": 883,
"label": 0,
"text": "Title: A Preprocessing Model for Integrating CBR and Prototype-Based Neural Networks \nAbstract: Some important factors that play a major role in determining the performances of a CBR (Case-Based Reasoning) system are the complexity and the accuracy of the retrieval phase. Both flat memory and inductive approaches suffer from serious drawbacks. In the first approach, the search time increases when dealing with large scale memory base, while in the second one the modification of the case memory becomes very complex because of its sophisticated architecture. In this paper, we show how we construct a simple efficient indexing system structure. The idea is to construct a case hierarchy with two levels of memory: the lower level contains cases organised into groups of similar cases, while the upper level contains prototypes. each prototype represents one group of cases. This smaller memory is used during the retrieval phase. Prototype construction is achieved by means of an incremental prototype-based NN (Neural Network). We show that this mode of CBR-NN coupling is a preprocessing one where the neural network serves as an indexing system to the ",
"neighbors": [
1210,
1665
],
"mask": "Train"
},
{
"node_id": 884,
"label": 6,
"text": "Title: PAC Learning of One-Dimensional Patterns \nAbstract: Developing the ability to recognize a landmark from a visual image of a robot's current location is a fundamental problem in robotics. We consider the problem of PAC-learning the concept class of geometric patterns where the target geometric pattern is a configuration of k points on the real line. Each instance is a configuration of n points on the real line, where it is labeled according to whether or not it visually resembles the target pattern. To capture the notion of visual resemblance we use the Hausdorff metric. Informally, two geometric patterns P and Q resemble each other under the Hausdorff metric, if every point on one pattern is \"close\" to some point on the other pattern. We relate the concept class of geometric patterns to the landmark recognition problem and then present a polynomial-time algorithm that PAC-learns the class of one-dimensional geometric patterns. We also present some experimental results on how our algorithm performs. ",
"neighbors": [
109,
1658
],
"mask": "Train"
},
{
"node_id": 885,
"label": 6,
"text": "Title: Model selection using measure functions \nAbstract: The concept of measure functions for generalization performance is suggested. This concept provides an alternative way of selecting and evaluating learned models (classifiers). In addition, it makes it possible to state a learning problem as a computational problem. The the known prior (meta-)knowledge about the problem domain is captured in a measure function that, to each possible combination of a training set and a classifier, assigns a value describing how good the classifier is. The computational problem is then to find a classifier maximizing the measure function. We argue that measure functions are of great value for practical applications. Besides of being a tool for model selection, they: (i) force us to make explicit the relevant prior knowledge about the learning problem at hand, (ii) provide a deeper understanding of existing algorithms, and (iii) help us in the construction of problem-specific algorithms. We illustrate the last point by suggesting a novel algorithm based on incremental search for a classifier that optimizes a given measure function.",
"neighbors": [
177,
831,
1335
],
"mask": "Test"
},
{
"node_id": 886,
"label": 2,
"text": "Title: In Search Of Articulated Attractors \nAbstract: Recurrent attractor networks offer many advantages over feed-forward networks for the modeling of psychological phenomena. Their dynamic nature allows them to capture the time course of cognitive processing, and their learned weights may often be easily interpreted as soft constraints between representational components. Perhaps the most significant feature of such networks, however, is their ability to facilitate generalization by enforcing well formedness constraints on intermediate and output representations. Attractor networks which learn the systematic regularities of well formed representations by exposure to a small number of examples are said to possess articulated attractors. This paper investigates the conditions under which articulated attractors arise in recurrent networks trained using variants of backpropagation. The results of computational experiments demonstrate that such structured attrac-tors can spontaneously appear in an emergence of systematic-ity, if an appropriate error signal is presented directly to the recurrent processing elements. We show, however, that distal error signals, backpropagated through intervening weights, pose serious problems for networks of this kind. We present simulation results, discuss the reasons for this difficulty, and suggest some directions for future attempts to surmount it. ",
"neighbors": [
1295
],
"mask": "Train"
},
{
"node_id": 887,
"label": 6,
"text": "Title: Simplifying Decision Trees: A Survey \nAbstract: Induced decision trees are an extensively-researched solution to classification tasks. For many practical tasks, the trees produced by tree-generation algorithms are not comprehensible to users due to their size and complexity. Although many tree induction algorithms have been shown to produce simpler, more comprehensible trees (or data structures derived from trees) with good classification accuracy, tree simplification has usually been of secondary concern relative to accuracy and no attempt has been made to survey the literature from the perspective of simplification. We present a framework that organizes the approaches to tree simplification and summarize and critique the approaches within this framework. The purpose of this survey is to provide researchers and practitioners with a concise overview of tree-simplification approaches and insight into their relative capabilities. In our final discussion, we briefly describe some empirical findings and discuss the application of tree induction algorithms to case retrieval in case-based reasoning systems.",
"neighbors": [
983,
1531
],
"mask": "Train"
},
{
"node_id": 888,
"label": 3,
"text": "Title: Looking at Markov Samplers through Cusum Path Plots: a simple diagnostic idea \nAbstract: In this paper, we propose to monitor a Markov chain sampler using the cusum path plot of a chosen 1-dimensional summary statistic. We argue that the cusum path plot can bring out, more effectively than the sequential plot, those aspects of a Markov sampler which tell the user how quickly or slowly the sampler is moving around in its sample space, in the direction of the summary statistic. The proposal is then illustrated in four examples which represent situations where the cusum path plot works well and not well. Moreover, a rigorous analysis is given for one of the examples. We conclude that the cusum path plot is an effective tool for convergence diagnostics of a Markov sampler and for comparing different Markov samplers. ",
"neighbors": [
41,
896,
1731
],
"mask": "Test"
},
{
"node_id": 889,
"label": 3,
"text": "Title: Bounding Convergence Time of the Gibbs Sampler in Bayesian Image Restoration \nAbstract: This paper gives precise, easy to compute bounds on the convergence time of the Gibbs sampler used in Bayesian image reconstruction. For sampling from the Gibbs distribution both with and without the presence of an external field, bounds that are N 2 in the number of pixels are obtained, with a proportionality constant that is easy to calculate. Some key words: Bayesian image restoration; Convergence; Gibbs sampler; Ising model; Markov chain Monte Carlo.",
"neighbors": [
41,
892,
904,
1713,
2153
],
"mask": "Train"
},
{
"node_id": 890,
"label": 2,
"text": "Title: Breaking Rotational Symmetry in a Self-Organizing Map-Model for Orientation Map Development \nAbstract: This paper gives precise, easy to compute bounds on the convergence time of the Gibbs sampler used in Bayesian image reconstruction. For sampling from the Gibbs distribution both with and without the presence of an external field, bounds that are N 2 in the number of pixels are obtained, with a proportionality constant that is easy to calculate. Some key words: Bayesian image restoration; Convergence; Gibbs sampler; Ising model; Markov chain Monte Carlo.",
"neighbors": [
18,
866
],
"mask": "Train"
},
{
"node_id": 891,
"label": 3,
"text": "Title: Coupled hidden Markov models for modeling interacting processes \nAbstract: c flMIT Media Lab Perceptual Computing / Learning and Common Sense Technical Report 405 3nov96, revised 3jun97 Abstract We present methods for coupling hidden Markov models (hmms) to model systems of multiple interacting processes. The resulting models have multiple state variables that are temporally coupled via matrices of conditional probabilities. We introduce a deterministic O(T (CN ) 2 ) approximation for maximum a posterior (MAP) state estimation which enables fast classification and parameter estimation via expectation maximization. An \"N-heads\" dynamic programming algorithm samples from the highest probability paths through a compact state trellis, minimizing an upper bound on the cross entropy with the full (combinatoric) dynamic programming problem. The complexity is O(T (CN ) 2 ) for C chains of N states apiece observing T data points, compared with O(T N 2C ) for naive (Cartesian product), exact (state clustering), and stochastic (Monte Carlo) methods applied to the same inference problem. In several experiments examining training time, model likelihoods, classification accuracy, and robustness to initial conditions, coupled hmms compared favorably with conventional hmms and with energy-based approaches to coupled inference chains. We demonstrate and compare these algorithms on synthetic and real data, including interpretation of video.",
"neighbors": [
1437
],
"mask": "Train"
},
{
"node_id": 892,
"label": 3,
"text": "Title: Possible biases induced by MCMC convergence diagnostics \nAbstract: c flMIT Media Lab Perceptual Computing / Learning and Common Sense Technical Report 405 3nov96, revised 3jun97 Abstract We present methods for coupling hidden Markov models (hmms) to model systems of multiple interacting processes. The resulting models have multiple state variables that are temporally coupled via matrices of conditional probabilities. We introduce a deterministic O(T (CN ) 2 ) approximation for maximum a posterior (MAP) state estimation which enables fast classification and parameter estimation via expectation maximization. An \"N-heads\" dynamic programming algorithm samples from the highest probability paths through a compact state trellis, minimizing an upper bound on the cross entropy with the full (combinatoric) dynamic programming problem. The complexity is O(T (CN ) 2 ) for C chains of N states apiece observing T data points, compared with O(T N 2C ) for naive (Cartesian product), exact (state clustering), and stochastic (Monte Carlo) methods applied to the same inference problem. In several experiments examining training time, model likelihoods, classification accuracy, and robustness to initial conditions, coupled hmms compared favorably with conventional hmms and with energy-based approaches to coupled inference chains. We demonstrate and compare these algorithms on synthetic and real data, including interpretation of video.",
"neighbors": [
41,
138,
889,
904,
1713,
1716
],
"mask": "Train"
},
{
"node_id": 893,
"label": 5,
"text": "Title: LEARNING LOGICAL EXCEPTIONS IN CHESS \nAbstract: c flMIT Media Lab Perceptual Computing / Learning and Common Sense Technical Report 405 3nov96, revised 3jun97 Abstract We present methods for coupling hidden Markov models (hmms) to model systems of multiple interacting processes. The resulting models have multiple state variables that are temporally coupled via matrices of conditional probabilities. We introduce a deterministic O(T (CN ) 2 ) approximation for maximum a posterior (MAP) state estimation which enables fast classification and parameter estimation via expectation maximization. An \"N-heads\" dynamic programming algorithm samples from the highest probability paths through a compact state trellis, minimizing an upper bound on the cross entropy with the full (combinatoric) dynamic programming problem. The complexity is O(T (CN ) 2 ) for C chains of N states apiece observing T data points, compared with O(T N 2C ) for naive (Cartesian product), exact (state clustering), and stochastic (Monte Carlo) methods applied to the same inference problem. In several experiments examining training time, model likelihoods, classification accuracy, and robustness to initial conditions, coupled hmms compared favorably with conventional hmms and with energy-based approaches to coupled inference chains. We demonstrate and compare these algorithms on synthetic and real data, including interpretation of video.",
"neighbors": [
1174,
1259,
1290
],
"mask": "Train"
},
{
"node_id": 894,
"label": 3,
"text": "Title: Bayesian Analysis of Agricultural Field Experiments \nAbstract: SUMMARY The paper describes Bayesian analysis for agricultural field experiments, a topic that has received very little previous attention, despite a vast frequentist literature. Adoption of the Bayesian paradigm simplifies the interpretation of the results, especially in ranking and selection. Also, complex formulations can be analyzed with comparative ease, using Markov chain Monte Carlo methods. A key ingredient in the approach is the need for spatial representations of the unobserved fertility patterns. This is discussed in detail. Problems caused by outliers and by jumps in fertility are tackled via hierarchical-t formulations that may find use in other contexts. The paper includes three analyses of variety trials for yield and one example involving binary data; none is entirely straightforward. Some comparisons with frequentist analyses are made. The datasets are available at http://www.stat.duke.edu/~higdon/trials/data.html. ",
"neighbors": [
1255
],
"mask": "Test"
},
{
"node_id": 895,
"label": 3,
"text": "Title: CABeN: A Collection of Algorithms for Belief Networks Correspond with: \nAbstract: Portions of this report have been published in the Proceedings of the Fifteenth Annual Symposium on Computer Applications in Medical Care (November, 1991). ",
"neighbors": [
1436
],
"mask": "Train"
},
{
"node_id": 896,
"label": 3,
"text": "Title: Discretization of continuous Markov chains and MCMC convergence assessment \nAbstract: We show in this paper that continuous state space Markov chains can be rigorously discretized into finite Markov chains. The idea is to subsample the continuous chain at renewal times related to small sets which control the discretization. Once a finite Markov chain is derived from the MCMC output, general convergence properties on finite state spaces can be exploited for convergence assessment in several directions. Our choice is based on a divergence criterion derived from Kemeny and Snell (1960), which is first evaluated on parallel chains with a stopping time, and then implemented, more efficiently, on two parallel chains only, using Birkhoff's pointwise ergodic theorem for stopping rules. The performance of this criterion is illustrated on three standard examples. ",
"neighbors": [
468,
888,
1372
],
"mask": "Train"
},
{
"node_id": 897,
"label": 3,
"text": "Title: Parallel Markov chain Monte Carlo sampling. \nAbstract: Markov chain Monte Carlo (MCMC) samplers have proved remarkably popular as tools for Bayesian computation. However, problems can arise in their application when the density of interest is high dimensional and strongly correlated. In these circumstances the sampler may be slow to traverse the state space and mixing is poor. In this article we offer a partial solution to this problem. The state space of the Markov chain is augmented to accommodate multiple chains in parallel. Updates to individual chains are based around a genetic style crossover operator acting on `parent' states drawn from the population of chains. This process makes efficient use of gradient information implicitly encoded within the distribution of states across the population. Empirical studies support the claim that the crossover operator acting on a parallel population of chains improves mixing. This is illustrated with an example of sampling a high dimensional posterior probability density from a complex predictive model. By adopting a latent variable approach the methodology is extended to deal with variable selection and model averaging in high dimensions. This is illustrated with an example of knot selection for a spline interpolant. ",
"neighbors": [
157,
1240
],
"mask": "Train"
},
{
"node_id": 898,
"label": 3,
"text": "Title: database \nAbstract: MIT Computational Cognitive Science Technical Report 9701 Abstract We describe variational approximation methods for efficient probabilistic reasoning, applying these methods to the problem of diagnostic inference in the QMR-DT database. The QMR-DT database is a large-scale belief network based on statistical and expert knowledge in internal medicine. The size and complexity of this network render exact probabilistic diagnosis infeasible for all but a small set of cases. This has hindered the development of the QMR- DT network as a practical diagnostic tool and has hindered researchers from exploring and critiquing the diagnostic behavior of QMR. In this paper we describe how variational approximation methods can be applied to the QMR network, resulting in fast diagnostic inference. We evaluate the accuracy of our methods on a set of standard diagnostic cases and compare to stochastic sampling methods. ",
"neighbors": [
107,
108,
1687
],
"mask": "Train"
},
{
"node_id": 899,
"label": 2,
"text": "Title: GA-RBF: A Self-Optimising RBF Network \nAbstract: The effects of a neural network's topology on its performance are well known, yet the question of finding optimal configurations automatically remains largely open. This paper proposes a solution to this problem for RBF networks. A self- optimising approach, driven by an evolutionary strategy, is taken. The algorithm uses output information and a computationally efficient approximation of RBF networks to optimise the K-means clustering process by co-evolving the two determinant parameters of the network's layout: the number of centroids and the centroids' positions. Empirical results demonstrate promise. ",
"neighbors": [
611,
1564,
1565,
1672
],
"mask": "Train"
},
{
"node_id": 900,
"label": 1,
"text": "Title: Evolution, Learning, and Instinct: 100 Years of the Baldwin Effect Using Learning to Facilitate the\nAbstract: This paper describes a hybrid methodology that integrates genetic algorithms and decision tree learning in order to evolve useful subsets of discriminatory features for recognizing complex visual concepts. A genetic algorithm (GA) is used to search the space of all possible subsets of a large set of candidate discrimination features. Candidate feature subsets are evaluated by using C4.5, a decision-tree learning algorithm, to produce a decision tree based on the given features using a limited amount of training data. The classification performance of the resulting decision tree on unseen testing data is used as the fitness of the underlying feature subset. Experimental results are presented to show how increasing the amount of learning significantly improves feature set evolution for difficult visual recognition problems involving satellite and facial image data. In addition, we also report on the extent to which other more subtle aspects of the Baldwin effect are exhibited by the system. ",
"neighbors": [
228,
1207,
1351,
1533,
1583
],
"mask": "Train"
},
{
"node_id": 901,
"label": 0,
"text": "Title: Evaluating Computational Assistance for Crisis Response \nAbstract: In this paper we examine the behavior of a human-computer system for crisis response. As one instance of crisis management, we describe the task of responding to spills and fires involving hazardous materials. We then describe INCA, an intelligent assistant for planning and scheduling in this domain, and its relation to human users. We focus on INCA's strategy of retrieving a case from a case library, seeding the initial schedule, and then helping the user adapt this seed. We also present three hypotheses about the behavior of this mixed-initiative system and some experiments designed to test them. The results suggest that our approach leads to faster response development than user-generated or automatically-generated schedules but without sacrificing solution quality. ",
"neighbors": [
1497,
1553,
1554
],
"mask": "Validation"
},
{
"node_id": 902,
"label": 1,
"text": "Title: Explanations of Empirically Derived Reactive Plans \nAbstract: Given an adequate simulation model of the task environment and payoff function that measures the quality of partially successful plans, competition-based heuristics such as genetic algorithms can develop high performance reactive rules for interesting sequential decision tasks. We have previously described an implemented system, called SAMUEL, for learning reactive plans and have shown that the system can successfully learn rules for a laboratory scale tactical problem. In this paper, we describe a method for deriving explanations to justify the success of such empirically derived rule sets. The method consists of inferring plausible subgoals and then explaining how the reactive rules trigger a sequence of actions (i.e., a stra tegy) to satisfy the subgoals. ",
"neighbors": [
910,
964,
965,
966,
981,
1174
],
"mask": "Test"
},
{
"node_id": 903,
"label": 6,
"text": "Title: Learning Concepts from Sensor Data of a Mobile Robot \nAbstract: Machine learning can be a most valuable tool for improving the flexibility and efficiency of robot applications. Many approaches to applying machine learning to robotics are known. Some approaches enhance the robot's high-level processing, the planning capabilities. Other approaches enhance the low-level processing, the control of basic actions. In contrast, the approach presented in this paper uses machine learning for enhancing the link between the low-level representations of sensing and action and the high-level representation of planning. The aim is to facilitate the communication between the robot and the human user. A hierarchy of concepts is learned from route records of a mobile robot. Perception and action are combined at every level, i.e., the concepts are perceptually anchored. The relational learning algorithm grdt has been developed which completely searches in a hypothesis space, that is restricted by rule schemata, which the user defines in terms of grammars. ",
"neighbors": [
1491,
1672
],
"mask": "Test"
},
{
"node_id": 904,
"label": 3,
"text": "Title: Assessing Convergence of Markov Chain Monte Carlo Algorithms \nAbstract: We motivate the use of convergence diagnostic techniques for Markov Chain Monte Carlo algorithms and review various methods proposed in the MCMC literature. A common notation is established and each method is discussed with particular emphasis on implementational issues and possible extensions. The methods are compared in terms of their interpretability and applicability and recommendations are provided for particular classes of problems.",
"neighbors": [
115,
292,
352,
889,
892,
1372,
1733
],
"mask": "Validation"
},
{
"node_id": 905,
"label": 3,
"text": "Title: Compositional Modeling With DPNs \nAbstract: We motivate the use of convergence diagnostic techniques for Markov Chain Monte Carlo algorithms and review various methods proposed in the MCMC literature. A common notation is established and each method is discussed with particular emphasis on implementational issues and possible extensions. The methods are compared in terms of their interpretability and applicability and recommendations are provided for particular classes of problems.",
"neighbors": [
558,
976,
1287,
1393,
1397
],
"mask": "Train"
},
{
"node_id": 906,
"label": 2,
"text": "Title: Memory-based Time Series Recognition A New Methodology and Real World Applications \nAbstract: We motivate the use of convergence diagnostic techniques for Markov Chain Monte Carlo algorithms and review various methods proposed in the MCMC literature. A common notation is established and each method is discussed with particular emphasis on implementational issues and possible extensions. The methods are compared in terms of their interpretability and applicability and recommendations are provided for particular classes of problems.",
"neighbors": [
74,
1019,
1860
],
"mask": "Train"
},
{
"node_id": 907,
"label": 2,
"text": "Title: Visual Tracking of Moving Objects using a Neural Network Controller \nAbstract: For a target tracking task, the hand-held camera of the anthropomorphic OSCAR-robot manipulator has to track an object which moves arbitrarily on a table. The desired camera-joint mapping is approximated by a feedforward neural network. Through the use of time derivatives of the position of the object and of the manipulator, the controller can inherently predict the next position of the moving target object. In this paper several `anticipative' controllers are described, and successfully applied to track a moving object.",
"neighbors": [
1252
],
"mask": "Validation"
},
{
"node_id": 908,
"label": 2,
"text": "Title: Eclectic Machine Learning \nAbstract: For a target tracking task, the hand-held camera of the anthropomorphic OSCAR-robot manipulator has to track an object which moves arbitrarily on a table. The desired camera-joint mapping is approximated by a feedforward neural network. Through the use of time derivatives of the position of the object and of the manipulator, the controller can inherently predict the next position of the moving target object. In this paper several `anticipative' controllers are described, and successfully applied to track a moving object.",
"neighbors": [
414,
809,
919,
2304
],
"mask": "Test"
},
{
"node_id": 909,
"label": 3,
"text": "Title: Regression Can Build Predictive Causal Models \nAbstract: Covariance information can help an algorithm search for predictive causal models and estimate the strengths of causal relationships. This information should not be discarded after conditional independence constraints are identified, as is usual in contemporary causal induction algorithms. Our fbd algorithm combines covariance information with an effective heuristic to build predictive causal models. We demonstrate that fbd is accurate and efficient. In one experiment we assess fbd's ability to find the best predictors for variables; in another we compare its performance, using many measures, with Pearl and Verma's ic algorithm. And although fbd is based on multiple linear regression, we cite evidence that it performs well on problems that are very difficult for regression algorithms. ",
"neighbors": [
827,
913,
1527,
1894
],
"mask": "Train"
},
{
"node_id": 910,
"label": 1,
"text": "Title: Learning Sequential Decision Rules Using Simulation Models and Competition \nAbstract: The problem of learning decision rules for sequential tasks is addressed, focusing on the problem of learning tactical decision rules from a simple flight simulator. The learning method relies on the notion of competition and employs genetic algorithms to search the space of decision policies. Several experiments are presented that address issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested. ",
"neighbors": [
163,
523,
565,
764,
811,
902,
964,
966,
981,
982,
1117,
1131,
1140,
1221,
1253,
1311,
1432,
1481,
1573,
1589,
1590,
1673
],
"mask": "Train"
},
{
"node_id": 911,
"label": 6,
"text": "Title: Utilizing Prior Concepts for Learning \nAbstract: The inductive learning problem consists of learning a concept given examples and non-examples of the concept. To perform this learning task, inductive learning algorithms bias their learning method. Here we discuss biasing the learning method to use previously learned concepts from the same domain. These learned concepts highlight useful information for other concepts in the domain. We describe a transference bias and present M-FOCL, a Horn clause relational learning algorithm, that utilizes this bias to learn multiple concepts. We provide preliminary empirical evaluation to show the effects of biasing previous information on noise-free and noisy data.",
"neighbors": [
177,
585,
1354
],
"mask": "Validation"
},
{
"node_id": 912,
"label": 2,
"text": "Title: Statistical Ideas for Selecting Network Architectures \nAbstract: Choosing the architecture of a neural network is one of the most important problems in making neural networks practically useful, but accounts of applications usually sweep these details under the carpet. How many hidden units are needed? Should weight decay be used, and if so how much? What type of output units should be chosen? And so on. We address these issues within the framework of statistical theory for model This paper is principally concerned with architecture selection issues for feed-forward neural networks (also known as multi-layer perceptrons). Many of the same issues arise in selecting radial basis function networks, recurrent networks and more widely. These problems occur in a much wider context within statistics, and applied statisticians have been selecting and combining models for decades. Two recent discussions are [4, 5]. References [3, 20, 21, 22] discuss neural networks from a statistical perspective. choice, which provides a number of workable approximate answers.",
"neighbors": [
1149,
1150,
1240,
1241
],
"mask": "Validation"
},
{
"node_id": 913,
"label": 3,
"text": "Title: A Statistical Semantics for Causation Key words: causality, induction, learning \nAbstract: We propose a model-theoretic definition of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covari-ations following standard norms of inductive reasoning. We also establish a complete characterization of the conditions under which such a distinction is possible. Finally, we provide a proof-theoretical procedure for inductive causation and show that, for a large class of data and structures, effective algorithms exist that uncover the direction of causal influences as defined above.",
"neighbors": [
827,
909
],
"mask": "Train"
},
{
"node_id": 914,
"label": 2,
"text": "Title: All-to-all Broadcast on the CNS-1 \nAbstract: This study deals with the all-to-all broadcast on the CNS-1. We determine a lower bound for the run time and present an algorithm meeting this bound. Since this study points out a bottleneck in the network interface, we also analyze the performance of alternative interface designs. Our analyses are based on a run time model of the network. ",
"neighbors": [
272,
1551
],
"mask": "Train"
},
{
"node_id": 915,
"label": 3,
"text": "Title: Abstract \nAbstract: Automated decision making is often complicated by the complexity of the knowledge involved. Much of this complexity arises from the context-sensitive variations of the underlying phenomena. We propose a framework for representing descriptive, context-sensitive knowledge. Our approach attempts to integrate categorical and uncertain knowledge in a network formalism. This paper outlines the basic representation constructs, examines their expressiveness and efficiency, and discusses the potential applications of the framework.",
"neighbors": [
1172,
1294
],
"mask": "Validation"
},
{
"node_id": 916,
"label": 2,
"text": "Title: A comparison of some error estimates for neural network models Summary \nAbstract: We discuss a number of methods for estimating the standard error of predicted values from a multi-layer perceptron. These methods include the delta method based on the Hessian, bootstrap estimators, and the \"sandwich\" estimator. The methods are described and compared in a number of examples. We find that the bootstrap methods perform best, partly because they capture variability due to the choice of starting weights. ",
"neighbors": [
157,
427,
1038,
2373,
2374,
2686
],
"mask": "Train"
},
{
"node_id": 917,
"label": 3,
"text": "Title: Practical Bayesian Inference Using Mixtures of Mixtures \nAbstract: Discrete mixtures of normal distributions are widely used in modeling amplitude fluctuations of electrical potentials at synapses of human, and other animal nervous systems. The usual framework has independent data values y j arising as y j = j + x n 0 +j where the means j come from some discrete prior G() and the unknown x n 0 +j 's and observed x j ; j = 1; : : : ; n 0 are gaussian noise terms. A practically important development of the associated statistical methods is the issue of non-normality of the noise terms, often the norm rather than the exception in the neurological context. We have recently developed models, based on convolutions of Dirichlet process mixtures, for such problems. Explicitly, we model the noise data values x j as arising from a Dirich-let process mixture of normals, in addition to modeling the location prior G() as a Dirichlet process itself. This induces a Dirichlet mixture of mixtures of normals, whose analysis may be developed using Gibbs sampling techniques. We discuss these models and their analysis, and illustrate in the context of neurological response analysis. ",
"neighbors": [
784,
850,
1338
],
"mask": "Train"
},
{
"node_id": 918,
"label": 2,
"text": "Title: Predictive Robot Control with Neural Networks \nAbstract: Neural controllers are able to position the hand-held camera of the (3DOF) anthropomorphic OSCAR-robot manipulator above an object which is arbitrary placed on a table. The desired camera-joint mapping is approximated by feedforward neural networks. However, if the object is moving, the manipulator lags behind because of the required time to preprocess the visual information and to move the manipulator. Through the use of time derivatives of the position of the object and of the manipulator, the controller can inherently predict the next position of the object. In this paper several `predictive' controllers are proposed, and successfully applied to track a moving object.",
"neighbors": [
1252
],
"mask": "Train"
},
{
"node_id": 919,
"label": 2,
"text": "Title: A Generalizing Adaptive Discriminant Network \nAbstract: This paper overviews the AA1 (Adaptive Algorithm 1) model of ASOCS the (Adaptive Self - Organizing Concurrent Systems) approach. It also presents promising empirical generalization results of AA1 with actual data. AA1 is a topologically dynamic network which grows to fit the problem being learned. AA1 generalizes in a self-organizing fashion to a network which seeks to find features which discriminate between concepts. Convergence to a training set is both guaranteed and bounded linearly in time. ",
"neighbors": [
809,
908,
1129
],
"mask": "Validation"
},
{
"node_id": 920,
"label": 2,
"text": "Title: Maximum likelihood source separation for discrete sources \nAbstract: This communication deals with the source separation problem which consists in the separation of a noisy mixture of independent sources without a priori knowledge of the mixture coefficients. In this paper, we consider the maximum likelihood (ML) approach for discrete source signals with known probability distributions. An important feature of the ML approach in Gaussian noise is that the covariance matrix of the additive noise can be treated as a parameter. Hence, it is not necessary to know or to model the spatial structure of the noise. Another striking feature offered in the case of discrete sources is that, under mild assumptions, it is possible to separate more sources than sensors. In this paper, we consider maximization of the likelihood via the Expectation-Maximization (EM) algorithm.",
"neighbors": [
873,
1520
],
"mask": "Validation"
},
{
"node_id": 921,
"label": 2,
"text": "Title: DIAGNOSING AND CORRECTING SYSTEM ANOMALIES WITH A ROBUST CLASSIFIER \nAbstract: If a robust statistical model has been developed to classify the ``health'' of a system, a well-known Taylor series approximation technique forms the basis of a diagnostic/recovery procedure that can be initiated when the system's health degrades or fails altogether. This procedure determines a ranked set of probable causes for the degraded health state, which can be used as a prioritized checklist for isolating system anomalies and quantifying corrective action. The diagnostic/recovery procedure is applicable to any classifier known to be robust; it can be applied to both neural network and traditional parametric pattern classifiers generated by a supervised learning procedure in which an empirical risk/benefit measure is optimized. We describe the procedure mathematically and demonstrate its ability to detect and diagnose the cause(s) of faults in NASA's Deep Space Communications Complex at Goldstone, California. ",
"neighbors": [
1265,
1725
],
"mask": "Train"
},
{
"node_id": 922,
"label": 0,
"text": "Title: Towards Improving Case Adaptability with a Genetic Algorithm \nAbstract: Case combination is a difficult problem in Case Based Reasoning, as sub-cases often exhibit conflicts when merged together. In our previous work we formalized case combination by representing each case as a constraint satisfaction problem, and used the minimum conflicts algorithm to systematically synthesize the global solution. However, we also found instances of the problem in which the minimum conflicts algorithm does not perform case combination efficiently. In this paper we describe those situations in which initially retrieved cases are not easily adaptable, and propose a method by which to improve case adaptability with a genetic algorithm. We introduce a fitness function that maintains as much retrieved case information as possible, while also perturbing a sub-solution to allow subsequent case combination to proceed more efficiently.",
"neighbors": [
580,
923,
1233,
1416
],
"mask": "Validation"
},
{
"node_id": 923,
"label": 0,
"text": "Title: Dynamic Constraint Satisfaction using Case-Based Reasoning Techniques \nAbstract: The Dynamic Constraint Satisfaction Problem (DCSP) formalism has been gaining attention as a valuable and often necessary extension of the static CSP framework. Dynamic Constraint Satisfaction enables CSP techniques to be applied more extensively, since it can be applied in domains where the set of constraints and variables involved in the problem evolves with time. At the same time, the Case-Based Reasoning (CBR) community has been working on techniques by which to reuse existing solutions when solving new problems. We have observed that dynamic constraint satisfaction matches very closely the case-based reasoning process of case adaptation. These observations emerged from our previous work on combining CBR and CSP to achieve a constraint-based adaptation. This paper summarizes our previous results, describes the similarity of the challenges facing both DCSP and case adaptation, and shows how CSP and CBR can together begin to address these chal lenges.",
"neighbors": [
922,
1126,
1233,
1416
],
"mask": "Train"
},
{
"node_id": 924,
"label": 6,
"text": "Title: Quantifying Prior Determination Knowledge using the PAC Learning Model \nAbstract: Prior knowledge, or bias, regarding a concept can speed up the task of learning it. Probably Approximately Correct (PAC) learning is a mathematical model of concept learning that can be used to quantify the speed up due to different forms of bias on learning. Thus far, PAC learning has mostly been used to analyze syntactic bias, such as limiting concepts to conjunctions of boolean prepositions. This paper demonstrates that PAC learning can also be used to analyze semantic bias, such as a domain theory about the concept being learned. The key idea is to view the hypothesis space in PAC learning as that consistent with all prior knowledge, syntactic and semantic. In particular, the paper presents a PAC analysis of determinations, a type of relevance knowledge. The results of the analysis reveal crisp distinctions and relations among different determinations, and illustrate the usefulness of an analysis based on the PAC model. ",
"neighbors": [
672,
1309,
1460,
1480,
1539
],
"mask": "Train"
},
{
"node_id": 925,
"label": 2,
"text": "Title: Learning in the Presence of Prior Knowledge: A Case Study Using Model Calibration \nAbstract: Computational models of natural systems often contain free parameters that must be set to optimize the predictive accuracy of the models. This process| called calibration|can be viewed as a form of supervised learning in the presence of prior knowledge. In this view, the fixed aspects of the model constitute the prior knowledge, and the goal is to learn values for the free parameters. We report on a series of attempts to learn parameter values for a global vegetation model called MAPSS (Mapped Atmosphere-Plant-Soil System) developed by our collaborator, Ron Neilson. Standard machine learning methods do not work with MAPSS, because the constraints introduced by the structure of the model create a very difficult non-linear optimization problem. We developed a new divide-and-conquer approach in which subsets of the parameters are calibrated while others are held constant. This approach succeeds because it is possible to select training examples that exercise only portions of the model. ",
"neighbors": [
1532
],
"mask": "Test"
},
{
"node_id": 926,
"label": 6,
"text": "Title: Virtual Seens and the Frequently Used Dataset \nAbstract: The paper considers the situation in which a learner's testing set contains close approximations of cases which appear in the training set. Such cases can be considered `virtual seens' since they are approximately seen by the learner. Generalisation measures which do not take account of the frequency of virtual seens may be misleading. The paper shows that the 1-NN algorithm can be used to derive a normalising baseline for gen-eralisation statistics. The normalisation process is demonstrated though application to Holte's [1] study in which the generalisation performance of the 1R algorithm was tested against C4.5 on 16 commonly used datasets.",
"neighbors": [
1019,
1112
],
"mask": "Train"
},
{
"node_id": 927,
"label": 0,
"text": "Title: Exemplar-based Music Structure Recognition \nAbstract: We tend to think of what we really know as what we can talk about, and disparage knowledge that we can't verbalize. [Dowling 1989, p. 252] ",
"neighbors": [
1019,
1328
],
"mask": "Train"
},
{
"node_id": 928,
"label": 0,
"text": "Title: Learning to Refine Case Libraries: \nAbstract: Initial Results Abstract. Conversational case-based reasoning (CBR) systems, which incrementally extract a query description through a user-directed conversation, are advertised for their ease of use. However, designing large case libraries that have good performance (i.e., precision and querying efficiency) is difficult. CBR vendors provide guidelines for designing these libraries manually, but the guidelines are difficult to apply. We describe an automated inductive approach that revises conversational case libraries to increase their conformance with design guidelines. Revision increased performance on three conversational case libraries.",
"neighbors": [
256,
983,
1636
],
"mask": "Test"
},
{
"node_id": 929,
"label": 3,
"text": "Title: In: A Mixture Model System for Medical and Machine Diagnosis \nAbstract: Diagnosis of human disease or machine fault is a missing data problem since many variables are initially unknown. Additional information needs to be obtained. The joint probability distribution of the data can be used to solve this problem. We model this with mixture models whose parameters are estimated by the EM algorithm. This gives the benefit that missing data in the database itself can also be handled correctly. The request for new information to refine the diagnosis is performed using the maximum utility principle. Since the system is based on learning it is domain independent and less labor intensive than expert systems or probabilistic networks. An example using a heart disease database is presented.",
"neighbors": [
71,
740,
1559,
1697,
2442
],
"mask": "Test"
},
{
"node_id": 930,
"label": 2,
"text": "Title: BACKPROPAGATION CAN GIVE RISE TO SPURIOUS LOCAL MINIMA EVEN FOR NETWORKS WITHOUT HIDDEN LAYERS \nAbstract: We give an example of a neural net without hidden layers and with a sigmoid transfer function, together with a training set of binary vectors, for which the sum of the squared errors, regarded as a function of the weights, has a local minimum which is not a global minimum. The example consists of a set of 125 training instances, with four weights and a threshold to be learnt. We do not know if substantially smaller binary examples exist. ",
"neighbors": [
805,
1062,
1254
],
"mask": "Train"
},
{
"node_id": 931,
"label": 6,
"text": "Title: MAJORITY VOTE CLASSIFIERS: THEORY AND APPLICATIONS \nAbstract: We give an example of a neural net without hidden layers and with a sigmoid transfer function, together with a training set of binary vectors, for which the sum of the squared errors, regarded as a function of the weights, has a local minimum which is not a global minimum. The example consists of a set of 125 training instances, with four weights and a threshold to be learnt. We do not know if substantially smaller binary examples exist. ",
"neighbors": [
70,
1000,
1053,
1463,
1484
],
"mask": "Test"
},
{
"node_id": 932,
"label": 6,
"text": "Title: Learning an Optimally Accurate Representational System \nAbstract: The multiple extension problem arises because a default theory can use different subsets of its defaults to propose different, mutually incompatible, answers to some queries. This paper presents an algorithm that uses a set of observations to learn a credulous version of this default theory that is (essentially) \"optimally accurate\". In more detail, we can associate a given default theory with a set of related credulous theories R = fR i g, where each R i uses its own total ordering of the defaults to determine which single answer to return for each query. Our goal is to select the credulous theory that has the highest \"expected accuracy\", where each R i 's expected accuracy is the probability that the answer it produces to a query will correspond correctly to the world. Unfortunately, a theory's expected accuracy depends on the distribution of queries, which is usually not known. Moreover, the task of identifying the optimal R opt 2 R, even given that distribution information, is intractable. This paper presents a method, OptAcc, that sidesteps these problems by using a set of samples to estimate the unknown distribution, and by hill-climbing to a local optimum. In particular, given any parameters *; ffi > 0, OptAcc produces an R oa 2 R whose expected accuracy is, with probability at least 1 ffi, within * of a local optimum. Appeared in ECAI Workshop on Theoretical Foundations of Knowledge Representation and Reasoning, ",
"neighbors": [
251,
865,
1505
],
"mask": "Test"
},
{
"node_id": 933,
"label": 4,
"text": "Title: Learning an Optimally Accurate Representational System \nAbstract: Multigrid Q-Learning Charles W. Anderson and Stewart G. Crawford-Hines Technical Report CS-94-121 October 11, 1994 ",
"neighbors": [
483,
875
],
"mask": "Train"
},
{
"node_id": 934,
"label": 1,
"text": "Title: Complexity Compression and Evolution \nAbstract: Compression of information is an important concept in the theory of learning. We argue for the hypothesis that there is an inherent compression pressure towards short, elegant and general solutions in a genetic programming system and other variable length evolutionary algorithms. This pressure becomes visible if the size or complexity of solutions are measured without non-effective code segments called introns. The built in parsimony pressure effects complex fitness functions, crossover probability, generality, maximum depth or length of solutions, explicit parsimony, granularity of fitness function, initialization depth or length, and modulariz-ation. Some of these effects are positive and some are negative. In this work we provide a basis for an analysis of these effects and suggestions to overcome the negative implications in order to obtain the balance needed for successful evolution. An empirical investigation that supports our hypothesis is also presented.",
"neighbors": [
380,
844,
860,
940,
1009,
1353,
1631
],
"mask": "Validation"
},
{
"node_id": 935,
"label": 1,
"text": "Title: Complexity Compression and Evolution \nAbstract: CBR Assisted Explanation of GA Results Computer Science Technical Report number 361 CRCC Technical Report number 63 ",
"neighbors": [
163,
1136
],
"mask": "Train"
},
{
"node_id": 936,
"label": 4,
"text": "Title: XCS Classifier System Reliably Evolves Accurate, Complete, and Minimal Representations for Boolean Functions more complex\nAbstract: Wilson's recent XCS classifier system forms complete mappings of the payoff environment in the reinforcement learning tradition thanks to its accuracy based fitness. According to Wilson's Generalization Hypothesis, XCS has a tendency towards generalization. With the XCS Optimality Hypothesis, I suggest that XCS systems can evolve optimal populations (representations); populations which accurately map all input/action pairs to payoff predictions using the smallest possible set of non-overlapping classifiers. The ability of XCS to evolve optimal populations for boolean multiplexer problems is demonstrated using condensation, a technique in which evolutionary search is suspended by setting the crossover and mutation rates to zero. Condensation is automatically triggered by self-monitoring of performance statistics, and the entire learning process is terminated by autotermination. Combined, these techniques allow a classifier system to evolve optimal representations of boolean functions without any form of supervision. ",
"neighbors": [
1515
],
"mask": "Train"
},
{
"node_id": 937,
"label": 0,
"text": "Title: The RISE System: Conquering Without Separating \nAbstract: Current rule induction systems (e.g. CN2) typically rely on a \"separate and conquer\" strategy, learning each rule only from still-uncovered examples. This results in a dwindling number of examples being available for learning successive rules, adversely affecting the system's accuracy. An alternative is to learn all rules simultaneously, using the entire training set for each. This approach is implemented in the Rise 1.0 system. Empirical comparison of Rise with CN2 suggests that \"conquering without separating\" performs similarly to its counterpart in simple domains, but achieves increasingly substantial gains in accuracy as the domain difficulty grows. ",
"neighbors": [
426,
1234
],
"mask": "Train"
},
{
"node_id": 938,
"label": 1,
"text": "Title: Genetic Programming of Minimal Neural Nets Using Occam's Razor \nAbstract: A genetic programming method is investigated for optimizing both the architecture and the connection weights of multilayer feedforward neural networks. The genotype of each network is represented as a tree whose depth and width are dynamically adapted to the particular application by specifically defined genetic operators. The weights are trained by a next-ascent hillclimb-ing search. A new fitness function is proposed that quantifies the principle of Occam's razor. It makes an optimal trade-off between the error fitting ability and the parsimony of the network. We discuss the results for two problems of differing complexity and study the convergence and scaling properties of the algorithm.",
"neighbors": [
560,
611,
1127,
2196
],
"mask": "Train"
},
{
"node_id": 939,
"label": 2,
"text": "Title: A Simple Neural Network Models Categorical Perception of Facial Expressions \nAbstract: The performance of a neural network that categorizes facial expressions is compared with human subjects over a set of experiments using interpolated imagery. The experiments for both the human subjects and neural networks make use of interpolations of facial expressions from the Pictures of Facial Affect Database [Ekman and Friesen, 1976]. The only difference in materials between those used in the human subjects experiments [Young et al., 1997] and our materials are the manner in which the interpolated images are constructed - image-quality morphs versus pixel averages. Nevertheless, the neural network accurately captures the categorical nature of the human responses, showing sharp transitions in labeling of images along the interpolated sequence. Crucially for a demonstration of categorical perception [Harnad, 1987], the model shows the highest discrimination between transition images at the crossover point. The model also captures the shape of the reaction time curves of the human subjects along the sequences. Finally, the network matches human subjects' judgements of which expressions are being mixed in the images. The main failing of the model is that there are intrusions of neutral responses in some transitions, which are not seen in the human subjects. We attribute this difference to the difference between the pixel average stimuli and the image quality morph stimuli. These results show that a simple neural network classifier, with no access to the biological constraints that are presumably imposed on the human emotion processor, and whose only access to the surrounding culture is the category labels placed by American subjects on the facial expressions, can nevertheless simulate fairly well the human responses to emotional expressions. ",
"neighbors": [
1242
],
"mask": "Train"
},
{
"node_id": 940,
"label": 1,
"text": "Title: Signal Path Oriented Approach for Generation of Dynamic Process Models \nAbstract: The article at hand discusses a tool for automatic generation of structured models for complex dynamic processes by means of genetic programming. In contrast to other techniques which use genetic programming to find an appropriate arithmetic expression in order to describe the input-output behaviour of a process, this tool is based on a block oriented approach with a transparent description of signal paths. A short survey on other techniques for computer based system identification is given and the basic concept of SMOG (Structured MOdel Generator) is described. Furthermore latest extensions of the system are presented in detail, including automatically defined sub-models and quali tative fitness criteria.",
"neighbors": [
163,
934
],
"mask": "Train"
},
{
"node_id": 941,
"label": 1,
"text": "Title: Hyperplane Ranking in Simple Genetic Algorithms \nAbstract: We examine the role of hyperplane ranking during search performed by a simple genetic algorithm. We also develop a metric for measuring the degree of ranking that exists with respect to static measurements taken directly from the function, as well as the measurement of dynamic ranking of hyperplanes during genetic search. We show that the degree of dynamic ranking induced by a simple genetic algorithm is highly correlated with the degree of static ranking that is inherent in the function, especially during the initial genera tions of search.",
"neighbors": [
1441,
1717
],
"mask": "Test"
},
{
"node_id": 942,
"label": 1,
"text": "Title: Genetic Programming Methodology, Parallelization and Applications par \nAbstract: We examine the role of hyperplane ranking during search performed by a simple genetic algorithm. We also develop a metric for measuring the degree of ranking that exists with respect to static measurements taken directly from the function, as well as the measurement of dynamic ranking of hyperplanes during genetic search. We show that the degree of dynamic ranking induced by a simple genetic algorithm is highly correlated with the degree of static ranking that is inherent in the function, especially during the initial genera tions of search.",
"neighbors": [
163,
1153
],
"mask": "Train"
},
{
"node_id": 943,
"label": 1,
"text": "Title: Crossover or Mutation? \nAbstract: Genetic algorithms rely on two genetic operators crossover and mutation. Although there exists a large body of conventional wisdom concerning the roles of crossover and mutation, these roles have not been captured in a theoretical fashion. For example, it has never been theoretically shown that mutation is in some sense \"less powerful\" than crossover or vice versa. This paper provides some answers to these questions by theoretically demonstrating that there are some important characteristics of each operator that are not captured by the other.",
"neighbors": [
728,
793,
1016,
1466,
1646
],
"mask": "Validation"
},
{
"node_id": 944,
"label": 3,
"text": "Title: Bayesian Network Classification with Continuous Attributes: Getting the Best of Both Discretization and Parametric Fitting \nAbstract: In a recent paper, Friedman, Geiger, and Goldszmidt [8] introduced a classifier based on Bayesian networks, called Tree Augmented Naive Bayes (TAN), that outperforms naive Bayes and performs competitively with C4.5 and other state-of-the-art methods. This classifier has several advantages including robustness and polynomial computational complexity. One limitation of the TAN classifier is that it applies only to discrete attributes, and thus, continuous attributes must be prediscretized. In this paper, we extend TAN to deal with continuous attributes directly via parametric (e.g., Gaussians) and semiparametric (e.g., mixture of Gaussians) conditional probabilities. The result is a classifier that can represent and combine both discrete and continuous attributes. In addition, we propose a new method that takes advantage of the modeling language of Bayesian networks in order to represent attributes both in discrete and continuous form simultaneously, and use both versions in the classification. This automates the process of deciding which form of the attribute is most relevant to the classification task. It also avoids the commitment to either a discretized or a (semi)parametric form, since different attributes may correlate better with one version or the other. Our empirical results show that this latter method usually achieves classification performance that is as good as or better than either the purely discrete or the purely continuous TAN models.",
"neighbors": [
1335,
1337
],
"mask": "Train"
},
{
"node_id": 945,
"label": 3,
"text": "Title: Structured Representation of Complex Stochastic Systems \nAbstract: This paper considers the problem of representing complex systems that evolve stochastically over time. Dynamic Bayesian networks provide a compact representation for stochastic processes. Unfortunately, they are often unwieldy since they cannot explicitly model the complex organizational structure of many real life systems: the fact that processes are typically composed of several interacting subprocesses, each of which can, in turn, be further decomposed. We propose a hierarchically structured representation language which extends both dynamic Bayesian networks and the object-oriented Bayesian network framework of [9], and show that our language allows us to describe such systems in a natural and modular way. Our language supports a natural representation for certain system characteristics that are hard to capture using more traditional frameworks. For example, it allows us to represent systems where some processes evolve at a different rate than others, or systems where the processes interact only intermittently. We provide a simple inference mechanism for our representation via translation to Bayesian networks, and suggest ways in which the inference algorithm can exploit the additional structure encoded in our representation. ",
"neighbors": [
62,
327,
788,
1287,
1414
],
"mask": "Test"
},
{
"node_id": 946,
"label": 2,
"text": "Title: Constructive Learning of Recurrent Neural Networks: Limitations of Recurrent Casade Correlation and a Simple Solution \nAbstract: It is often difficult to predict the optimal neural network size for a particular application. Constructive or destructive methods that add or subtract neurons, layers, connections, etc. might offer a solution to this problem. We prove that one method, Recurrent Cascade Correlation, due to its topology, has fundamental limitations in representation and thus in its learning capabilities. It cannot represent with monotone (i.e. sigmoid) and hard-threshold activation functions certain finite state automata. We give a \"preliminary\" approach on how to get around these limitations by devising a simple constructive training method that adds neurons during training while still preserving the powerful fully-recurrent structure. We illustrate this approach by simulations which learn many examples of regular grammars that the ",
"neighbors": [
427,
1179,
1600
],
"mask": "Train"
},
{
"node_id": 947,
"label": 0,
"text": "Title: An Optimal Weighting Criterion of Case Indexing for Both Numeric and Symbolic Attributes \nAbstract: Indexing of cases is an important topic for Memory-Based Reasoning(MBR). One key problem is how to assign weights to attributes of cases. Although several weighting methods have been proposed, some methods cannot handle numeric attributes directly, so it is necessary to discretize numeric values by classification. Furthermore, existing methods have no theoretical background, so little can be said about optimality. We propose a new weighting method based on a statistical technique called Quantification Method II. It can handle both numeric and symbolic attributes in the same framework. Generated attribute weights are optimal in the sense that they maximize the ratio of variance between classes to variance of all cases. Experiments on several benchmark tests show that in many cases, our method obtains higher accuracies than some other weighting methods. The results also indicate that it can distinguish relevant attributes from irrelevant ones, and can tolerate noisy data. ",
"neighbors": [
1328
],
"mask": "Validation"
},
{
"node_id": 948,
"label": 2,
"text": "Title: An Optimal Weighting Criterion of Case Indexing for Both Numeric and Symbolic Attributes \nAbstract: A General Result on the Stabilization of Linear Systems Using Bounded Controls 1 ABSTRACT We present two constructions of controllers that globally stabilize linear systems subject to control saturation. We allow essentially arbitrary saturation functions. The only conditions imposed on the system are the obvious necessary ones, namely that no eigenvalues of the uncontrolled system have positive real part and that the standard stabilizability rank condition hold. One of the constructions is in terms of a \"neural-network type\" one-hidden layer architecture, while the other one is in terms of cascades of linear maps and saturations. ",
"neighbors": [
1281,
1282,
1446
],
"mask": "Train"
},
{
"node_id": 949,
"label": 2,
"text": "Title: Classifying Seismic Signals by Integrating Ensembles of Neural Networks \nAbstract: This paper proposes a classification scheme based on integration of multiple Ensembles of ANNs. It is demonstrated on a classification problem, in which seismic signals of Natural Earthquakes must be distinguished from seismic signals of Artificial Explosions. A Redundant Classification Environment consists of several Ensembles of Neural Networks is created and trained on Bootstrap Sample Sets, using various data representations and architectures. The ANNs within the Ensembles are aggregated (as in Bagging) while the Ensembles are integrated non-linearly, in a signal adaptive manner, using a posterior confidence measure based on the agreement (variance) within the Ensembles. The proposed Integrated Classification Machine achieved 92.1% correct classifications on the seismic test data. Cross Validation evaluations and comparisons indicate that such integration of a collection of ANN's Ensembles is a robust way for handling high dimensional problems with a complex non-stationary signal space as in the current Seismic Classification problem. ",
"neighbors": [
74,
1512,
1608
],
"mask": "Train"
},
{
"node_id": 950,
"label": 3,
"text": "Title: Model Selection for Generalized Linear Models via GLIB, with Application to Epidemiology 1 \nAbstract: 1 This is the first draft of a chapter for Bayesian Biostatistics, edited by Donald A. Berry and Darlene K. Strangl. Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, GN-22, University of Washington, Seattle, WA 98195, USA. Sylvia Richardson is Directeur de Recherche, INSERM Unite 170, 16 avenue Paul Vaillant Couturier, 94807 Villejuif CEDEX, France. Raftery's research was supported by ONR contract no. N-00014-91-J-1074, by the Ministere de la Recherche et de l'Espace, Paris, by the Universite de Paris VI, and by INRIA, Rocquencourt, France. Raftery thanks the latter two institutions, Paul Deheuvels and Gilles Celeux for hearty hospitality during his Paris sabbatical in which part of this chapter was written. The authors are grateful to Christine Montfort for excellent research assistance and to Mariette Gerber, Michel Chavance and David Madigan for helpful discussions. ",
"neighbors": [
84,
1240,
1241
],
"mask": "Test"
},
{
"node_id": 951,
"label": 0,
"text": "Title: Using Case-Based Reasoning to Acquire User Scheduling Preferences that Change over Time \nAbstract: Production/Manufacturing scheduling typically involves the acquisition of user optimization preferences. The ill-structuredness of both the problem space and the desired objectives make practical scheduling problems difficult to formalize and costly to solve, especially when problem configurations and user optimization preferences change over time. This paper advocates an incremental revision framework for improving schedule quality and incorporating user dynamically changing preferences through Case-Based Reasoning. Our implemented system, called CABINS, records situation-dependent tradeoffs and consequences that result from schedule revision to guide schedule improvement. The preliminary experimental results show that CABINS is able to effectively capture both user static and dynamic preferences which are not known to the system and only exist implicitly in a extensional manner in the case base. ",
"neighbors": [
1401,
1554,
2605
],
"mask": "Train"
},
{
"node_id": 952,
"label": 3,
"text": "Title: Some Varieties of Qualitative Probability \nAbstract: ",
"neighbors": [
1064,
1660
],
"mask": "Test"
},
{
"node_id": 953,
"label": 1,
"text": "Title: Behavior Hierarchy for Autonomous Mobile Robots: Fuzzy-behavior modulation and evolution \nAbstract: Realization of autonomous behavior in mobile robots, using fuzzy logic control, requires formulation of rules which are collectively responsible for necessary levels of intelligence. Such a collection of rules can be conveniently decomposed and efficiently implemented as a hierarchy of fuzzy-behaviors. This article describes how this can be done using a behavior-based architecture. A behavior hierarchy and mechanisms of control decision-making are described. In addition, an approach to behavior coordination is described with emphasis on evolution of fuzzy coordination rules using the genetic programming (GP) paradigm. Both conventional GP and steady-state GP are applied to evolve a fuzzy-behavior for sensor-based goal-seeking. The usefulness of the behavior hierarchy, and partial design by GP, is evident in performance results of simulated autonomous navigation. ",
"neighbors": [
972
],
"mask": "Train"
},
{
"node_id": 954,
"label": 3,
"text": "Title: Unsupervised learning of distributions on binary vectors using two layer networks \nAbstract: We present a distribution model for binary vectors, called the influence combination model and show how this model can be used as the basis for unsupervised learning algorithms for feature selection. The model is closely related to the Harmonium model defined by Smolensky [RM86][Ch.6]. In the first part of the paper we analyze properties of this distribution representation scheme. We show that arbitrary distributions of binary vectors can be approximated by the combination model. We show how the weight vectors in the model can be interpreted as high order correlation patterns among the input bits. We compare the combination model with the mixture model and with principle component analysis. In the second part of the paper we present two algorithms for learning the combination model from examples. The first algorithm is based on gradient ascent. Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine. The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of projection pursuit density estimation. In the third part of the paper we give experimental results for these learning methods on synthetic data and on natural data of handwritten digit images. ",
"neighbors": [
427,
450,
969,
1461,
1591
],
"mask": "Test"
},
{
"node_id": 955,
"label": 6,
"text": "Title: Separating Formal Bounds from Practical Performance in Learning Systems \nAbstract: We present a distribution model for binary vectors, called the influence combination model and show how this model can be used as the basis for unsupervised learning algorithms for feature selection. The model is closely related to the Harmonium model defined by Smolensky [RM86][Ch.6]. In the first part of the paper we analyze properties of this distribution representation scheme. We show that arbitrary distributions of binary vectors can be approximated by the combination model. We show how the weight vectors in the model can be interpreted as high order correlation patterns among the input bits. We compare the combination model with the mixture model and with principle component analysis. In the second part of the paper we present two algorithms for learning the combination model from examples. The first algorithm is based on gradient ascent. Here we give a closed form for this gradient that is significantly easier to compute than the corresponding gradient for the general Boltzmann machine. The second learning algorithm is a greedy method that creates the hidden units and computes their weights one at a time. This method is a variant of projection pursuit density estimation. In the third part of the paper we give experimental results for these learning methods on synthetic data and on natural data of handwritten digit images. ",
"neighbors": [
109,
560,
1280
],
"mask": "Train"
},
{
"node_id": 956,
"label": 1,
"text": "Title: Modeling Distributed Search via Social Insects \nAbstract: Complex group behavior arises in social insects colonies as the integration of the actions of simple and redundant individual insects [Adler and Gordon, 1992, Oster and Wilson, 1978]. Furthermore, the colony can act as an information center to expedite foraging [Brown, 1989]. We apply these lessons from natural systems to model collective action and memory in a computational agent society. Collective action can expedite search in combinatorial optimization problems [Dorigo et al., 1996]. Collective memory can improve learning in multi-agent systems [Garland and Alterman, 1996]. Our collective adaptation integrates the simplicity of collective action with the pattern detection of collective memory to significantly improve both the gathering and processing of knowledge. As a test of the role of the society as an information center, we examine the ability of the society to distribute task allocation without any omnipotent centralized control. ",
"neighbors": [
995,
1178,
1231,
2598
],
"mask": "Test"
},
{
"node_id": 957,
"label": 6,
"text": "Title: ANNEALED THEORIES OF LEARNING \nAbstract: We study annealed theories of learning boolean functions using a concept class of finite cardinality. The naive annealed theory can be used to derive a universal learning curve bound for zero temperature learning, similar to the inverse square root bound from the Vapnik-Chervonenkis theory. Tighter, nonuniversal learning curve bounds are also derived. A more refined annealed theory leads to still tighter bounds, which in some cases are very similar to results previously obtained using one-step replica symmetry breaking. ",
"neighbors": [
967
],
"mask": "Train"
},
{
"node_id": 958,
"label": 1,
"text": "Title: The Evolution of Memory and Mental Models Using Genetic Programming build internal representations of their\nAbstract: This paper applies genetic programming their successive actions. The results show to the evolution of intelligent agents that",
"neighbors": [
1409
],
"mask": "Train"
},
{
"node_id": 959,
"label": 1,
"text": "Title: Numerical techniques for efficient sonar bearing and range searching in the near field using genetic algorithms \nAbstract: This article describes a numerical method that may be used to efficiently locate and track underwater sonar targets in the near-field, with both bearing and range estimation, for the case of very large passive arrays. The approach used has no requirement for a priori knowledge about the source and uses only limited information about the receiver array shape. The role of sensor position uncertainty and the consequence of targets always being in the near-field are analysed and the problems associated with the manipulation of large matrices inherent in conventional eigenvalue type algorithms noted. A simpler numerical approach is then presented which reduces the problem to that of search optimization. When using this method the location of a target corresponds to finding the position of the maximum weighted sum of the output from all sensors. Since this search procedure can be dealt with using modern stochastic optimization methods, such as the genetic algorithm, the operational requirement that an acceptable accuracy be achieved in real time can usually be met. The array studied here consists of 225 elements positioned along a flexible cable towed behind a ship with 3.4m between sensors, giving an effective aperture of 761.6m. For such a long array, the far field assumption used in most beam-forming algorithms is no longer appropriate. The waves emitted by the targets then have to be considered as curved rather than plane. It is shown that, for simulated data, if no significant noise ",
"neighbors": [
793,
1130
],
"mask": "Validation"
},
{
"node_id": 960,
"label": 5,
"text": "Title: Proceedings of the First International Workshop on Intelligent Adaptive Systems (IAS-95) Constructive Induction-based Learning Agents:\nAbstract: This paper introduces a new type of intelligent agent called a constructive induction-based learning agent (CILA). This agent differs from other adaptive agents because it has the ability to not only learn how to assist a user in some task, but also to incrementally adapt its knowledge representation space to better fit the given learning task. The agents ability to autonomously make problem-oriented modifications to the originally given representation space is due to its constructive induction (CI) learning method. Selective induction (SI) learning methods, and agents based on these methods, rely on a good representation space. A good representation space has no misclassification noise, inter-correlated attributes or irrelevant attributes. Our proposed CILA has methods for overcoming all of these problems. In agent domains with poor representations, the CI-based learning agent will learn more accurate rules and be more useful than an SI-based learning agent. This paper gives an architecture for a CI-based learning agent and gives an empirical comparison of a CI and SI for a set of six abstract domains involving DNF-type (disjunctive normal form) descriptions. ",
"neighbors": [
378,
1292,
1301
],
"mask": "Train"
},
{
"node_id": 961,
"label": 1,
"text": "Title: CFS-C: A Package of Domain Independent Subroutines for Implementing Classifier Systems in Arbitrary, User-Defined Environments. \nAbstract: This paper introduces a new type of intelligent agent called a constructive induction-based learning agent (CILA). This agent differs from other adaptive agents because it has the ability to not only learn how to assist a user in some task, but also to incrementally adapt its knowledge representation space to better fit the given learning task. The agents ability to autonomously make problem-oriented modifications to the originally given representation space is due to its constructive induction (CI) learning method. Selective induction (SI) learning methods, and agents based on these methods, rely on a good representation space. A good representation space has no misclassification noise, inter-correlated attributes or irrelevant attributes. Our proposed CILA has methods for overcoming all of these problems. In agent domains with poor representations, the CI-based learning agent will learn more accurate rules and be more useful than an SI-based learning agent. This paper gives an architecture for a CI-based learning agent and gives an empirical comparison of a CI and SI for a set of six abstract domains involving DNF-type (disjunctive normal form) descriptions. ",
"neighbors": [
163,
523,
1515
],
"mask": "Train"
},
{
"node_id": 962,
"label": 2,
"text": "Title: Using Many-Particle Decomposition to get a Parallel Self-Organising Map \nAbstract: We propose a method for decreasing the computational complexity of self-organising maps. The method uses a partitioning of the neurons into disjoint clusters. Teaching of the neurons occurs on a cluster-basis instead of on a neuron-basis. For teaching an N-neuron network with N 0 samples, the computational complexity decreases from O(N 0 N) to O(N 0 log N). Furthermore, we introduce a measure for the amount of order in a self-organising map, and show that the introduced algorithm behaves as well as the original algorithm.",
"neighbors": [
745,
829
],
"mask": "Train"
},
{
"node_id": 963,
"label": 5,
"text": "Title: Cooperation of Data-driven and Model-based Induction Methods for Relational Learning \nAbstract: Inductive learning in relational domains has been shown to be intractable in general. Many approaches to this task have been suggested nevertheless; all in some way restrict the hypothesis space searched. They can be roughly divided into two groups: data-driven, where the restriction is encoded into the algorithm, and model-based, where the restrictions are made more or less explicit with some form of declarative bias. This paper describes Incy, an inductive learner that seeks to combine aspects of both approaches. Incy is initially data-driven, using examples and background knowledge to put forth and specialize hypotheses based on the \"connectivity\" of the data at hand. It is model-driven in that hypotheses are abstracted into rule models, which are used both for control decisions in the data-driven phase and for model-guided induction. Key Words: Inductive learning in relational domains, cooperation of data-driven and model-guided methods, implicit and declarative bias. ",
"neighbors": [
344,
1519
],
"mask": "Train"
},
{
"node_id": 964,
"label": 1,
"text": "Title: Simulation-Assisted Learning by Competition: Effects of Noise Differences Between Training Model and Target Environment \nAbstract: The problem of learning decision rules for sequential tasks is addressed, focusing on the problem of learning tactical plans from a simple flight simulator where a plane must avoid a missile. The learning method relies on the notion of competition and employs genetic algorithms to search the space of decision policies. Experiments are presented that address issues arising from differences between the simulation model on which learning occurs and the target environment on which the decision rules are ultimately tested. Specifically, either the model or the target environment may contain noise. These experiments examine the effect of learning tactical plans without noise and then testing the plans in a noisy environment, and the effect of learning plans in a noisy simulator and then testing the plans in a noise-free environment. Empirical results show that, while best result are obtained when the training model closely matches the target environment, using a training environment that is more noisy than the target environment is better than using using a training environment that has less noise than the target environment. ",
"neighbors": [
902,
910,
965,
966,
1140,
1432,
1481,
1673
],
"mask": "Train"
},
{
"node_id": 965,
"label": 1,
"text": "Title: Improving Tactical Plans with Genetic Algorithms \nAbstract: ",
"neighbors": [
177,
902,
964,
966,
981,
1060,
1140,
1253,
1432,
1481,
1590,
1673
],
"mask": "Train"
},
{
"node_id": 966,
"label": 1,
"text": "Title: Using a Genetic Algorithm to Learn Strategies for Collision Avoidance and Local Navigation \nAbstract: Navigation through obstacles such as mine fields is an important capability for autonomous underwater vehicles. One way to produce robust behavior is to perform projective planning. However, real-time performance is a critical requirement in navigation. What is needed for a truly autonomous vehicle are robust reactive rules that perform well in a wide variety of situations, and that also achieve real-time performance. In this work, SAMUEL, a learning system based on genetic algorithms, is used to learn high-performance reactive strategies for navigation and collision avoidance. ",
"neighbors": [
163,
902,
910,
964,
965,
1131,
1140,
1481,
1673
],
"mask": "Train"
},
{
"node_id": 967,
"label": 6,
"text": "Title: Rigorous Learning Curve Bounds from Statistical Mechanics \nAbstract: In this paper we introduce and investigate a mathematically rigorous theory of learning curves that is based on ideas from statistical mechanics. The advantage of our theory over the well-established Vapnik-Chervonenkis theory is that our bounds can be considerably tighter in many cases, and are also more reflective of the true behavior (functional form) of learning curves. This behavior can often exhibit dramatic properties such as phase transitions, as well as power law asymptotics not explained by the VC theory. The disadvantages of our theory are that its application requires knowledge of the input distribution, and it is limited so far to finite cardinality function classes. We illustrate our results with many concrete examples of learning curve bounds derived from our theory. ",
"neighbors": [
56,
57,
109,
306,
322,
689,
847,
848,
957,
1032,
1394,
1400,
1561
],
"mask": "Test"
},
{
"node_id": 968,
"label": 2,
"text": "Title: Using Prior Knowledge in an NNPDA to Learn Context-Free Languages \nAbstract: Although considerable interest has been shown in language inference and automata induction using recurrent neural networks, success of these models has mostly been limited to regular languages. We have previously demonstrated that Neural Network Pushdown Automaton (NNPDA) model is capable of learning deterministic context-free languages (e.g., a n b n and parenthesis languages) from examples. However, the learning task is computationally intensive. In this paper we discuss some ways in which a priori knowledge about the task and data could be used for efficient learning. We also observe that such knowledge is often an experimental prerequisite for learning nontrivial languages (eg. a n b n cb m a m ).",
"neighbors": [
1285
],
"mask": "Validation"
},
{
"node_id": 969,
"label": 3,
"text": "Title: Learning Stochastic Feedforward Networks \nAbstract: Connectionist learning procedures are presented for \"sigmoid\" and \"noisy-OR\" varieties of stochastic feedforward network. These networks are in the same class as the \"belief networks\" used in expert systems. They represent a probability distribution over a set of visible variables using hidden variables to express correlations. Conditional probability distributions can be exhibited by stochastic simulation for use in tasks such as classification. Learning from empirical data is done via a gradient-ascent method analogous to that used in Boltzmann machines, but due to the feedforward nature of the connections, the negative phase of Boltzmann machine learning is unnecessary. Experimental results show that, as a result, learning in a sigmoid feedforward network can be faster than in a Boltzmann machine. These networks have other advantages over Boltzmann machines in pattern classification and decision making applications, and provide a link between work on connectionist learning and work on the representation of expert knowledge. ",
"neighbors": [
954
],
"mask": "Validation"
},
{
"node_id": 970,
"label": 1,
"text": "Title: Generality versus Size in Genetic Programming \nAbstract: Genetic Programming (GP) uses variable size representations as programs. Size becomes an important and interesting emergent property of the structures evolved by GP. The size of programs can be both a controlling and a controlled factor in GP search. Size influences the efficiency of the search process and is related to the generality of solutions. This paper analyzes the size and generality issues in standard GP and GP using subroutines and addresses the question whether such an analysis can help control the search process. We relate the size, generalization and modularity issues for programs evolved to control an agent in a dynamic and non-deterministic environment, as exemplified by the Pac-Man game.",
"neighbors": [
567,
974,
1378,
1396,
1533
],
"mask": "Train"
},
{
"node_id": 971,
"label": 3,
"text": "Title: Decision-Theoretic Foundations for Causal Reasoning \nAbstract: We present a definition of cause and effect in terms of decision-theoretic primitives and thereby provide a principled foundation for causal reasoning. Our definition departs from the traditional view of causation in that causal assertions may vary with the set of decisions available. We argue that this approach provides added clarity to the notion of cause. Also in this paper, we examine the encoding of causal relationships in directed acyclic graphs. We describe a special class of influence diagrams, those in canonical form, and show its relationship to Pearl's representation of cause and effect. Finally, we show how canonical form facilitates counterfactual reasoning.",
"neighbors": [
1326,
1527,
1602,
2166
],
"mask": "Train"
},
{
"node_id": 972,
"label": 1,
"text": "Title: On Genetic Programming of Fuzzy Rule-Based Systems for Intelligent Control \nAbstract: Fuzzy logic and evolutionary computation have proven to be convenient tools for handling real-world uncertainty and designing control systems, respectively. An approach is presented that combines attributes of these paradigms for the purpose of developing intelligent control systems. The potential of the genetic programming paradigm (GP) for learning rules for use in fuzzy logic controllers (FLCs) is evaluated by focussing on the problem of discovering a controller for mobile robot path tracking. Performance results of incomplete rule-bases compare favorably to those of a complete FLC designed by the usual trial-and-error approach. A constrained syntactic representation supported by structure-preserving genetic operators is also introduced. ",
"neighbors": [
953
],
"mask": "Test"
},
{
"node_id": 973,
"label": 3,
"text": "Title: Interval Censored Survival Data: A Review of Recent Progress \nAbstract: We review estimation in interval censoring models, including nonparametric estimation of a distribution function and estimation of regression models. In the non-parametric setting, we describe computational procedures and asymptotic properties of the nonparametric maximum likelihood estimators. In the regression setting, we focus on the proportional hazards, the proportional odds and the accelerated failure time semiparametric regression models. Particular emphasis is given to calculation of the Fisher information for the regression parameters. We also discuss computation of the regression parameter estimators via profile likelihood or maximization of the semi-parametric likelihood, distributional results for the maximum likelihood estimators, and estimation of (asymptotic) variances. Some further problems and open questions are also reviewed. ",
"neighbors": [
802,
993
],
"mask": "Test"
},
{
"node_id": 974,
"label": 1,
"text": "Title: Balancing Accuracy and Parsimony in Genetic Programming 1 \nAbstract: Genetic programming is distinguished from other evolutionary algorithms in that it uses tree representations of variable size instead of linear strings of fixed length. The flexible representation scheme is very important because it allows the underlying structure of the data to be discovered automatically. One primary difficulty, however, is that the solutions may grow too big without any improvement of their generalization ability. In this paper we investigate the fundamental relationship between the performance and complexity of the evolved structures. The essence of the parsimony problem is demonstrated empirically by analyzing error landscapes of programs evolved for neural network synthesis. We consider genetic programming as a statistical inference problem and apply the Bayesian model-comparison framework to introduce a class of fitness functions with error and complexity terms. An adaptive learning method is then presented that automatically balances the model-complexity factor to evolve parsimonious programs without losing the diversity of the population needed for achieving the desired training accuracy. The effectiveness of this approach is empirically shown on the induction of sigma-pi neural networks for solving a real-world medical diagnosis problem as well as benchmark tasks. ",
"neighbors": [
844,
970
],
"mask": "Train"
},
{
"node_id": 975,
"label": 2,
"text": "Title: State Reconstruction for Determining Predictability in Driven Nonlinear Acoustical Systems \nAbstract: Genetic programming is distinguished from other evolutionary algorithms in that it uses tree representations of variable size instead of linear strings of fixed length. The flexible representation scheme is very important because it allows the underlying structure of the data to be discovered automatically. One primary difficulty, however, is that the solutions may grow too big without any improvement of their generalization ability. In this paper we investigate the fundamental relationship between the performance and complexity of the evolved structures. The essence of the parsimony problem is demonstrated empirically by analyzing error landscapes of programs evolved for neural network synthesis. We consider genetic programming as a statistical inference problem and apply the Bayesian model-comparison framework to introduce a class of fitness functions with error and complexity terms. An adaptive learning method is then presented that automatically balances the model-complexity factor to evolve parsimonious programs without losing the diversity of the population needed for achieving the desired training accuracy. The effectiveness of this approach is empirically shown on the induction of sigma-pi neural networks for solving a real-world medical diagnosis problem as well as benchmark tasks. ",
"neighbors": [
74,
76,
608,
668,
1079
],
"mask": "Test"
},
{
"node_id": 976,
"label": 3,
"text": "Title: Space-efficient inference in dynamic probabilistic networks \nAbstract: Dynamic probabilistic networks (DPNs) are a useful tool for modeling complex stochastic processes. The simplest inference task in DPNs is monitoring | that is, computing a posterior distribution for the state variables at each time step given all observations up to that time. Recursive, constant-space algorithms are well-known for monitoring in DPNs and other models. This paper is concerned with hindsight | that is, computing a posterior distribution given both past and future observations. Hindsight is an essential subtask of learning DPN models from data. Existing algorithms for hindsight in DPNs use O(SN ) space and time, where N is the total length of the observation sequence and S is the state space size for each time step. They are therefore impractical for hindsight in complex models with long observation sequences. This paper presents an O(S log N ) space, O(SN log N ) time hindsight algorithm. We demonstrates the effectiveness of the algorithm in two real-world DPN learning problems. We also discuss the possibility of an O(S)-space, O(SN )-time algorithm. ",
"neighbors": [
905,
1268,
1287,
1393
],
"mask": "Test"
},
{
"node_id": 977,
"label": 6,
"text": "Title: Pessimistic and Optimistic Induction \nAbstract: Learning methods vary in the optimism or pessimism with which they regard the informativeness of learned knowledge. Pessimism is implicit in hypothesis testing, where we wish to draw cautious conclusions from experimental evidence. However, this paper demonstrates that optimism in the utility of derived rules may be the preferred bias for learning systems themselves. We examine the continuum between naive pessimism and naive optimism in the context of a decision tree learner that prunes rules based on stringent (i.e., pessimistic) or weak (i.e., optimistic) tests of their significance. Our experimental results indicate that in most cases optimism is preferred, but particularly in cases of sparse training data and high noise. This work generalizes earlier findings by Fisher and Schlimmer (1988) and Schaffer (1992), and we discuss its relevance to unsupervised learning, small disjuncts, and other issues. ",
"neighbors": [
1234
],
"mask": "Train"
},
{
"node_id": 978,
"label": 2,
"text": "Title: Hierarchical Recurrent Neural Networks for Long-Term Dependencies \nAbstract: We have already shown that extracting long-term dependencies from sequential data is difficult, both for deterministic dynamical systems such as recurrent networks, and probabilistic models such as hidden Markov models (HMMs) or input/output hidden Markov models (IOHMMs). In practice, to avoid this problem, researchers have used domain specific a-priori knowledge to give meaning to the hidden or state variables representing past context. In this paper, we propose to use a more general type of a-priori knowledge, namely that the temporal dependencies are structured hierarchically. This implies that long-term dependencies are represented by variables with a long time scale. This principle is applied to a recurrent network which includes delays and multiple time scales. Experiments confirm the advantages of such structures. A similar approach is proposed for HMMs and IOHMMs. ",
"neighbors": [
664,
1593,
1825,
1954
],
"mask": "Train"
},
{
"node_id": 979,
"label": 2,
"text": "Title: FLAT MINIMA Neural Computation 9(1):1-42 (1997) \nAbstract: We present a new algorithm for finding low complexity neural networks with high generalization capability. The algorithm searches for a \"flat\" minimum of the error function. A flat minimum is a large connected region in weight-space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to \"simple\" networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a \"good\" weight prior instead we have a prior over input/output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second order derivatives, it has backprop's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms (1) conventional backprop, (2) weight decay, (3) \"optimal brain surgeon\" / \"optimal brain damage\". We also provide pseudo code of the algorithm (omitted from the NC-version). ",
"neighbors": [
68,
157,
766,
879,
1825,
1845
],
"mask": "Train"
},
{
"node_id": 980,
"label": 2,
"text": "Title: Some Topics in Neural Networks and Control \nAbstract: We present a new algorithm for finding low complexity neural networks with high generalization capability. The algorithm searches for a \"flat\" minimum of the error function. A flat minimum is a large connected region in weight-space where the error remains approximately constant. An MDL-based, Bayesian argument suggests that flat minima correspond to \"simple\" networks and low expected overfitting. The argument is based on a Gibbs algorithm variant and a novel way of splitting generalization error into underfitting and overfitting error. Unlike many previous approaches, ours does not require Gaussian assumptions and does not depend on a \"good\" weight prior instead we have a prior over input/output functions, thus taking into account net architecture and training set. Although our algorithm requires the computation of second order derivatives, it has backprop's order of complexity. Automatically, it effectively prunes units, weights, and input lines. Various experiments with feedforward and recurrent nets are described. In an application to stock market prediction, flat minimum search outperforms (1) conventional backprop, (2) weight decay, (3) \"optimal brain surgeon\" / \"optimal brain damage\". We also provide pseudo code of the algorithm (omitted from the NC-version). ",
"neighbors": [
1488
],
"mask": "Train"
},
{
"node_id": 981,
"label": 0,
"text": "Title: AN ENHANCER FOR REACTIVE PLANS \nAbstract: This paper describes our method for improving the comprehensibility, accuracy, and generality of reactive plans. A reactive plan is a set of reactive rules. Our method involves two phases: (1) formulate explanations of execution traces, and then (2) generate new reactive rules from the explanations. Since the explanation phase has been previously described, the primary focus of this paper is the rule generation phase. This latter phase consists of taking a subset of the explanations and using these explanations to generate a set of new reactive rules to add to the original set. The particular subset of the explanations that is chosen yields rules that provide new domain knowledge for handling knowledge gaps in the original rule set. The original rule set, in a complimentary manner, provides expertise to fill the gaps where the domain knowledge provided by the new rules is incomplete.",
"neighbors": [
902,
910,
965,
1140
],
"mask": "Train"
},
{
"node_id": 982,
"label": 1,
"text": "Title: Evolutionary Neural Networks for Value Ordering in Constraint Satisfaction Problems \nAbstract: Technical Report AI94-218 May 1994 Abstract A new method for developing good value-ordering strategies in constraint satisfaction search is presented. Using an evolutionary technique called SANE, in which individual neurons evolve to cooperate and form a neural network, problem-specific knowledge can be discovered that results in better value-ordering decisions than those based on problem-general heuristics. A neural network was evolved in a chronological backtrack search to decide the ordering of cars in a resource-limited assembly line. The network required 1/30 of the backtracks of random ordering and 1/3 of the backtracks of the maximization of future options heuristic. The SANE approach should extend well to other domains where heuristic information is either difficult to discover or problem-specific. ",
"neighbors": [
163,
247,
910
],
"mask": "Train"
},
{
"node_id": 983,
"label": 0,
"text": "Title: Refining Conversational Case Libraries \nAbstract: Conversational case-based reasoning (CBR) shells (e.g., Inference's CBR Express) are commercially successful tools for supporting the development of help desk and related applications. In contrast to rule-based expert systems, they capture knowledge as cases rather than more problematic rules, and they can be incrementally extended. However, rather than eliminate the knowledge engineering bottleneck, they refocus it on case engineering, the task of carefully authoring cases according to library design guidelines to ensure good performance. Designing complex libraries according to these guidelines is difficult; software is needed to assist users with case authoring. We describe an approach for revising case libraries according to design guidelines, its implementation in Clire, and empirical results showing that, under some conditions, this approach can improve conversational CBR performance.",
"neighbors": [
256,
887,
928,
1002,
1302,
1531,
1636,
1735
],
"mask": "Train"
},
{
"node_id": 984,
"label": 0,
"text": "Title: A Computational Account of Movement Learning and its Impact on the Speed-Accuracy Tradeoff \nAbstract: We present a computational model of movement skill learning. The types of skills addressed are a class of trajectory following movements involving multiple accelerations, decelerations and changes in direction and lasting more than a few seconds. These skills are acquired through observation and improved through practice. We also review the speed-accuracy tradeoff|one of the most robust phenomena in human motor behavior. We present two speed-accuracy tradeoff experiments where the model's performance fits human behavior quite well. ",
"neighbors": [
1048
],
"mask": "Train"
},
{
"node_id": 985,
"label": 3,
"text": "Title: Combining Symbolic and Connectionist Learning Methods to Refine Certainty-Factor Rule-Bases \nAbstract: We present a computational model of movement skill learning. The types of skills addressed are a class of trajectory following movements involving multiple accelerations, decelerations and changes in direction and lasting more than a few seconds. These skills are acquired through observation and improved through practice. We also review the speed-accuracy tradeoff|one of the most robust phenomena in human motor behavior. We present two speed-accuracy tradeoff experiments where the model's performance fits human behavior quite well. ",
"neighbors": [
159,
1102,
2038,
2066
],
"mask": "Train"
},
{
"node_id": 986,
"label": 0,
"text": "Title: Improving accuracy by combining rule-based and case-based reasoning \nAbstract: An architecture is presented for combining rule-based and case-based reasoning. The architecture is intended for domains that are understood reasonably well, but still imperfectly. It uses a set of rules, which are taken to be only approximately correct, to obtain a preliminary answer for a given problem; it then draws analogies from cases to handle exceptions to the rules. Having rules together with cases not only increases the architecture's domain coverage, it also allows innovative ways of doing case-based reasoning: the same rules that are used for rule-based reasoning are also used by the case-based component to do case indexing and case adaptation. The architecture was applied to the task of name pronunciation, and, with minimal knowledge engineering, was found to perform almost at the level of the best commercial systems. Moreover, its accuracy was found to exceed what it could have achieved with rules or cases alone, thus demonstrating the accuracy improvement afforded by combining rule-based and case-based reasoning. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories of Cambridge, Massachusetts; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories. All rights reserved. ",
"neighbors": [
136,
862,
1642,
1644,
2484,
2605,
2614,
2616
],
"mask": "Train"
},
{
"node_id": 987,
"label": 3,
"text": "Title: Stacked Density Estimation \nAbstract: Technical Report No. 97-36, Information and Computer Science Department, University of California, Irvine ",
"neighbors": [
74,
1240,
1608
],
"mask": "Train"
},
{
"node_id": 988,
"label": 4,
"text": "Title: A COMPARISON OF Q-LEARNING AND CLASSIFIER SYSTEMS \nAbstract: Reinforcement Learning is a class of problems in which an autonomous agent acting in a given environment improves its behavior by progressively maximizing a function calculated just on the basis of a succession of scalar responses received from the environment. Q-learning and classifier systems (CS) are two methods among the most used to solve reinforcement learning problems. Notwithstanding their popularity and their shared goal, they have been in the past often considered as two different models. In this paper we first show that the classifier system, when restricted to a sharp simplification called discounted max very simple classifier system (D MAX - VSCS), boils down to tabular Q-learning. It follows that D MAX -VSCS converges to the optimal policy as proved by Watkins & Dayan (1992), and that it can draw profit from the results of experimental and theoretical works dedicated to improve Q-learning and to facilitate its use in concrete applications. In the second part of the paper, we show that three of the restrictions we need to impose to the CS for deriving its equivalence with Q-learning, that is, no internal states, no don't care symbols, and no structural changes, turn out so essential as to be recently rediscovered and reprogrammed by Q-learning adepts. Eventually, we sketch further similarities among ongoing work within both research contexts. The main contribution of the paper is therefore to make explicit the strong similarities existing between Q-learning and classifier systems, and to show that experience gained with research within one domain can be useful to direct future research in the other one.",
"neighbors": [
1515
],
"mask": "Train"
},
{
"node_id": 989,
"label": 2,
"text": "Title: Finding Compact and Sparse Distributed Representations of Visual Images \nAbstract: Some recent work has investigated the dichotomy between compact coding using dimensionality reduction and sparse distributed coding in the context of understanding biological information processing. We introduce an artificial neural network which self organises on the basis of simple Hebbian learning and negative feedback of activation and show that it is capable of both forming compact codings of data distributions and also of identifying filters most sensitive to sparse distributed codes. The network is extremely simple and its biological relevance is investigated via its response to a set of images which are typical of everyday life. However, an analysis of the network's identification of the filter for sparse coding reveals that this coding may not be globally optimal and that there exists an innate limiting factor which cannot be transcended. ",
"neighbors": [
1068,
1418
],
"mask": "Train"
},
{
"node_id": 990,
"label": 2,
"text": "Title: Shattering all sets of k points in \"general position\" requires (k 1)=2 parameters \nAbstract: For classes of concepts defined by certain classes of analytic functions depending on n parameters, there are nonempty open sets of samples of length 2n + 2 which cannot be shattered. A slighly weaker result is also proved for piecewise-analytic functions. The special case of neural networks is discussed.",
"neighbors": [
58,
805,
1774
],
"mask": "Test"
},
{
"node_id": 991,
"label": 0,
"text": "Title: Systematic Evaluation of Design Decisions in CBR Systems \nAbstract: Two important goals in the evaluation of an AI theory or model are to assess the merit of the design decisions in the performance of an implemented computer system and to analyze the impact in the performance when the system faces problem domains with different characteristics. This is particularly difficult in case-based reasoning systems because such systems are typically very complex, as are the tasks and domains in which they operate. We present a methodology for the evaluation of case-based reasoning systems through systematic empirical experimentation over a range of system configurations and environmental conditions, coupled with rigorous statistical analysis of the results of the experiments. This methodology enables us to understand the behavior of the system in terms of the theory and design of the computational model, to select the best system configuration for a given domain, and to predict how the system will behave in response to changing domain and problem characteristics. A case study of a mul-tistrategy case-based and reinforcement learning system which performs autonomous robotic navigation is presented as an example. ",
"neighbors": [
318,
858,
1084,
1368
],
"mask": "Train"
},
{
"node_id": 992,
"label": 0,
"text": "Title: Adapting Abstract Knowledge \nAbstract: For a case-based reasoner to use its knowledge flexibly, it must be equipped with a powerful case adapter. A case-based reasoner can only cope with variation in the form of the problems it is given to the extent that its cases in memory can be efficiently adapted to fit a wide range of new situations. In this paper, we address the task of adapting abstract knowledge about planning to fit specific planning situations. First we show that adapting abstract cases requires reconciling incommensurate representations of planning situations. Next, we describe a representation system, a memory organization, and an adaptation process tailored to this requirement. Our approach is implemented in brainstormer, a planner that takes abstract advice. ",
"neighbors": [
1354
],
"mask": "Train"
},
{
"node_id": 993,
"label": 3,
"text": "Title: Efficient Estimation for the Cox Model with Interval Censoring \nAbstract: The maximum likelihood estimator (MLE) for the proportional hazards model with current status data is studied. It is shown that the MLE for the regression parameter is asymptotically normal with p n-convergence rate and achieves the information bound, even though the MLE for the baseline cumulative hazard function only converges at n 1=3 rate. Estimation of the asymptotic variance matrix for the MLE of the regression parameter is also considered. To prove our main results, we also establish a general theorem showing that the MLE of the finite dimensional parameter in a class of semiparametric models is asymptotically efficient even though the MLE of the infinite dimensional parameter converges at a rate slower than The results are illustrated by applying them to a data set from a tumoriginicity study. 1. Introduction In many survival analysis problems, we are interested in the p",
"neighbors": [
973
],
"mask": "Validation"
},
{
"node_id": 994,
"label": 0,
"text": "Title: Role of Stories 1 \nAbstract: PO Box 600 Wellington New Zealand Tel: +64 4 471 5328 Fax: +64 4 495 5232 Internet: Tech.Reports@comp.vuw.ac.nz Technical Report CS-TR-92/4 October 1992 Abstract People often give advice by telling stories. Stories both recommend a course of action and exemplify general conditions in which that recommendation is appropriate. A computational model of advice taking using stories must address two related problems: determining the story's recommendations and appropriateness conditions, and showing that these obtain in the new situation. In this paper, we present an efficient solution to the second problem based on caching the results of the first. Our proposal has been implemented in brainstormer, a planner that takes abstract advice. ",
"neighbors": [
1354
],
"mask": "Train"
},
{
"node_id": 995,
"label": 1,
"text": "Title: Evolving a Team \nAbstract: PO Box 600 Wellington New Zealand Tel: +64 4 471 5328 Fax: +64 4 495 5232 Internet: Tech.Reports@comp.vuw.ac.nz Technical Report CS-TR-92/4 October 1992 Abstract People often give advice by telling stories. Stories both recommend a course of action and exemplify general conditions in which that recommendation is appropriate. A computational model of advice taking using stories must address two related problems: determining the story's recommendations and appropriateness conditions, and showing that these obtain in the new situation. In this paper, we present an efficient solution to the second problem based on caching the results of the first. Our proposal has been implemented in brainstormer, a planner that takes abstract advice. ",
"neighbors": [
415,
956,
1178,
1230,
1231,
1232,
1495,
1690,
1971,
1985,
2139,
2673
],
"mask": "Train"
},
{
"node_id": 996,
"label": 3,
"text": "Title: Reparameterisation Issues in Mixture Modelling and their bearing on MCMC algorithms \nAbstract: There is increasing need for efficient estimation of mixture distributions, especially following the explosion in the use of these as modelling tools in many applied fields. We propose in this paper a Bayesian noninformative approach for the estimation of normal mixtures which relies on a reparameterisation of the secondary components of the mixture in terms of divergence from the main component. As well as providing an intuitively appealing representation at the modelling stage, this reparameterisation has important bearing on both the prior distribution and the performance of MCMC algorithms. We compare two possible reparameterisations extending Mengersen and Robert (1996) and show that the reparameterisation which does not link the secondary components together is associated with poor convergence properties of MCMC algorithms. ",
"neighbors": [
161,
1015
],
"mask": "Train"
},
{
"node_id": 997,
"label": 5,
"text": "Title: A Common LISP Hypermedia Server \nAbstract: A World-Wide Web (WWW) server was implemented in Common LISP in order to facilitate exploratory programming in the global hypermedia domain and to provide access to complex research programs, particularly artificial intelligence systems. The server was initially used to provide interfaces for document retrieval and for email servers. More advanced applications include interfaces to systems for inductive rule learning and natural-language question answering. Continuing research seeks to more fully generalize automatic form-processing techniques developed for email servers to operate seamlessly over the Web. The conclusions argue that presentation-based interfaces and more sophisticated form processing should be moved into the clients in order to reduce the load on servers and provide more advanced interaction models for users. ",
"neighbors": [
1271
],
"mask": "Train"
},
{
"node_id": 998,
"label": 3,
"text": "Title: Accounting for Model Uncertainty in Survival Analysis Improves Predictive Performance \nAbstract: Survival analysis is concerned with finding models to predict the survival of patients or to assess the efficacy of a clinical treatment. A key part of the model-building process is the selection of the predictor variables. It is standard to use a stepwise procedure guided by a series of significance tests to select a single model, and then to make inference conditionally on the selected model. However, this ignores model uncertainty, which can be substantial. We review the standard Bayesian model averaging solution to this problem and extend it to survival analysis, introducing partial Bayes factors to do so for the Cox proportional hazards model. In two examples, taking account of model uncertainty enhances predictive performance, to an extent that could be clinically useful.",
"neighbors": [
84,
347,
1240,
1241
],
"mask": "Train"
},
{
"node_id": 999,
"label": 3,
"text": "Title: The out-of-bootstrap method for model averaging and selection \nAbstract: We propose a bootstrap-based method for model averaging and selection that focuses on training points that are left out of individual bootstrap samples. This information can be used to estimate optimal weighting factors for combining estimates from different bootstrap samples, and also for finding the best subsets the linear model setting. These proposals provide alternatives to Bayesian approaches to model averaging and selection, requiring less computation and fewer subjective choices. ",
"neighbors": [
70,
347,
1240,
1463,
1512
],
"mask": "Validation"
},
{
"node_id": 1000,
"label": 6,
"text": "Title: PREDICTION GAMES AND ARCING ALGORITHMS \nAbstract: Technical Report 504 December 19, 1997 Statistics Department University of California Berkeley, CA. (4720 Abstract The theory behind the success of adaptive reweighting and combining algorithms (arcing) such as Adaboost (Freund and Schapire [1995].[1996]) and others in reducing generalization error has not been well understood. By formulating prediction, both classification and regression, as a game where one player makes a selection from instances in the training set and the other a convex linear combination of predictors from a finite set, existing arcing algorithms are shown to be algorithms for finding good game strategies. An optimal game strategy finds a combined predictor that minimizes the maximum of the error over the training set. A bound on the generalization error for the combined predictors in terms of their maximum error is proven that is sharper than bounds to date. Arcing algorithms are described that converge to the optimal strategy. Schapire et.al. [1997] offered an explanation of why Adaboost works in terms of its ability to reduce the margin. Comparing Adaboost to our optimal arcing algorithm shows that their explanation is not valid and that the answer lies elsewhere. In this situation the VC-type bounds are misleading. Some empirical results are given to explore the situation. ",
"neighbors": [
569,
931,
1185
],
"mask": "Train"
},
{
"node_id": 1001,
"label": 0,
"text": "Title: Is analogical problem solving always analogical? The case for imitation. Second draft Is analogical problem\nAbstract: HCRL Technical Report 97 By ",
"neighbors": [
1354
],
"mask": "Train"
},
{
"node_id": 1002,
"label": 0,
"text": "Title: A Model-Based Approach for Supporting Dialogue Inferencing in a Conversational Case-Based Reasoner \nAbstract: Conversational case-based reasoning (CCBR) is a form of interactive case-based reasoning where users input a partial problem description (in text). The CCBR system responds with a ranked solution display, which lists the solutions of stored cases whose problem descriptions best match the user's, and a ranked question display, which lists the unanswered questions in these cases. Users interact with these displays, either refining their problem description by answering selected questions, or selecting a solution to apply. CCBR systems should support dialogue inferencing; they should infer answers to questions that are implied by the problem description. Otherwise, questions will be listed that the user believes they have already answered. The standard approach to dialogue inferencing allows case library designers to insert rules that define implications between the problem description and unanswered questions. However, this approach imposes substantial knowledge engineering requirements. We introduce an alternative approach whereby an intelligent assistant guides the designer in defining a model of their case library, from which implication rules are derived. We detail this approach, its benefits, and explain how it can be supported through an integration with Parka-DB, a fast relational database system. We will evaluate our approach in the context of our CCBR system, named NaCoDAE. This paper appeared at the 1998 AAAI Spring Symposium on Multimodal Reasoning, and is NCARAI TR AIC-97-023. We introduce an integrated reasoning approach in which a model-based reasoning component performs an important inferencing role in a conversational case-based reasoning (CCBR) system named NaCoDAE (Breslow & Aha, 1997) (Figure 1). CCBR is a form of case-based reasoning where users enter text queries describing a problem and the system assists in eliciting refinements of it (Aha & Breslow, 1997). Cases have three components: ",
"neighbors": [
983,
1735
],
"mask": "Train"
},
{
"node_id": 1003,
"label": 6,
"text": "Title: Learning Conjunctions of Horn Clauses \nAbstract: ",
"neighbors": [
672,
786,
791,
1004,
1343,
1364,
1469,
1897,
2028,
2146,
2182,
2483
],
"mask": "Train"
},
{
"node_id": 1004,
"label": 6,
"text": "Title: Learning Read-Once Formulas with Queries \nAbstract: A read-once formula is a boolean formula in which each variable occurs at most once. Such formulas are also called -formulas or boolean trees. This paper treats the problem of exactly identifying an unknown read-once formula using specific kinds of queries. The main results are a polynomial time algorithm for exact identification of monotone read-once formulas using only membership queries, and a polynomial time algorithm for exact identification of general read-once formulas using equivalence and membership queries (a protocol based on the notion of a minimally adequate teacher [1]). Our results improve on Valiant's previous results for read-once formulas [26]. We also show that no polynomial time algorithm using only membership queries or only equivalence queries can exactly identify all read-once formulas.",
"neighbors": [
672,
786,
791,
1003,
1343,
1364,
1469,
2146,
2350,
2483
],
"mask": "Test"
},
{
"node_id": 1005,
"label": 2,
"text": "Title: Distributed Patterns as Hierarchical Structures \nAbstract: Recursive Auto-Associative Memory (RAAM) structures show promise as a general representation vehicle that uses distributed patterns. However training is often difficult, which explains, at least in part, why only relatively small networks have been studied. We show a technique for transforming any collection of hierarchical structures into a set of training patterns for a sequential RAAM which can be effectively trained using a simple (Elman-style) recurrent network. Tr aining produces a set of distributed patterns corresponding to the structures. ",
"neighbors": [
1313
],
"mask": "Train"
},
{
"node_id": 1006,
"label": 6,
"text": "Title: Learning Probabilistic Automata with Variable Memory Length \nAbstract: We propose and analyze a distribution learning algorithm for variable memory length Markov processes. These processes can be described by a subclass of probabilistic finite automata which we name Probabilistic Finite Suffix Automata. The learning algorithm is motivated by real applications in man-machine interaction such as handwriting and speech recognition. Conventionally used fixed memory Markov and hidden Markov models have either severe practical or theoretical drawbacks. Though general hardness results are known for learning distributions generated by sources with similar structure, we prove that our algorithm can indeed efficiently learn distributions generated by our more restricted sources. In Particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made small with high confidence in polynomial time and sample complexity. We demonstrate the applicability of our algorithm by learning the structure of natural English text and using our hy pothesis for the correction of corrupted text.",
"neighbors": [
242,
472,
574,
650,
876,
1025,
2040,
2360
],
"mask": "Test"
},
{
"node_id": 1007,
"label": 5,
"text": "Title: Applications of a logical discovery engine \nAbstract: The clausal discovery engine claudien is presented. claudien discovers regularities in data and is a representative of the inductive logic programming paradigm. As such, it represents data and regularities by means of first order clausal theories. Because the search space of clausal theories is larger than that of attribute value representation, claudien also accepts as input a declarative specification of the language bias, which determines the set of syntactically well-formed regularities. Whereas other papers on claudien focuss on the semantics or logical problem specification of claudien, on the discovery algorithm, or the PAC-learning aspects, this paper wants to illustrate the power of the resulting technique. In order to achieve this aim, we show how claudien can be used to learn 1) integrity constraints in databases, 2) functional dependencies and determinations, 3) properties of sequences, 4) mixed quantitative and qualitative laws, 5) reverse engineering, and 6) classification rules. ",
"neighbors": [
344,
837,
1919,
2217,
2426
],
"mask": "Test"
},
{
"node_id": 1008,
"label": 5,
"text": "Title: Induction of decision trees using RELIEFF \nAbstract: In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies between them. Greedy search prevents current inductive machine learning algorithms to detect significant dependencies between the attributes. Recently, Kira and Rendell developed the RELIEF algorithm for estimating the quality of attributes that is able to detect dependencies between attributes. We show strong relation between RELIEF's estimates and impurity functions, that are usually used for heuristic guidance of inductive learning algorithms. We propose to use RELIEFF, an extended version of RELIEF, instead of myopic impurity functions. We have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems. Results show the advantage of the presented approach to inductive learning and open a wide rang of possibilities for using RELIEFF. ",
"neighbors": [
1011,
1486,
1569
],
"mask": "Train"
},
{
"node_id": 1009,
"label": 1,
"text": "Title: Induction of decision trees using RELIEFF \nAbstract: An investigation into the dynamics of Genetic Programming applied to chaotic time series prediction is reported. An interesting characteristic of adaptive search techniques is their ability to perform well in many problem domains while failing in others. Because of Genetic Programming's flexible tree structure, any particular problem can be represented in myriad forms. These representations have variegated effects on search performance. Therefore, an aspect of fundamental engineering significance is to find a representation which, when acted upon by Genetic Programming operators, optimizes search performance. We discover, in the case of chaotic time series prediction, that the representation commonly used in this domain does not yield optimal solutions. Instead, we find that the population converges onto one \"accurately replicating\" tree before other trees can be explored. To correct for this premature convergence we make a simple modification to the crossover operator. In this paper we review previous work with GP time series prediction, pointing out an anomalous result related to overlearning, and report the improvement effected by our modified crossover operator. ",
"neighbors": [
934,
1079,
2175
],
"mask": "Train"
},
{
"node_id": 1010,
"label": 5,
"text": "Title: Linear Space Induction in First Order Logic with RELIEFF \nAbstract: Current ILP algorithms typically use variants and extensions of the greedy search. This prevents them to detect significant relationships between the training objects. Instead of myopic impurity functions, we propose the use of the heuristic based on RELIEF for guidance of ILP algorithms. At each step, in our ILP-R system, this heuristic is used to determine a beam of candidate literals. The beam is then used in an exhaustive search for a potentially good conjunction of literals. From the efficiency point of view we introduce interesting declarative bias which enables us to keep the growth of the training set, when introducing new variables, within linear bounds (linear with respect to the clause length). This bias prohibits cross-referencing of variables in variable dependency tree. The resulting system has been tested on various artificial problems. The advantages and deficiencies of our approach are discussed. ",
"neighbors": [
877,
1011,
1061,
1182,
1569,
1578,
1651
],
"mask": "Train"
},
{
"node_id": 1011,
"label": 5,
"text": "Title: Discretization of continuous attributes using ReliefF \nAbstract: Many existing learning algorithms expect the attributes to be discrete. Discretization of continuous attributes might be difficult task even for domain experts. We have tried the non-myopic heuristic measure ReliefF for discretization and compared it with well known dissimilarity measure and discretizations by experts. An extensive testing with several learning algorithms on six real world databases has shown that none of the discretizations has clear advantage over the others. ",
"neighbors": [
1008,
1010,
1569
],
"mask": "Test"
},
{
"node_id": 1012,
"label": 4,
"text": "Title: TDLeaf(): Combining Temporal Difference learning with game-tree search. \nAbstract: In this paper we present TDLeaf(), a variation on the TD() algorithm that enables it to be used in conjunction with minimax search. We present some experiments in both chess and backgammon which demonstrate its utility and provide comparisons with TD() and another less radical variant, TD-directed(). In particular, our chess program, KnightCap, used TDLeaf() to learn its evaluation function while playing on the Free Internet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating to a 2100 rating in just 308 games. We discuss some of the reasons for this success and the relationship between our results and Tesauro's results in backgammon. ",
"neighbors": [
295,
565,
882
],
"mask": "Train"
},
{
"node_id": 1013,
"label": 3,
"text": "Title: A SEQUENTIAL METROPOLIS-HASTINGS ALGORITHM \nAbstract: This paper deals with the asymptotic properties of the Metropolis-Hastings algorithm, when the distribution of interest is unknown, but can be approximated by a sequential estimator of its density. We prove that, under very simple conditions, the rate of convergence of the Metropolis-Hastings algorithm is the same as that of the sequential estimator when the latter is introduced as the reversible measure for the Metropolis-Hastings Kernel. This problem is a natural extension of previous a work on a new simulated annealing algorithm with a sequential estimator of the energy. ",
"neighbors": [
1156
],
"mask": "Train"
},
{
"node_id": 1014,
"label": 2,
"text": "Title: Viewpoint invariant face recognition using independent component analysis and attractor networks \nAbstract: We have explored two approaches to recognizing faces across changes in pose. First, we developed a representation of face images based on independent component analysis (ICA) and compared it to a principal component analysis (PCA) representation for face recognition. The ICA basis vectors for this data set were more spatially local than the PCA basis vectors and the ICA representation had greater invariance to changes in pose. Second, we present a model for the development of viewpoint invariant responses to faces from visual experience in a biological system. The temporal continuity of natural visual experience was incorporated into an attractor network model by Hebbian learning following a lowpass temporal filter on unit activities. When combined with the temporal filter, a basic Hebbian update rule became a generalization of Griniasty et al. (1993), which associates temporally proximal input patterns into basins of attraction. The system acquired rep resentations of faces that were largely independent of pose.",
"neighbors": [
576,
676,
1056,
1091
],
"mask": "Validation"
},
{
"node_id": 1015,
"label": 3,
"text": "Title: Bayesian curve fitting using multivariate normal mixtures \nAbstract: Problems of regression smoothing and curve fitting are addressed via predictive inference in a flexible class of mixture models. Multi-dimensional density estimation using Dirichlet mixture models provides the theoretical basis for semi-parametric regression methods in which fitted regression functions may be deduced as means of conditional predictive distributions. These Bayesian regression functions have features similar to generalised kernel regression estimates, but the formal analysis addresses problems of multivariate smoothing parameter estimation and the assessment of uncertainties about regression functions naturally. Computations are based on multi-dimensional versions of existing Markov chain simulation analysis of univariate Dirichlet mixture models. ",
"neighbors": [
852,
996,
1338
],
"mask": "Train"
},
{
"node_id": 1016,
"label": 1,
"text": "Title: An Analysis of the Interacting Roles of Population Size and Crossover in Genetic Algorithms \nAbstract: In this paper we present some theoretical and empirical results on the interacting roles of population size and crossover in genetic algorithms. We summarize recent theoretical results on the disruptive effect of two forms of multi-point crossover: n-point crossover and uniform crossover. We then show empirically that disruption analysis alone is not sufficient for selecting appropriate forms of crossover. However, by taking into account the interacting effects of population size and crossover, a general picture begins to emerge. The implications of these results on implementation issues and performance are discussed, and several directions for further research are suggested. ",
"neighbors": [
727,
728,
856,
943,
1070,
1110,
1205,
1305,
1466,
1670,
1729
],
"mask": "Train"
},
{
"node_id": 1017,
"label": 2,
"text": "Title: Using Neural Networks for Descriptive Statistical Analysis of Educational Data \nAbstract: In this paper we discuss the methodological issues of using a class of neural networks called Mixture Density Networks (MDN) for discriminant analysis. MDN models have the advantage of having a rigorous probabilistic interpretation, and they have proven to be a viable alternative as a classification procedure in discrete domains. We will address both the classification and interpretive aspects of discriminant analysis, and compare the approach to the traditional method of linear discrimin- ants as implemented in standard statistical packages. We show that the MDN approach adopted performs well in both aspects. Many of the observations made are not restricted to the particular case at hand, and are applicable to most applications of discriminant analysis in educational research. fl URL: http://www.cs.Helsinki.FI/research/cosco/ ",
"neighbors": [
74,
157,
1574
],
"mask": "Train"
},
{
"node_id": 1018,
"label": 1,
"text": "Title: Simulated Annealing for Hard Satisfiability Problems \nAbstract: Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a simulated annealing algorithm (SASAT) with GSAT (Selman et al., 1992), a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are extremely difficult for traditional satisfiability algorithms. Results suggest that SASAT scales up better as the number of variables increases, solving at least as many hard SAT problems with less effort. The paper then presents an ablation study that helps to explain the relative advantage of SASAT over GSAT. Finally, an improvement to the basic SASAT algorithm is examined, based on a random walk suggested by Selman et al. (1993). ",
"neighbors": [
1136,
1139,
1142
],
"mask": "Train"
},
{
"node_id": 1019,
"label": 6,
"text": "Title: Bibliography \"SMART: Support Management Automated Reasoning Technology for COMPAQ Customer Service,\" \"Instance-Based Learning Algorithms,\" Machine\nAbstract: Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a simulated annealing algorithm (SASAT) with GSAT (Selman et al., 1992), a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are extremely difficult for traditional satisfiability algorithms. Results suggest that SASAT scales up better as the number of variables increases, solving at least as many hard SAT problems with less effort. The paper then presents an ablation study that helps to explain the relative advantage of SASAT over GSAT. Finally, an improvement to the basic SASAT algorithm is examined, based on a random walk suggested by Selman et al. (1993). ",
"neighbors": [
853,
862,
906,
926,
927,
1256,
1290
],
"mask": "Validation"
},
{
"node_id": 1020,
"label": 3,
"text": "Title: Error-Based and Entropy-Based Discretization of Continuous Features \nAbstract: We present a comparison of error-based and entropy-based methods for discretization of continuous features. Our study includes both an extensive empirical comparison as well as an analysis of scenarios where error minimization may be an inappropriate discretization criterion. We present a discretization method based on the C4.5 decision tree algorithm and compare it to an existing entropy-based discretization algorithm, which employs the Minimum Description Length Principle, and a recently proposed error-based technique. We evaluate these discretization methods with respect to C4.5 and Naive-Bayesian classifiers on datasets from the UCI repository and analyze the computational complexity of each method. Our results indicate that the entropy-based MDL heuristic outperforms error minimization on average. We then analyze the shortcomings of error-based approaches in comparison to entropy-based methods. ",
"neighbors": [
430,
1322,
1328,
1329,
1337,
2577
],
"mask": "Test"
},
{
"node_id": 1021,
"label": 2,
"text": "Title: Lemma 2.3 The system is reachable and observable and realizes the same input/output behavior as\nAbstract: Here we show a similar construction for multiple-output systems, with some modifications. Let = (A; B; C) s be a discrete-time sign-linear system with state space IR n and p outputs. Perform a change of ; where A 1 (n 1 fi n 1 ) is invertible and A 2 (n 2 fi n 2 ) is nilpotent. If (A; B) is a reachable pair and (A; C) is an observable pair, then is minimal in the sense that any other sign-linear system with the same input/output behavior has dimension at least n. But, if n 1 < n, then det A = 0 and is not observable and hence not canonical. Let us find another system ~ (necessarily not sign-linear) which has the same input/output behavior as , but is canonical. Let i be the relative degree of the ith row of the Markov sequence A, and = minf i : i = 1; : : : ; pg. Let the initial state be x. There is a difference between the case when the smallest relative degree is greater or equal to n 2 and the case when < n 2 . Roughly speaking, when n 2 the outputs of the sign-linear system give us information about sign (Cx), sign (CAx), : : : , sign (CA 1 x), which are the first outputs of the sys tem. After that, we can use the inputs and outputs to learn only about x 1 (the first n 1 components of x). When < n 2 , we may be able to use some controls to learn more about x 2 (the last n 2 components of x) before time n 2 when the nilpotency of A 2 has finally Lemma 2.4 Two states x and z are indistinguishable for if and only if (x) = (z). Proof. In the case n 2 , we have only the equations x 1 = z 1 and the equality of the 's. The first ` output terms for are exactly the terms of . So these equalities are satisfied if and only if the first ` output terms coincide for x and z, for any input. Equality of everything but the first n 1 components is equivalent to the first n 2 output terms coinciding for x and z, since the jth row of the qth output, for initial state x, for example, is either sign (c j A q x) if j > q, or sign (c j A q x + + A j j u q j +1 + ) if j q in which case we may use the control u q j +1 to identify c j A q x (using Remark 3.3 in [1]). ",
"neighbors": [
1464
],
"mask": "Train"
},
{
"node_id": 1022,
"label": 2,
"text": "Title: j \nAbstract: So applying Corollary 4.3 to the second equation in (47), we conclude that From (38), we then get jg(y n + ~ k( y (51), we obtain jy n + ~ k( From (39) we see that the right-hand side of (54) is bounded by . Since the system _ y = A 1 y k( y )b 2 jyj ev N : (55) Now, suppose lim sup t!1 jy(t)j = > 0. Then jyj ev 2. Since j k(y)j Ljyj, we have and using (56) and (57), we obtain j~yj ev 2(~-1 + -2 )L + -2 ffi : (58) (Note that if the right-hand side of (58) is 1 , then the inequality is trivial since we know from (52) that j~yj ev 1 .) From (53), (56), and (58), we have -2 ffi + N ffi > N . However, from (55) we see that (60) still holds. So we established (60) in all cases. From (40) we then get jyj ev 2 Taking the lim sup t!1 of the left-hand side of (61), we have 1 2 + N(\"-2 + 1)ffi i.e. 2 N(\"-2 + 1)ffi. Substituting this into (58) and (61), we get j~yj ev ffi, and jyj ev 2 N(\"-2 + 1)ffi . So, if we take N = 2 N(\"-2 + 1)(1 + 2L(~-1 + -2 )) + -2 ; the conclusion follows. To complete the proof, we need to deal with the general case of m > 1 inputs. This is done by induction on m, as in the proof in [14], and will be omitted here. 2 [1] Fuller, A.T., \"In the large stability of relay and saturated control systems with linear controllers,\" Int. J. Control, 10(1969): 457-480. [2] Gutman, P-O., and P. Hagander, \"A new design of constrained controllers for linear systems,\" IEEE Transactions on Automat. Contr. AC-30(1985): 22-23. [3] Kosut, R.L., \"Design of linear systems with saturating linear control and bounded states,\" IEEE Trans. Au-tom. Control AC-28(1983): 121-124. [4] Krikelis, N.J., and S.K. Barkas, \"Design of tracking systems subject to actuator saturation and integrator wind-up,\" Int. J. Control 39(1984): 667-682. [5] Schmitendorf, W.E. and B.R. Barmish, \"Null controllability of linear systems with constrained controls,\" SIAM J. Control and Opt. 18(1980): 327-345. [6] Slemrod, M., \"Feedback stabilization of a linear control system in Hilbert space,\" Math. Control Signals Systems 2(1989): 265-285. [7] Slotine, J-J.E., and W. Li, Applied Nonlinear Control, Prentice-Hall, Englewood Cliffs, 1991. [8] Sontag, E.D., \"An algebraic approach to bounded controllability of linear systems,\" Int. J. Control 39(1984): 181-188. [9] Sontag, E.D., \"Remarks on stabilization and input-to-state stability,\" Proc. IEEE CDC, Tampa, Dec. 1989, IEEE Publications, 1989, pp. 1376-1378. [10] Sontag, E.D., Mathematical Control Theory: Deterministic Finite Dimensional Systems, Springer, New York, 1990. [11] Sontag, E.D., and H.J. Sussmann, \"Nonlinear output feedback design for linear systems with saturating controls,\" Proc. IEEE CDC, Honolulu, Dec. 1990, IEEE Publications, 1990, pp. 3414-3416. [12] Sussmann, H. J. and Y. Yang, \"On the stabilizability of multiple integrators by means of bounded feedback controls\" Proc. IEEE CDC, Brighton, UK, Dec. 1991, IEEE Publications, 1991: 70-73. [13] Teel, A.R., \"Global stabilization and restricted tracking for multiple integrators with bounded controls,\" Systems and Control Letters 18(1992): 165-171. [14] Yang, Y., H.J. Sussmann and E.D. Sontag, \"Stabilization of linear systems with bounded controls,\" Proc. June 1992 NOLCOS, Bordeaux, (M. Fliess, Ed.), IFAC Publications, pp. 15-20. [15] Yang, Y., Global Stabilization of Linear Systems with Bounded Feedback, Ph. D. Thesis, Mathematics Department, Rutgers University, 1993. j~yj ev M (~-1 + -2 ) + ffi-2 ; (50)",
"neighbors": [
1282
],
"mask": "Test"
},
{
"node_id": 1023,
"label": 2,
"text": "Title: Data Reconciliation and Gross Error Detection for Dynamic Systems \nAbstract: Gross error detection plays a vital role in parameter estimation and data reconciliation for both dynamic and steady state systems. In particular, recent advances in process optimization now allow data reconciliation of dynamic systems and appropriate problem formulations need to be considered for them. Data errors due to either miscalibrated or faulty sensors or just random events nonrepresentative of the underlying statistical distribution, can induce heavy biases in the parameter estimates and in the reconciled data. In this paper we concentrate on robust estimators and exploratory statistical methods which allow us to detect the gross errors as the data reconciliation is performed. These robust methods have the property of being insensitive to departures from ideal statistical distributions and therefore are insensitive to the presence of outliers. Once the regression is done, the outliers can be detected readily by using exploratory statistical techniques. An important feature for performance of the optimization algorithm and uniqueness of the reconciled data is the ability to classify the variables according to their observability and redundancy properties. Here an observable variable is an unmeasured quantity which can be estimated from the measured variables through the physical model while a nonredundant variable is a measured variable which cannot be estimated other than through its measurements. Variable classification can be used as an aid to design instrumentation schemes. In this ",
"neighbors": [
878,
1090
],
"mask": "Train"
},
{
"node_id": 1024,
"label": 6,
"text": "Title: On learning hierarchical classifications \nAbstract: Many significant real-world classification tasks involve a large number of categories which are arranged in a hierarchical structure; for example, classifying documents into subject categories under the library of congress scheme, or classifying world-wide-web documents into topic hierarchies. We investigate the potential benefits of using a given hierarchy over base classes to learn accurate multi-category classifiers for these domains. First, we consider the possibility of exploiting a class hierarchy as prior knowledge that can help one learn a more accurate classifier. We explore the benefits of learning category-discriminants in a hard top-down fashion and compare this to a soft approach which shares training data among sibling categories. In doing so, we verify that hierarchies have the potential to improve prediction accuracy. But we argue that the reasons for this can be subtle. Sometimes, the improvement is only because using a hierarchy happens to constrain the expressiveness of a hypothesis class in an appropriate manner. However, various controlled experiments show that in other cases the performance advantage associated with using a hierarchy really does seem to be due to the prior knowledge it encodes. ",
"neighbors": [
74,
1053,
1335,
2338
],
"mask": "Validation"
},
{
"node_id": 1025,
"label": 6,
"text": "Title: Machine Learning 27(1):51-68, 1997. Predicting nearly as well as the best pruning of a decision tree \nAbstract: Many algorithms for inferring a decision tree from data involve a two-phase process: First, a very large decision tree is grown which typically ends up \"over-fitting\" the data. To reduce over-fitting, in the second phase, the tree is pruned using one of a number of available methods. The final tree is then output and used for classification on test data. In this paper, we suggest an alternative approach to the pruning phase. Using a given unpruned decision tree, we present a new method of making predictions on test data, and we prove that our algorithm's performance will not be \"much worse\" (in a precise technical sense) than the predictions made by the best reasonably small pruning of the given decision tree. Thus, our procedure is guaranteed to be competitive (in terms of the quality of its predictions) with any pruning algorithm. We prove that our procedure is very efficient and highly robust. Our method can be viewed as a synthesis of two previously studied techniques. First, we apply Cesa-Bianchi et al.'s [4] results on predicting using \"expert advice\" (where we view each pruning as an \"expert\") to obtain an algorithm that has provably low prediction loss, but that is com-putationally infeasible. Next, we generalize and apply a method developed by Buntine [3], [2] and Willems, Shtarkov and Tjalkens [20], [21] to derive a very efficient implementation of this procedure. ",
"neighbors": [
453,
569,
876,
1006,
1238,
1290,
1388,
1449,
1712
],
"mask": "Train"
},
{
"node_id": 1026,
"label": 6,
"text": "Title: Noise-Tolerant Parallel Learning of Geometric Concepts \nAbstract: We present several efficient parallel algorithms for PAC-learning geometric concepts in a constant-dimensional space that are robust even against malicious misclassification noise of any rate less than 1=2. In particular we consider the class of geometric concepts defined by a polynomial number of (d 1)-dimensional hyperplanes against an arbitrary distribution where each hyperplane has a slope from a set of known slopes, and the class of geometric concepts defined by a polynomial number of (d 1)-dimensional hyperplanes (of unrestricted slopes) against a product distribution. Next we define a complexity measure of any set S of (d1)-dimensional surfaces that we call the variant of S and prove that the class of geometric concepts defined by surfaces of polynomial variant can be efficiently learned in parallel under a product distribution (even under malicious misclassifi-cation noise). Finally, we describe how boosting techniques can be used so that our algorithms' de pendence on * and ffi does not depend on d.",
"neighbors": [
1105,
1433
],
"mask": "Train"
},
{
"node_id": 1027,
"label": 6,
"text": "Title: Pessimistic decision tree pruning based on tree size \nAbstract: In this work we develop a new criteria to perform pessimistic decision tree pruning. Our method is theoretically sound and is based on theoretical concepts such as uniform convergence and the Vapnik-Chervonenkis dimension. We show that our criteria is very well motivated, from the theory side, and performs very well in practice. The accuracy of the new criteria is comparable to that of the current method used in C4.5.",
"neighbors": [
56,
322,
378,
638,
1322,
1336,
1388
],
"mask": "Train"
},
{
"node_id": 1028,
"label": 2,
"text": "Title: NETWORKS, FUNCTION DETERMINES FORM \nAbstract: Report SYCON-92-03 ABSTRACT This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function ; if the two nets have equal behaviors as \"black boxes\" then necessarily they must have the same number of neurons and |except at most for sign reversals at each node| the same weights. ",
"neighbors": [
206,
1037,
1042,
1435,
1610
],
"mask": "Test"
},
{
"node_id": 1029,
"label": 3,
"text": "Title: Subregion-Adaptive Integration of Functions Having a Dominant Peak \nAbstract: Report SYCON-92-03 ABSTRACT This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function ; if the two nets have equal behaviors as \"black boxes\" then necessarily they must have the same number of neurons and |except at most for sign reversals at each node| the same weights. ",
"neighbors": [
1090,
2456
],
"mask": "Train"
},
{
"node_id": 1030,
"label": 1,
"text": "Title: Solving Combinatorial Problems Using Evolutionary Algorithms \nAbstract: Report SYCON-92-03 ABSTRACT This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function ; if the two nets have equal behaviors as \"black boxes\" then necessarily they must have the same number of neurons and |except at most for sign reversals at each node| the same weights. ",
"neighbors": [
163,
1136,
1796,
1946
],
"mask": "Train"
},
{
"node_id": 1031,
"label": 2,
"text": "Title: Protein Secondary Structure Modelling with Probabilistic Networks (Extended Abstract) \nAbstract: In this paper we study the performance of probabilistic networks in the context of protein sequence analysis in molecular biology. Specifically, we report the results of our initial experiments applying this framework to the problem of protein secondary structure prediction. One of the main advantages of the probabilistic approach we describe here is our ability to perform detailed experiments where we can experiment with different models. We can easily perform local substitutions (mutations) and measure (probabilistically) their effect on the global structure. Window-based methods do not support such experimentation as readily. Our method is efficient both during training and during prediction, which is important in order to be able to perform many experiments with different networks. We believe that probabilistic methods are comparable to other methods in prediction quality. In addition, the predictions generated by our methods have precise quantitative semantics which is not shared by other classification methods. Specifically, all the causal and statistical independence assumptions are made explicit in our networks thereby allowing biologists to study and experiment with different causal models in a convenient manner. ",
"neighbors": [
258,
1328
],
"mask": "Train"
},
{
"node_id": 1032,
"label": 6,
"text": "Title: Algorithmic Stability and Sanity-Check Bounds for Leave-One-Out Cross-Validation \nAbstract: In this paper we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate. The name sanity-check refers to the fact that although we often expect the leave-one-out estimate to perform considerably better than the training error estimate, we are here only seeking assurance that its performance will not be considerably worse. Perhaps surprisingly, such assurance has been given only for rather limited cases in the prior literature on cross-validation. Any nontrivial bound on the error of leave-one-out must rely on some notion of algorithmic stability. Previous bounds relied on the rather strong notion of hypothesis stability, whose application was primarily limited to nearest-neighbor and other local algorithms. Here we introduce the new and weaker notion of error stability, and apply it to obtain sanity-check bounds for leave-one-out for other classes of learning algorithms, including training error minimization procedures and Bayesian algorithms. We also provide lower bounds demonstrating the necessity of error stability for proving bounds on the error of the leave-one-out estimate, and the fact that for training error minimization algorithms, in the worst case such bounds must still depend on the Vapnik-Chervonenkis dimension of the hypothesis class. ",
"neighbors": [
591,
847,
848,
967,
1335
],
"mask": "Train"
},
{
"node_id": 1033,
"label": 0,
"text": "Title: Observation and Generalisation in a Simulated Robot World \nAbstract: This paper describes a program which observes the behaviour of actors in a simulated world and uses these observations as guides to conducting experiments. An experiment is a sequence of actions carried out by an actor in order to support or weaken the case for a generalisation of a concept. A generalisation is attempted when the program observes a state of the world which is similar to a some previous state. A partial matching algorithm is used to find substitutions which enable the two states to be unified. The generalisation of the two states is their unifier. ",
"neighbors": [
1174
],
"mask": "Test"
},
{
"node_id": 1034,
"label": 1,
"text": "Title: 1 GP-COM: A Distributed, Component-Based Genetic Programming System in C++ \nAbstract: Widespread adoption of Genetic Programming techniques as a domain-independent problem solving tool depends on a good underlying software structure. A system is presented that mirrors the conceptual makeup of a GP system. Consisting of a loose collection of software components, each with strict interface definitions and roles, the system maximises flexibility and minimises effort when applied to a new problem domain. ",
"neighbors": [
1178,
1730
],
"mask": "Validation"
},
{
"node_id": 1035,
"label": 1,
"text": "Title: An Empirical Investigation of Multi-Parent Recombination Operators in Evolution Strategies \nAbstract: Widespread adoption of Genetic Programming techniques as a domain-independent problem solving tool depends on a good underlying software structure. A system is presented that mirrors the conceptual makeup of a GP system. Consisting of a loose collection of software components, each with strict interface definitions and roles, the system maximises flexibility and minimises effort when applied to a new problem domain. ",
"neighbors": [
714,
1216,
1218
],
"mask": "Validation"
},
{
"node_id": 1036,
"label": 1,
"text": "Title: Adaptive Behavior in Competing Co-Evolving Species \nAbstract: Co-evolution of competitive species provides an interesting testbed to study the role of adaptive behavior because it provides unpredictable and dynamic environments. In this paper we experimentally investigate some arguments for the co-evolution of different adaptive protean behaviors in competing species of predators and preys. Both species are implemented as simulated mobile robots (Kheperas) with infrared proximity sensors, but the predator has an additional vision module whereas the prey has a maximum speed set to twice that of the predator. Different types of variability during life for neurocontrollers with the same architecture and genetic length are compared. It is shown that simple forms of pro-teanism affect co-evolutionary dynamics and that preys rather exploit noisy controllers to generate random trajectories, whereas predators benefit from directional-change controllers to improve pursuit behavior.",
"neighbors": [
538,
712,
1662
],
"mask": "Train"
},
{
"node_id": 1037,
"label": 2,
"text": "Title: OBSERVABILITY IN RECURRENT NEURAL NETWORKS \nAbstract: Report SYCON-92-07rev ABSTRACT We obtain a characterization of observability for a class of nonlinear systems which appear in neural networks research. ",
"neighbors": [
427,
1028,
1042,
1470,
1610
],
"mask": "Validation"
},
{
"node_id": 1038,
"label": 2,
"text": "Title: Brief Papers Computing Second Derivatives in Feed-Forward Networks: A Review \nAbstract: The calculation of second derivatives is required by recent training and analysis techniques of connectionist networks, such as the elimination of superfluous weights, and the estimation of confidence intervals both for weights and network outputs. We here review and develop exact and approximate algorithms for calculating second derivatives. For networks with jwj weights, simply writing the full matrix of second derivatives requires O(jwj 2 ) operations. For networks of radial basis units or sigmoid units, exact calculation of the necessary intermediate terms requires of the order of 2h + 2 backward/forward-propagation passes where h is the number of hidden units in the network. We also review and compare three approximations (ignoring some components of the second derivative, numerical differentiation, and scoring). Our algorithms apply to arbitrary activation functions, networks, and error functions (for instance, with connections that skip layers, or radial basis functions, or cross-entropy error and Softmax units, etc.). ",
"neighbors": [
157,
916,
1196
],
"mask": "Train"
},
{
"node_id": 1039,
"label": 0,
"text": "Title: Functional Programming by Analogy \nAbstract: In this paper we describe how the principles of problem solving by analogy can be applied to the domain of functional program synthesis. For this reason, we treat programs as syntactical structures. We discuss two different methods to handle these structures: (a) a graph metric for determining the distance between two program schemes, and (b) the Structure Mapping Engine (an existing system to examine analogical processing). Furthermore we show experimental results and discuss them. ",
"neighbors": [
1354
],
"mask": "Validation"
},
{
"node_id": 1040,
"label": 0,
"text": "Title: Learning from Examples: Reminding or Heuristic Switching? \nAbstract: In this paper we describe how the principles of problem solving by analogy can be applied to the domain of functional program synthesis. For this reason, we treat programs as syntactical structures. We discuss two different methods to handle these structures: (a) a graph metric for determining the distance between two program schemes, and (b) the Structure Mapping Engine (an existing system to examine analogical processing). Furthermore we show experimental results and discuss them. ",
"neighbors": [
1354
],
"mask": "Test"
},
{
"node_id": 1041,
"label": 2,
"text": "Title: The Potential of Prototype Styles of Generalization \nAbstract: There are many ways for a learning system to generalize from training set data. This paper presents several generalization styles using prototypes in an attempt to provide accurate generalization on training set data for a wide variety of applications. These generalization styles are efficient in terms of time and space, and lend themselves well to massively parallel architectures. Empirical results of generalizing on several real-world applications are given, and these results indicate that the prototype styles of generalization presented have potential to provide accurate generalization for many applications. ",
"neighbors": [
1321
],
"mask": "Train"
},
{
"node_id": 1042,
"label": 2,
"text": "Title: Recurrent Neural Networks: Some Systems-Theoretic Aspects \nAbstract: This paper provides an exposition of some recent research regarding system-theoretic aspects of continuous-time recurrent (dynamic) neural networks with sigmoidal activation functions. The class of systems is introduced and discussed, and a result is cited regarding their universal approximation properties. Known characterizations of controllability, ob-servability, and parameter identifiability are reviewed, as well as a result on minimality. Facts regarding the computational power of recurrent nets are also mentioned. fl Supported in part by US Air Force Grant AFOSR-94-0293",
"neighbors": [
206,
1028,
1037,
1043,
1435
],
"mask": "Train"
},
{
"node_id": 1043,
"label": 2,
"text": "Title: Complete Controllability of Continuous-Time Recurrent Neural Networks \nAbstract: This paper presents a characterization of controllability for the class of control systems commonly called (continuous-time) recurrent neural networks. The characterization involves a simple condition on the input matrix, and is proved when the activation function is the hyperbolic tangent.",
"neighbors": [
1042,
1435
],
"mask": "Train"
},
{
"node_id": 1044,
"label": 2,
"text": "Title: Word Perfect Corp. LIA: A Location-Independent Transformation for ASOCS Adaptive Algorithm 2 \nAbstract: Most Artificial Neural Networks (ANNs) have a fixed topology during learning, and often suffer from a number of shortcomings as a result. ANNs that use dynamic topologies have shown ability to overcome many of these problems. Adaptive Self Organizing Concurrent Systems (ASOCS) are a class of learning models with inherently dynamic topologies. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing learning models that use dynamic topologies efficiently in parallel hardware. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independent of other nodes, using local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents the Location - Independent ASOCS (LIA) model as a LIT for ASOCS Adaptive Algorithm 2. The description of LIA gives formal definitions for LIA algorithms. Because LIA implements basic ASOCS mechanisms, these definitions provide a formal description of basic ASOCS mechanisms in general, in addition to LIA. ",
"neighbors": [
809,
812,
814,
1341,
1365
],
"mask": "Validation"
},
{
"node_id": 1045,
"label": 4,
"text": "Title: Spurious Solutions to the Bellman Equation \nAbstract: Reinforcement learning algorithms often work by finding functions that satisfy the Bellman equation. This yields an optimal solution for prediction with Markov chains and for controlling a Markov decision process (MDP) with a finite number of states and actions. This approach is also frequently applied to Markov chains and MDPs with infinite states. We show that, in this case, the Bellman equation may have multiple solutions, many of which lead to erroneous predictions and policies (Baird, 1996). Algorithms and conditions are presented that guarantee a single, optimal solution to the Bellman equation.",
"neighbors": [
471,
1540
],
"mask": "Train"
},
{
"node_id": 1046,
"label": 0,
"text": "Title: Model-Based Learning of Structural Indices to Design Cases \nAbstract: A major issue in case-basedsystems is retrieving the appropriate cases from memory to solve a given problem. This implies that a case should be indexed appropriately when stored in memory. A case-based system, being dynamic in that it stores cases for reuse, needs to learn indices for the new knowledge as the system designers cannot envision that knowledge. Irrespective of the type of indexing (structural or functional), a hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary and learning the right level of generalization. In this paper we show how structure-behavior-function (SBF) models help in learning structural indices to design cases in the domain of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design provides both the vocabulary for structural indexing of design cases and the inductive biases for index generalization. We further discuss how model-based learning can be integrated with similarity-based learning (that uses prior design cases) for learning the level of index generalization. ",
"neighbors": [
612,
1121,
1344,
1345,
2706
],
"mask": "Test"
},
{
"node_id": 1047,
"label": 0,
"text": "Title: GIT-CC-92/60 A Model-Based Approach to Analogical Reasoning and Learning in Design \nAbstract: A major issue in case-basedsystems is retrieving the appropriate cases from memory to solve a given problem. This implies that a case should be indexed appropriately when stored in memory. A case-based system, being dynamic in that it stores cases for reuse, needs to learn indices for the new knowledge as the system designers cannot envision that knowledge. Irrespective of the type of indexing (structural or functional), a hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary and learning the right level of generalization. In this paper we show how structure-behavior-function (SBF) models help in learning structural indices to design cases in the domain of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design provides both the vocabulary for structural indexing of design cases and the inductive biases for index generalization. We further discuss how model-based learning can be integrated with similarity-based learning (that uses prior design cases) for learning the level of index generalization. ",
"neighbors": [
603,
1344,
1345,
1348,
1354,
1355,
1420
],
"mask": "Train"
},
{
"node_id": 1048,
"label": 0,
"text": "Title: Learning to Classify Observed Motor Behavior \nAbstract: We present a representational format for observed movements The representation has a temporal structure relating components of a single complex movement. We also present OXBOW, an unsupervised learning system, which constructs classes of these movements. Empirical results indicate that the system builds abstract movement concepts with appropriate component structure allowing it to predict the latter portions of a partially observed movement.",
"neighbors": [
984
],
"mask": "Test"
},
{
"node_id": 1049,
"label": 5,
"text": "Title: Machine Learning and Inference \nAbstract: Constructive induction divides the problem of learning an inductive hypothesis into two intertwined searches: onefor the best representation space, and twofor the best hypothesis in that space. In data-driven constructive induction (DCI), a learning system searches for a better representation space by analyzing the input examples (data). The presented data-driven constructive induction method combines an AQ-type learning algorithm with two classes of representation space improvement operators: constructors, and destructors. The implemented system, AQ17-DCI, has been experimentally applied to a GNP prediction problem using a World Bank database. The results show that decision rules learned by AQ17-DCI outperformed the rules learned in the original representation space both in predictive accuracy and rule simplicity.",
"neighbors": [
1292,
1329,
1498
],
"mask": "Train"
},
{
"node_id": 1050,
"label": 6,
"text": "Title: Extracting Support Data for a Given Task \nAbstract: We report a novel possibility for extracting a small subset of a data base which contains all the information necessary to solve a given classification task: using the Support Vector Algorithm to train three different types of handwritten digit classifiers, we observed that these types of classifiers construct their decision surface from strongly overlapping small ( 4%) subsets of the data base. This finding opens up the possibility of compressing data bases significantly by disposing of the data which is not important for the solution of a given task. In addition, we show that the theory allows us to predict the classifier that will have the best generalization ability, based solely on performance on the training set and characteristics of the learning machines. This finding is important for cases where the amount of available data is limited. ",
"neighbors": [
1171,
1306,
1389,
1492,
1499,
1591
],
"mask": "Train"
},
{
"node_id": 1051,
"label": 2,
"text": "Title: Interpreting neuronal population activity by reconstruction: A unified framework with application to hippocampal place cells \nAbstract: Physical variables such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity. Reconstruction is useful first in quantifying how much information about the physical variables is present in the population, and second, in providing insight into how the brain might use distributed representations in solving related computational problems such as visual object recognition and spatial navigation. Two classes of reconstruction methods, namely, probabilistic or Bayesian methods and basis function methods, are discussed. They include important existing methods as special cases, such as population vector coding, optimal linear estimation and template matching. As a representative example for the reconstruction problem, different methods were applied to multi-electrode spike train data from hippocampal place cells in freely moving rats. The reconstruction accuracy of the trajectories of the rats was compared for the different methods. Bayesian methods were especially accurate when a continuity constraint was enforced, and the best errors were within a factor of two of the the information-theoretic limit on how accurate any reconstruction can be, which were comparable with the intrinsic experimental errors in position tracking. In addition, the reconstruction analysis uncovered some interesting aspects of place cell activity, such as the tendency for erratic jumps of the reconstructed trajectory when the animal stopped running. In general, the theoretical values of the minimal achievable reconstruction errors quantify how accurately a physical variable is encoded in the neuronal population in the sense of mean square error, regardless of the method used for reading out the information. One related result is that the theoretical accuracy is independent of the width of the Gaussian tuning function only in two dimensions. Finally, all the reconstruction methods considered in this paper can be implemented by a unified neural network architecture, which the brain could feasibly use to solve related problems. ",
"neighbors": [
1052,
2576
],
"mask": "Train"
},
{
"node_id": 1052,
"label": 2,
"text": "Title: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory \nAbstract: The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be accurately controlled by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information. ",
"neighbors": [
600,
832,
1051,
1066
],
"mask": "Train"
},
{
"node_id": 1053,
"label": 6,
"text": "Title: Bias Plus Variance Decomposition for Zero-One Loss Functions \nAbstract: We present a bias-variance decomposition of expected misclassification rate, the most commonly used loss function in supervised classification learning. The bias-variance decomposition for quadratic loss functions is well known and serves as an important tool for analyzing learning algorithms, yet no decomposition was offered for the more commonly used zero-one (misclassification) loss functions until the recent work of Kong & Dietterich (1995) and Breiman (1996). Their decomposition suffers from some major shortcomings though (e.g., potentially negative variance), which our decomposition avoids. We show that, in practice, the naive frequency-based estimation of the decomposition terms is by itself biased and show how to correct for this bias. We illustrate the decomposition on various algorithms and datasets from the UCI repository.",
"neighbors": [
853,
931,
1024,
1191,
1197,
1463,
1568,
1607
],
"mask": "Test"
},
{
"node_id": 1054,
"label": 1,
"text": "Title: Evolving Sorting Networks using Genetic Programming and Rapidly Reconfigurable Field-Programmable Gate Arrays Convergent Design, L.L.C.\nAbstract: This paper describes ongoing work involving the use of the X i l i n x X C 6 2 1 6 r a p i d l y reconfigurable field-programmable gate ar ray to evol ve sorting n e t w o r k s u s i n g g e n e t i c programming. We successfully evolved a network for sorting seven items that employs two fewer steps than the sorting network described in a l962 patent and that has the same number of steps as the seven-sorter devised by Floyd and Knuth subsequent to the patent. ",
"neighbors": [
1249
],
"mask": "Validation"
},
{
"node_id": 1055,
"label": 2,
"text": "Title: Parsimonious Least Norm Approximation \nAbstract: A theoretically justifiable fast finite successive linear approximation algorithm is proposed for obtaining a parsimonious solution to a corrupted linear system Ax = b + p, where the corruption p is due to noise or error in measurement. The proposed linear-programming-based algorithm finds a solution x by parametrically minimizing the number of nonzero elements in x and the error k Ax b p k 1 . Numerical tests on a signal-processing-based example indicate that the proposed method is comparable to a method that parametrically minimizes the 1-norm of the solution x and the error k Ax b p k 1 , and that both methods are superior, by orders of magnitude, to solutions obtained by least squares as well by combinatorially choosing an optimal solution with a specific number of nonzero elements. ",
"neighbors": [
607,
1059,
1284
],
"mask": "Train"
},
{
"node_id": 1056,
"label": 2,
"text": "Title: Learning Viewpoint Invariant Face Representations from Visual Experience by Temporal Association \nAbstract: In natural visual experience, different views of an object or face tend to appear in close temporal proximity. A set of simulations is presented which demonstrate how viewpoint invariant representations of faces can be developed from visual experience by capturing the temporal relationships among the input patterns. The simulations explored the interaction of temporal smoothing of activity signals with Hebbian learning (Foldiak, 1991) in both a feed-forward system and a recurrent system. The recurrent system was a generalization of a Hopfield network with a lowpass temporal filter on all unit activities. Following training on sequences of graylevel images of faces as they changed pose, multiple views of a given face fell into the same basin of attraction, and the system acquired representations of faces that were approximately viewpoint invariant. ",
"neighbors": [
476,
676,
1014
],
"mask": "Train"
},
{
"node_id": 1057,
"label": 2,
"text": "Title: Submitted to the Future Generation Computer Systems special issue on Data Mining. Using Neural Networks\nAbstract: Neural networks have been successfully applied in a wide range of supervised and unsupervised learning applications. Neural-network methods are not commonly used for data-mining tasks, however, because they often produce incomprehensible models and require long training times. In this article, we describe neural-network learning algorithms that are able to produce comprehensible models, and that do not require excessive training times. Specifically, we discuss two classes of approaches for data mining with neural networks. The first type of approach, often called rule extraction, involves extracting symbolic models from trained neural networks. The second approach is to directly learn simple, easy-to-understand networks. We argue that, given the current state of the art, neural-network methods deserve a place in the tool boxes of data-mining specialists. ",
"neighbors": [
627,
631,
1307,
1359,
1484
],
"mask": "Validation"
},
{
"node_id": 1058,
"label": 2,
"text": "Title: Statistical Theory of Overtraining Is Cross-Validation Asymptotically Effective? \nAbstract: A statistical theory for overtraining is proposed. The analysis treats realizable stochastic neural networks, trained with Kullback-Leibler loss in the asymptotic case. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Considering cross-validation stopping we answer the question: In what ratio the examples should be divided into training and testing sets in order to obtain the optimum performance. In the non-asymptotic region cross-validated early stopping always decreases the generalization error. Our large scale simulations done on a CM5 are in nice agreement with our analytical findings. ",
"neighbors": [
1211,
2454
],
"mask": "Train"
},
{
"node_id": 1059,
"label": 2,
"text": "Title: Mathematical Programming in Data Mining \nAbstract: Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. This creates a lean model that often generalizes better to new unseen data. Computational results on real data confirm improved generalization of leaner models. Clustering is exemplified by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for knowledge discovery in databases (KDD). A mathematical programming formulation of this problem is proposed that is theoretically justifiable and computationally implementable in a finite number of steps. A resulting k-Median Algorithm is utilized to discover very useful survival curves for breast cancer patients from a medical database. Robust representation is concerned with minimizing trained model degradation when applied to new problems. A novel approach is proposed that purposely tolerates a small error in the training process in order to avoid overfitting data that may contain errors. Examples of applications of these concepts are given.",
"neighbors": [
1055
],
"mask": "Train"
},
{
"node_id": 1060,
"label": 1,
"text": "Title: An Overview of Genetic Algorithms Part 1, Fundamentals \nAbstract: Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. This creates a lean model that often generalizes better to new unseen data. Computational results on real data confirm improved generalization of leaner models. Clustering is exemplified by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for knowledge discovery in databases (KDD). A mathematical programming formulation of this problem is proposed that is theoretically justifiable and computationally implementable in a finite number of steps. A resulting k-Median Algorithm is utilized to discover very useful survival curves for breast cancer patients from a medical database. Robust representation is concerned with minimizing trained model degradation when applied to new problems. A novel approach is proposed that purposely tolerates a small error in the training process in order to avoid overfitting data that may contain errors. Examples of applications of these concepts are given.",
"neighbors": [
163,
237,
965,
1136,
1523,
1890,
2039
],
"mask": "Train"
},
{
"node_id": 1061,
"label": 5,
"text": "Title: Stochastic search in inductive concept learning \nAbstract: Concept learning can be viewed as search of the space of concept descriptions. The hypothesis language determines the search space. In standard inductive learning algorithms, the structure of the search space is determined by generalization/specialization operators. Algorithms perform locally optimal search by using a hill-climbing and/or a beam-search strategy. To overcome this limitation, concept learning can be viewed as stochastic search of the space of concept descriptions. The proposed stochastic search method is based on simulated annealing which is known as a successful means for solving combinatorial optimization problems. The stochastic search method, implemented in a rule learning system ATRIS, is based on a compact and efficient representation of the problem and the appropriate operators for structuring the search space. Furthermore, by heuristic pruning of the search space, the method enables also handling of imperfect data. The paper introduces the stochastic search method, describes the ATRIS learning algorithm and gives results of the experiments. ",
"neighbors": [
378,
426,
585,
1010,
1244,
1578,
1651
],
"mask": "Train"
},
{
"node_id": 1062,
"label": 6,
"text": "Title: Exponentially many local minima for single neurons \nAbstract: We show that for a single neuron with the logistic function as the transfer function the number of local minima of the error function based on the square loss can grow exponentially in the dimension.",
"neighbors": [
930,
1254,
1323,
2651
],
"mask": "Train"
},
{
"node_id": 1063,
"label": 1,
"text": "Title: An Analysis of the Effects of Neighborhood Size and Shape on Local Selection Algorithms \nAbstract: The increasing availability of finely-grained parallel architectures has resulted in a variety of evolutionary algorithms (EAs) in which the population is spatially distributed and local selection algorithms operate in parallel on small, overlapping neighborhoods. The effects of design choices regarding the particular type of local selection algorithm as well as the size and shape of the neighborhood are not particularly well understood and are generally tested empirically. In this paper we extend the techniques used to more formally analyze selection methods for sequential EAs and apply them to local neighborhood models, resulting in a much clearer understanding of the effects of neighborhood size and shape.",
"neighbors": [
1065,
1136,
1153,
1628
],
"mask": "Train"
},
{
"node_id": 1064,
"label": 3,
"text": "Title: Incremental Tradeoff Resolution in Qualitative Probabilistic Networks \nAbstract: Qualitative probabilistic reasoning in a Bayesian network often reveals tradeoffs: relationships that are ambiguous due to competing qualitative influences. We present two techniques that combine qualitative and numeric probabilistic reasoning to resolve such tradeoffs, inferring the qualitative relationship between nodes in a Bayesian network. The first approach incrementally marginalizes nodes that contribute to the ambiguous qualitative relationships. The second approach evaluates approximate Bayesian networks for bounds of probability distributions, and uses these bounds to determinate qualitative relationships in question. This approach is also incremental in that the algorithm refines the state spaces of random variables for tighter bounds until the qualitative relationships are resolved. Both approaches provide systematic methods for tradeoff resolution at potentially lower computational cost than application of purely numeric methods. ",
"neighbors": [
332,
623,
952,
1937
],
"mask": "Train"
},
{
"node_id": 1065,
"label": 1,
"text": "Title: A Survey of Parallel Genetic Algorithms \nAbstract: IlliGAL Report No. 97003 May 1997 ",
"neighbors": [
163,
1063,
1106,
1153,
1279,
1305
],
"mask": "Validation"
},
{
"node_id": 1066,
"label": 2,
"text": "Title: The Rectified Gaussian Distribution \nAbstract: A simple but powerful modification of the standard Gaussian distribution is studied. The variables of the rectified Gaussian are constrained to be nonnegative, enabling the use of nonconvex energy functions. Two multimodal examples, the competitive and cooperative distributions, illustrate the representational power of the rectified Gaussian. Since the cooperative distribution can represent the translations of a pattern, it demonstrates the potential of the rectified Gaussian for modeling pattern manifolds.",
"neighbors": [
36,
1052
],
"mask": "Validation"
},
{
"node_id": 1067,
"label": 2,
"text": "Title: A Fast Fixed-Point Algorithm for Independent Component Analysis \nAbstract: This paper will appear in Neural Computation, 9:1483-1492, 1997. Abstract We introduce a novel fast algorithm for Independent Component Analysis, which can be used for blind source separation and feature extraction. It is shown how a neural network learning rule can be transformed into a txed-point iteration, which provides an algorithm that is very simple, does not depend on any user-detned parameters, and is fast to converge to the most accurate solution allowed by the data. The algorithm tnds, one at a time, all non-Gaussian independent components, regardless of their probability distributions. The computations can be performed either in batch mode or in a semi-adaptive manner. The convergence of the algorithm is rigorously proven, and the convergence speed is shown to be cubic. Some comparisons to gradient based algorithms are made, showing that the new algorithm is usually 10 to 100 times faster, sometimes giving the solution in just a few iterations.",
"neighbors": [
570,
576,
834,
839,
1801,
1814
],
"mask": "Validation"
},
{
"node_id": 1068,
"label": 2,
"text": "Title: Neuronal Goals: Efficient Coding and Coincidence Detection \nAbstract: Barlow's seminal work on minimal entropy codes and unsupervised learning is reiterated. In particular, the need to transmit the probability of events is put in a practical neuronal framework for detecting suspicious events. A variant of the BCM learning rule [15] is presented together with some mathematical results suggesting optimal minimal entropy coding. ",
"neighbors": [
359,
726,
989,
2499,
2500
],
"mask": "Train"
},
{
"node_id": 1069,
"label": 1,
"text": "Title: Extended Selection Mechanisms in Genetic Algorithms \nAbstract: ",
"neighbors": [
163,
422,
793,
1096,
1455,
1685
],
"mask": "Train"
},
{
"node_id": 1070,
"label": 1,
"text": "Title: Extended Selection Mechanisms in Genetic Algorithms \nAbstract: A Genetic Algorithm Tutorial Darrell Whitley Technical Report CS-93-103 (Revised) November 10, 1993 ",
"neighbors": [
163,
793,
1016,
1153
],
"mask": "Train"
},
{
"node_id": 1071,
"label": 5,
"text": "Title: Machine Learning and Inference \nAbstract: Constructive induction divides the problem of learning an inductive hypothesis into two intertwined searches: onefor the best representation space, and twofor the best hypothesis in that space. In data-driven constructive induction (DCI), a learning system searches for a better representation space by analyzing the input examples (data). The presented data-driven constructive induction method combines an AQ-type learning algorithm with two classes of representation space improvement operators: constructors, and destructors. The implemented system, AQ17-DCI, has been experimentally applied to a GNP prediction problem using a World Bank database. The results show that decision rules learned by AQ17-DCI outperformed the rules learned in the original representation space both in predictive accuracy and rule simplicity.",
"neighbors": [
1292,
1329,
1498
],
"mask": "Test"
},
{
"node_id": 1072,
"label": 2,
"text": "Title: The Nonlinear PCA Learning Rule and Signal Separation Mathematical Analysis \nAbstract: Constructive induction divides the problem of learning an inductive hypothesis into two intertwined searches: onefor the best representation space, and twofor the best hypothesis in that space. In data-driven constructive induction (DCI), a learning system searches for a better representation space by analyzing the input examples (data). The presented data-driven constructive induction method combines an AQ-type learning algorithm with two classes of representation space improvement operators: constructors, and destructors. The implemented system, AQ17-DCI, has been experimentally applied to a GNP prediction problem using a World Bank database. The results show that decision rules learned by AQ17-DCI outperformed the rules learned in the original representation space both in predictive accuracy and rule simplicity.",
"neighbors": [
354,
839,
1520
],
"mask": "Validation"
},
{
"node_id": 1073,
"label": 5,
"text": "Title: An adaptation of Relief for attribute estimation in regression \nAbstract: Heuristic measures for estimating the quality of attributes mostly assume the independence of attributes so in domains with strong dependencies between attributes their performance is poor. Relief and its extension ReliefF are capable of correctly estimating the quality of attributes in classification problems with strong dependencies between attributes. By exploiting local information provided by different contexts they provide a global view. We present the analysis of Reli-efF which lead us to its adaptation to regression (continuous class) problems. The experiments on artificial and real-world data sets show that Re-gressional ReliefF correctly estimates the quality of attributes in various conditions, and can be used for non-myopic learning of the regression trees. Regressional ReliefF and ReliefF provide a unified view on estimating the attribute quality in regression and classification.",
"neighbors": [
314,
1112,
1182,
1569,
1636
],
"mask": "Train"
},
{
"node_id": 1074,
"label": 6,
"text": "Title: Inductive Logic Programming \nAbstract: A new research area, Inductive Logic Programming, is presently emerging. While inheriting various positive characteristics of the parent subjects of Logic Programming and Machine Learning, it is hoped that the new area will overcome many of the limitations of its forebears. The background to present developments within this area is discussed and various goals and aspirations for the increasing body of researchers are identified. Inductive Logic Programming needs to be based on sound principles from both Logic and Statistics. On the side of statistical justification of hypotheses we discuss the possible relationship between Algorithmic Complexity theory and Probably-Approximately-Correct (PAC) Learning. In terms of logic we provide a unifying framework for Muggleton and Buntine's Inverse Resolution (IR) and Plotkin's Relative Least General Generali-sation (RLGG) by rederiving RLGG in terms of IR. This leads to a discussion of the feasibility of extending the RLGG framework to allow for the invention of new predicates, previously discussed only within the context of IR. ",
"neighbors": [
109,
1174
],
"mask": "Train"
},
{
"node_id": 1075,
"label": 2,
"text": "Title: Bayesian Learning in Feed Forward Neural Networks \nAbstract: Bayesian methods are applicable to complex modeling tasks. In this review, the principles of Bayesian inference are presented and applied to neural network models. Several approximate implementations are discussed, and their advantages over conventional fre-quentist model training and selection are outlined. It is argued that Bayesian methods are preferable to traditional approaches, although empirical evidence for this is still sparse. ",
"neighbors": [
157,
1340
],
"mask": "Validation"
},
{
"node_id": 1076,
"label": 3,
"text": "Title: Learning Belief Networks from Data: An Information Theory Based Approach \nAbstract: This paper presents an efficient algorithm for learning Bayesian belief networks from databases. The algorithm takes a database as input and constructs the belief network structure as output. The construction process is based on the computation of mutual information of attribute pairs. Given a data set that is large enough, this algorithm can generate a belief network very close to the underlying model, and at the same time, enjoys the time When the data set has a normal DAG-Faithful (see Section 3.2) probability distribution, the algorithm guarantees that the structure of a perfect map [Pearl, 1988] of the underlying dependency model is generated. To evaluate this algorithm, we present the experimental results on three versions of the well-known ALARM network database, which has 37 attributes and 10,000 records. The results show that this algorithm is accurate and efficient. The proof of correctness and the analysis of complexity of O N( ) 4 on conditional independence (CI) tests.",
"neighbors": [
1078,
1086,
2461
],
"mask": "Train"
},
{
"node_id": 1077,
"label": 1,
"text": "Title: A Search Space Analysis of the Job Shop Scheduling Problem \nAbstract: A computational study for the Job Shop Scheduling Problem is presented. Thereby emphasis is put on the structure of the solution space as it appears for adaptive search. A statistical analysis of the search spaces reveals the impacts of inherent properties of the problem on adaptive heuristics. ",
"neighbors": [
1153
],
"mask": "Train"
},
{
"node_id": 1078,
"label": 3,
"text": "Title: An Algorithm for Bayesian Belief Network Construction from Data \nAbstract: This paper presents an efficient algorithm for constructing Bayesian belief networks from databases. The algorithm takes a database and an attributes ordering (i.e., the causal attributes of an attribute should appear earlier in the order) as input and constructs a belief network structure as output. The construction process is based on the computation of mutual information of attribute pairs. Given a data set which is large enough and has a DAG-Isomorphic probability distribution, this algorithm guarantees that the perfect map [1] of the underlying dependency tests. To evaluate this algorithm, we present the experimental results on three versions of the well-known ALARM network database, which has 37 attributes and 10,000 records. The correctness proof and the analysis of computational complexity are also presented. We also discuss the features of our work and relate it to previous works. model is generated, and at the same time, enjoys the time complexity of O N( ) 2 on conditional independence (CI)",
"neighbors": [
1076,
1086,
2461
],
"mask": "Train"
},
{
"node_id": 1079,
"label": 2,
"text": "Title: Nonlinear Prediction of Chaotic Time Series Using Support Vector Machines \nAbstract: A novel method for regression has been recently proposed by V. Vapnik et al. [8, 9]. The technique, called Support Vector Machine (SVM), is very well founded from the mathematical point of view and seems to provide a new insight in function approximation. We implemented the SVM and tested it on the same data base of chaotic time series that was used in [1] to compare the performances of different approximation techniques, including polynomial and rational approximation, local polynomial techniques, Radial Basis Functions, and Neural Networks. The SVM performs better than the approaches presented in [1]. We also study, for a particular time series, the variability in performance with respect to the few free parameters of SVM.",
"neighbors": [
611,
864,
975,
1009,
1103,
1315,
1389,
1718,
1724
],
"mask": "Train"
},
{
"node_id": 1080,
"label": 2,
"text": "Title: A Multi-Chip Module Implementation of a Neural Network \nAbstract: The requirement for dense interconnect in artificial neural network systems has led researchers to seek high-density interconnect technologies. This paper reports an implementation using multi-chip modules (MCMs) as the interconnect medium. The specific system described is a self-organizing, parallel, and dynamic learning model which requires a dense interconnect technology for effective implementation; this requirement is fulfilled by exploiting MCM technology. The ideas presented in this paper regarding an MCM implementation of artificial neural networks are versatile and can be adapted to apply to other neural network and connectionist models. ",
"neighbors": [
809,
812,
814,
1129,
1321
],
"mask": "Train"
},
{
"node_id": 1081,
"label": 5,
"text": "Title: Specialization of Recursive Predicates \nAbstract: When specializing a recursive predicate in order to exclude a set of negative examples without excluding a set of positive examples, it may not be possible to specialize or remove any of the clauses in a refutation of a negative example without excluding any positive exam ples. A previously proposed solution to this problem is to apply program transformation in order to obtain non-recursive target predicates from recursive ones. However, the application of this method prevents recursive specializations from being found. In this work, we present the algorithm spectre ii which is not limited to specializing non-recursive predicates. The key idea upon which the algorithm is based is that it is not enough to specialize or remove clauses in refutations of negative examples in order to obtain correct specializations, but it is sometimes necessary to specialize clauses that appear only in refutations of positive examples. In contrast to its predecessor spectre, the new algorithm is not limited to specializing clauses defining one predicate only, but may specialize clauses defining multiple predicates. Furthermore, the positive and negative examples are no longer required to be instances of the same predicate. It is proven that the algorithm produces a correct specialization when all positive examples are logical consequences of the original program, there is a finite number of derivations of positive and negative examples and when no positive and negative examples have the same sequence of input clauses in their refutations.",
"neighbors": [
521,
1082,
1259
],
"mask": "Train"
},
{
"node_id": 1082,
"label": 5,
"text": "Title: Specialization of Logic Programs by Pruning SLD-Trees \nAbstract: program w.r.t. positive and negative examples can be viewed as the problem of pruning an SLD-tree such that all refutations of negative examples and no refutations of positive examples are excluded. It is shown that the actual pruning can be performed by applying unfolding and clause removal. The algorithm spectre is presented, which is based on this idea. The input to the algorithm is, besides a logic program and positive and negative examples, a computation rule, which determines the shape of the SLD-tree that is to be pruned. It is shown that the generality of the resulting specialization is dependent on the computation rule, and experimental results are presented from using three different computation rules. The experiments indicate that the computation rule should be formulated so that the number of applications of unfolding is kept as low as possible. The algorithm, which uses a divide-and-conquer method, is also compared with a covering algorithm. The experiments show that a higher predictive accuracy can be achieved if the focus is on discriminating positive from negative examples rather than on achieving a high coverage of positive examples only. ",
"neighbors": [
521,
1081,
1259,
2312
],
"mask": "Train"
},
{
"node_id": 1083,
"label": 3,
"text": "Title: Inference in Cognitive Maps \nAbstract: Cognitive mapping is a qualitative decision modeling technique developed over twenty years ago by political scientists, which continues to see occasional use in social science and decision-aiding applications. In this paper, I show how cognitive maps can be viewed in the context of more recent formalisms for qualitative decision modeling, and how the latter provide a firm semantic foundation that can facilitate the development of more powerful inference procedures as well as extensions in expressiveness for models of this sort. ",
"neighbors": [
1660
],
"mask": "Train"
},
{
"node_id": 1084,
"label": 0,
"text": "Title: Continuous Case-Based Reasoning \nAbstract: Case-based reasoning systems have traditionally been used to perform high-level reasoning in problem domains that can be adequately described using discrete, symbolic representations. However, many real-world problem domains, such as autonomous robotic navigation, are better characterized using continuous representations. Such problem domains also require continuous performance, such as online sensorimotor interaction with the environment, and continuous adaptation and learning during the performance task. This article introduces a new method for continuous case-based reasoning, and discusses its application to the dynamic selection, modification, and acquisition of robot behaviors in an autonomous navigation system, SINS (Self-Improving Navigation System). The computer program and the underlying method are systematically evaluated through statistical analysis of results from several empirical studies. The article concludes with a general discussion of case-based reasoning issues addressed by this research. ",
"neighbors": [
858,
991,
2035
],
"mask": "Validation"
},
{
"node_id": 1085,
"label": 5,
"text": "Title: Reports of the GMU Machine Learning and Inference HOW DID AQ FACE THE EAST-WEST CHALLENGE?\nAbstract: The East-West Challenge is the title of the second international competition of machine learning programs, organized in the Fall 1994 by Donald Michie, Stephen Muggleton, David Page and Ashwin Srinivasan from Oxford University. The goal of the competition was to solve the TRAINS problems, that is to discover the simplest classification rules for train-like structured objects. The rule complexity was judged by a Prolog program that counted the number of various components in the rule expressed in the from of Prolog Horn clauses. There were 65 entries from several countries submitted to the competition. The GMU teams entry was generated by three members of the AQ family of learning programs: AQ-DT, INDUCE and AQ17-HCI. The paper analyses the results obtained by these programs and compares them to those obtained by other learning programs. It also presents ideas for further research that were inspired by the competition. One of these ideas is a challenge to the machine learning community to develop a measure of knowledge complexity that would adequately capture the cognitive complexity of knowledge. A preliminary measure of such cognitive complexity, called Ccomplexity, different from the Prolog-complexity (P-complexity) used in the competition, is briefly discussed. The authors thank Professors Donald Michie, Steve Muggleton, David Page and Ashwin Srinivasan for organizing the West-East Challenge competition of machine learning programs, which provided us with a stimulating challenge for our learning programs and inspired new ideas for improving them. The authors also thank Nabil Allkharouf and Ali Hadjarian for their help and suggestions in the efforts to solve problems posed by the competition. This research was conducted in the Center for Machine Learning and Inference at George Mason University. The Center's research is supported in part by the Advanced Research Projects Agency under Grant No. N00014-91-J-1854, administered by the Office of Naval Research, and Grant No. F49620-92-J-0549, administered by the Air Force Office of Scientific Research, in part by the Office of Naval Research under Grant No. N00014-91-J-1351, and in part by the National Science Foundation under Grants No. IRI-9020266, CDA-9309725 and DMI-9496192. ",
"neighbors": [
1292
],
"mask": "Test"
},
{
"node_id": 1086,
"label": 3,
"text": "Title: An Algorithm for the Construction of Bayesian Network Structures from Data \nAbstract: Previous algorithms for the construction of Bayesian belief network structures from data have been either highly dependent on conditional independence (CI) tests, or have required an ordering on the nodes to be supplied by the user. We present an algorithm that integrates these two approaches - CI tests are used to generate an ordering on the nodes from the database which is then used to recover the underlying Bayesian network structure using a non CI based method. Results of preliminary evaluation of the algorithm on two networks (ALARM and LED) are presented. We also discuss some algo rithm performance issues and open problems.",
"neighbors": [
1076,
1078,
1240,
1527,
1545,
1582,
1641
],
"mask": "Train"
},
{
"node_id": 1087,
"label": 3,
"text": "Title: The covariance inflation criterion for adaptive model selection \nAbstract: We propose a new criterion for model selection in prediction problems. The covariance inflation criterion adjusts the training error by the average covariance of the predictions and responses, when the prediction rule is applied to permuted versions of the dataset. This criterion can be applied to general prediction problems (for example regression or classification), and to general prediction rules (for example stepwise regression, tree-based models and neural nets). As a byproduct we obtain a measure of the effective number of parameters used by an adaptive procedure. We relate the covariance inflation criterion to other model selection procedures and illustrate its use in some regression and classification problems. We also revisit the conditional bootstrap approach to model selection. ",
"neighbors": [
1512
],
"mask": "Train"
},
{
"node_id": 1088,
"label": 2,
"text": "Title: SUPERVISED COMPETITIVE LEARNING FOR FINDING POSITIONS OF RADIAL BASIS FUNCTIONS \nAbstract: This paper introduces the magnetic neural gas (MNG) algorithm, which extends unsupervised competitive learning with class information to improve the positioning of radial basis functions. The basic idea of MNG is to discover heterogeneous clusters (i.e., clusters with data from different classes) and to migrate additional neurons towards them. The discovery is effected by a heterogeneity coefficient associated with each neuron and the migration is guided by introducing a kind of magnetic effect. The performance of MNG is tested on a number of data sets, including the thyroid data set. Results demonstrate promise. ",
"neighbors": [
1565
],
"mask": "Validation"
},
{
"node_id": 1089,
"label": 0,
"text": "Title: Modeling Analogical Problem Solving in a Production System Architecture \nAbstract: This research is supported by a National Science Foundation Fellowship awarded to Dario Salvucci and Office of Naval Research grant N00014-96-1-0491 awarded to John Anderson. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation, the Office of Naval Research, or the United States government. ",
"neighbors": [
1354
],
"mask": "Validation"
},
{
"node_id": 1090,
"label": 2,
"text": "Title: Inference in Dynamic Error-in-Variable-Measurement Problems \nAbstract: Efficient algorithms have been developed for estimating model parameters from measured data, even in the presence of gross errors. In addition to point estimates of parameters, however, assessments of uncertainty are needed. Linear approximations provide standard errors, but these can be misleading when applied to models that are substantially nonlinear. To overcome this difficulty, \"profiling\" methods have been developed for the case in which the regressor variables are error free. In this paper we extend profiling methods to Error-in-Variable-Measurement (EVM) models. We use Laplace's method to integrate out the incidental parameters associated with the measurement errors, and then apply profiling methods to obtain approximate confidence contours for the parameters. This approach is computationally efficient, requiring few function evaluations, and can be applied to large scale problems. It is useful when the certain measurement errors (e.g., input variables) are relatively small, but not so small that they can be ignored. ",
"neighbors": [
1023,
1029
],
"mask": "Test"
},
{
"node_id": 1091,
"label": 2,
"text": "Title: Implicit learning in 3D object recognition: The importance of temporal context \nAbstract: A novel architecture and set of learning rules for cortical self-organization is proposed. The model is based on the idea that multiple information channels can modulate one another's plasticity. Features learned from bottom-up information sources can thus be influenced by those learned from contextual pathways, and vice versa. A maximum likelihood cost function allows this scheme to be implemented in a biologically feasible, hierarchical neural circuit. In simulations of the model, we first demonstrate the utility of temporal context in modulating plasticity. The model learns a representation that categorizes people's faces according to identity, independent of viewpoint, by taking advantage of the temporal continuity in image sequences. In a second set of simulations, we add plasticity to the contextual stream and explore variations in the architecture. In this case, the model learns a two-tiered representation, starting with a coarse view-based clustering and proceeding to a finer clustering of more specific stimulus features. This model provides a tenable account of how people may perform 3D object recognition in a hierarchical, bottom-up fashion. ",
"neighbors": [
476,
676,
1014,
1288
],
"mask": "Train"
},
{
"node_id": 1092,
"label": 6,
"text": "Title: Pruning Adaptive Boosting ICML-97 Final Draft \nAbstract: The boosting algorithm AdaBoost, developed by Freund and Schapire, has exhibited outstanding performance on several benchmark problems when using C4.5 as the \"weak\" algorithm to be \"boosted.\" Like other ensemble learning approaches, AdaBoost constructs a composite hypothesis by voting many individual hypotheses. In practice, the large amount of memory required to store these hypotheses can make ensemble methods hard to deploy in applications. This paper shows that by selecting a subset of the hypotheses, it is possible to obtain nearly the same levels of performance as the entire set. The results also provide some insight into the behavior of AdaBoost.",
"neighbors": [
569,
1484
],
"mask": "Validation"
},
{
"node_id": 1093,
"label": 2,
"text": "Title: The role of afferent excitatory and lateral inhibitory synaptic plasticity in visual cortical ocular dominance\nAbstract: The boosting algorithm AdaBoost, developed by Freund and Schapire, has exhibited outstanding performance on several benchmark problems when using C4.5 as the \"weak\" algorithm to be \"boosted.\" Like other ensemble learning approaches, AdaBoost constructs a composite hypothesis by voting many individual hypotheses. In practice, the large amount of memory required to store these hypotheses can make ensemble methods hard to deploy in applications. This paper shows that by selecting a subset of the hypotheses, it is possible to obtain nearly the same levels of performance as the entire set. The results also provide some insight into the behavior of AdaBoost.",
"neighbors": [
122,
355,
1659,
2085,
2228
],
"mask": "Train"
},
{
"node_id": 1094,
"label": 2,
"text": "Title: Plasticity in cortical neuron properties: Modeling the effects of an NMDA antagonist and a GABA\nAbstract: Infusion of a GABA agonist (Reiter & Stryker, 1988) and infusion of an NMDA receptor antagonist (Bear et al., 1990), in the primary visual cortex of kittens during monocular deprivation, shifts ocular dominance toward the closed eye, in the cortical region near the infusion site. This reverse ocular dominance shift has been previously modeled by variants of a covariance synaptic plasticity rule (Bear et al., 1990; Clothiaux et al., 1991; Miller et al., 1989; Reiter & Stryker, 1988). Kasamatsu et al. (1997, 1998) showed that infusion of an NMDA receptor antagonist in adult cat primary visual cortex changes ocular dominance distribution, reduces binocularity, and reduces orientation and direction selectivity. This paper presents a novel account of the effects of these pharmacological treatments, based on the EXIN synaptic plasticity rules (Marshall, 1995), which include both an instar afferent excitatory and an outstar lateral inhibitory rule. Functionally, the EXIN plasticity rules enhance the efficiency, discrimination, and context-sensitivity of a neural network's representation of perceptual patterns (Marshall, 1995; Marshall & Gupta, 1998). The EXIN model decreases lateral inhibition from neurons outside the infusion site (control regions) to neurons inside the infusion region, during monocular deprivation. In the model, plasticity in afferent pathways to neurons affected by the pharmacological treatments is assumed to be blocked , as opposed to previous models (Bear et al., 1990; Miller et al., 1989; Reiter & Stryker, 1988), in which afferent pathways from the open eye to neurons in the infusion region are weakened . The proposed model is consistent with results suggesting that long-term plasticity can be blocked by NMDA antagonists or by postsynaptic hyperpolarization (Bear et al., 1990; Dudek & Bear, 1992; Goda & Stevens, 1996; Kirkwood et al., 1993). Since the role of plasticity in lateral inhibitory pathways in producing cortical plasticity has not received much attention, several predictions are made based on the EXIN lateral inhibitory plasticity rule. ",
"neighbors": [
355,
1659,
2085,
2228
],
"mask": "Validation"
},
{
"node_id": 1095,
"label": 6,
"text": "Title: Learning Unions of Boxes with Membership and Equivalence Queries \nAbstract: We present two algorithms that use membership and equivalence queries to exactly identify the concepts given by the union of s discretized axis-parallel boxes in d-dimensional discretized Euclidean space where each coordinate can have n discrete values. The first algorithm receives at most sd counterexamples and uses time and membership queries polynomial in s and log n for d any constant. Further, all equivalence queries made can be formulated as the union of O(sd log s) axis-parallel boxes. Next, we introduce a new complexity measure that better captures the complexity of a union of boxes than simply the number of boxes and dimensions. Our new measure, , is the number of segments in the target polyhedron where a segment is a maximum portion of one of the sides of the polyhedron that lies entirely inside or entirely outside each of the other halfspaces defining the polyhedron. We then present an improvement of our first algorithm that uses time and queries polynomial in and log n. The hypothesis class used here is decision trees of height at most 2sd. Further we can show that the time and queries used by this algorithm are polynomial in d and log n for s any constant thus generalizing the exact learnability of DNF formulas with a constant number of terms. In fact, this single algorithm is efficient for either s or d constant. ",
"neighbors": [
792,
798,
1360,
1433,
1456
],
"mask": "Validation"
},
{
"node_id": 1096,
"label": 1,
"text": "Title: Pruning backpropagation neural networks using modern stochastic optimization techniques \nAbstract: Approaches combining genetic algorithms and neural networks have received a great deal of attention in recent years. As a result, much work has been reported in two major areas of neural network design: training and topology optimization. This paper focuses on the key issues associated with the problem of pruning a multilayer perceptron using genetic algorithms and simulated annealing. The study presented considers a number of aspects associated with network training that may alter the behavior of a stochastic topology optimizer. Enhancements are discussed that can improve topology searches. Simulation results for the two mentioned stochastic optimization methods applied to nonlinear system identification are presented and compared with a simple random search. ",
"neighbors": [
1069
],
"mask": "Train"
},
{
"node_id": 1097,
"label": 3,
"text": "Title: Belief Propagation and Revision in Networks with Loops \nAbstract: Local belief propagation rules of the sort proposed by Pearl (1988) are guaranteed to converge to the optimal beliefs for singly connected networks. Recently, a number of researchers have empirically demonstrated good performance of these same algorithms on networks with loops, but a theoretical understanding of this performance has yet to be achieved. Here we lay a foundation for an understanding of belief propagation in networks with loops. For networks with a single loop, we derive an analytical relationship between the steady state beliefs in the loopy network and the true posterior probability. Using this relationship we show a category of networks for which the MAP estimate obtained by belief update and by belief revision can be proven to be optimal (although the beliefs will be incorrect). We show how nodes can use local information in the messages they receive in order to correct the steady state beliefs. Furthermore we prove that for all networks with a single loop, the MAP estimate obtained by belief revision at convergence is guaranteed to give the globally optimal sequence of states. The result is independent of the length of the cycle and the size of the state space. For networks with multiple loops, we introduce the concept of a \"balanced network\" and show simulation results comparing belief revision and update in such networks. We show that the Turbo code structure is balanced and present simulations on a toy Turbo code problem indicating the decoding obtained by belief revision at convergence is significantly more likely to be correct. This report describes research done at the Center for Biological and Computational Learning and the Department of Brain and Cognitive Sciences of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. YW was also supported by NEI R01 EY11005 to E. H. Adelson ",
"neighbors": [
1393
],
"mask": "Train"
},
{
"node_id": 1098,
"label": 1,
"text": "Title: Scheduling Maintenance of Electrical Power Transmission Networks Using Genetic Programming \nAbstract: Previous work showed the combination of a Genetic Algorithm using an order or permutation chromosome combined with hand coded \"Greedy\" Optimizers can readily produce an optimal schedule for a four node test problem [ Langdon, 1995 ] . Following this the same GA has been used to find low cost schedules for the South Wales region of the UK high voltage power network. This paper describes the evolution of the best known schedule for the base South Wales problem using Genetic Programming starting from the hand coded heuris tics used with the GA.",
"neighbors": [
163,
343,
1305,
1911
],
"mask": "Train"
},
{
"node_id": 1099,
"label": 1,
"text": "Title: A Methodology for Processing Problem Constraints in Genetic Programming \nAbstract: Search mechanisms of artificial intelligence combine two elements: representation, which determines the search space, and a search mechanism, which actually explores the space. Unfortunately, many searches may explore redundant and/or invalid solutions. Genetic programming refers to a class of evolutionary algorithms based on genetic algorithms but utilizing a parameterized representation in the form of trees. These algorithms perform searches based on simulation of nature. They face the same problems of redundant/invalid subspaces. These problems have just recently been addressed in a systematic manner. This paper presents a methodology devised for the public domain genetic programming tool lil-gp. This methodology uses data typing and semantic information to constrain the representation space so that only valid, and possibly unique, solutions will be explored. The user enters problem-specific constraints, which are transformed into a normal set. This set is checked for feasibility, and subsequently it is used to limit the space being explored. The constraints can determine valid, possibly unique space. Moreover, they can also be used to exclude subspaces the user considers uninteresting, using some problem-specific knowledge. A simple example is followed thoroughly to illustrate the constraint language, transformations, and the normal set. Experiments with boolean 11-multiplexer illustrate practical applications of the method to limit redundant space exploration by utilizing problem-specific knowledge. fl Supported by a grant from NASA/JSC: NAG 9-847.",
"neighbors": [
163,
1178
],
"mask": "Train"
},
{
"node_id": 1100,
"label": 2,
"text": "Title: Observability of Linear Systems with Saturated Outputs \nAbstract: In this paper, we present necessary and sufficient conditions for observability of the class of output-saturated systems. These are linear systems whose output passes through a saturation function before it can be measured.",
"neighbors": [
1464
],
"mask": "Train"
},
{
"node_id": 1101,
"label": 0,
"text": "Title: On Reasoning from Data \nAbstract: In this paper, we present necessary and sufficient conditions for observability of the class of output-saturated systems. These are linear systems whose output passes through a saturation function before it can be measured.",
"neighbors": [
843,
1111,
1328
],
"mask": "Train"
},
{
"node_id": 1102,
"label": 5,
"text": "Title: Automated Refinement of First-Order Horn-Clause Domain Theories \nAbstract: Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, Forte (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. Forte uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. Forte is demonstrated in several domains, including logic programming and qualitative modelling. ",
"neighbors": [
136,
985,
1174,
1370,
1413,
1479
],
"mask": "Validation"
},
{
"node_id": 1103,
"label": 2,
"text": "Reference: [39] Yoda, M. (1994).
Predicting the Tokyo stock market. In Deboeck, G.J. (Ed.) (1994). Trading on the Edge. New York: Wiley., 66-79. VITA Graduate School Southern Illinois University Daniel Nikolaev Nikovski Date of Birth: April 13, 1969 606 West College Street, Apt.4, Rm. 6, Carbondale, Illinois 62901 150 Hristo Botev Boulevard, Apt. 54, 4004 Plovdiv, Bulgaria Technical University - Sofia, Bulgaria Engineer of Computer Systems and Control Thesis Title: Adaptive Computation Techniques for Time Series Analysis Major Professor: Dr. Mehdi Zargham \nAbstract: Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, Forte (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. Forte uses these techniques within a hill-climbing framework, guided by a global heuristic. It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. Forte is demonstrated in several domains, including logic programming and qualitative modelling. ",
"neighbors": [
74,
427,
611,
1079,
1579
],
"mask": "Train"
},
{
"node_id": 1104,
"label": 6,
"text": "Title: Feature Generation for Sequence Categorization \nAbstract: The problem of sequence categorization is to generalize from a corpus of labeled sequences procedures for accurately labeling future unlabeled sequences. The choice of representation of sequences can have a major impact on this task, and in the absence of background knowledge a good representation is often not known and straightforward representations are often far from optimal. We propose a feature generation method (called FGEN) that creates Boolean features that check for the presence or absence of heuristically selected collections of subsequences. We show empirically that the representation computed by FGEN improves the accuracy of two commonly used learning systems (C4.5 and Ripper) when the new features are added to existing representations of sequence data. We show the superiority of FGEN across a range of tasks selected from three domains: DNA sequences, Unix command sequences, and English text. ",
"neighbors": [
1260,
1262
],
"mask": "Test"
},
{
"node_id": 1105,
"label": 6,
"text": "Title: PAC Learning Intersections of Halfspaces with Membership Queries (Extended Abstract) \nAbstract: ",
"neighbors": [
591,
798,
1026,
2356
],
"mask": "Validation"
},
{
"node_id": 1106,
"label": 1,
"text": "Title: Genetic Algorithms for Combinatorial Optimization: The Assembly Line Balancing Problem \nAbstract: Genetic algorithms are one example of the use of a random element within an algorithm for combinatorial optimization. We consider the application of the genetic algorithm to a particular problem, the Assembly Line Balancing Problem. A general description of genetic algorithms is given, and their specialized use on our test-bed problems is discussed. We carry out extensive computational testing to find appropriate values for the various parameters associated with this genetic algorithm. These experiments underscore the importance of the correct choice of a scaling parameter and mutation rate to ensure the good performance of a genetic algorithm. We also describe a parallel implementation of the genetic algorithm and give some comparisons between the parallel and serial implementations. Both versions of the algorithm are shown to be effective in producing good solutions for problems of this type (with appropriately chosen parameters). ",
"neighbors": [
163,
1065,
1305
],
"mask": "Train"
},
{
"node_id": 1107,
"label": 0,
"text": "Title: The Possible Contribution of AI to the Avoidance of Crises and Wars: Using CBR Methods\nAbstract: This paper presents the application of Case-Based Reasoning methods to the KOSIMO data base of international conflicts. A Case-Based Reasoning tool - VIE-CBR has been deveolped and used for the classification of various outcome variables, like political, military, and territorial outcome, solution modalities, and conflict intensity. In addition, the case retrieval algorithms are presented as an interactive, user-modifiable tool for intelli gently searching the conflict data base for precedent cases. ",
"neighbors": [
1328,
1617
],
"mask": "Test"
},
{
"node_id": 1108,
"label": 3,
"text": "Title: Confidence as Higher Order Uncertainty proposed for handling higher order uncertainty, including the Bayesian approach,\nAbstract: ",
"neighbors": [
1503,
1504,
1506,
1507
],
"mask": "Train"
},
{
"node_id": 1109,
"label": 6,
"text": "Title: Inductive Bias in Case-Based Reasoning Systems \nAbstract: In order to learn more about the behaviour of case-based reasoners as learning systems, we form-alise a simple case-based learner as a PAC learning algorithm, using the case-based representation hCB; i. We first consider a `naive' case-based learning algorithm CB1( H ) which learns by collecting all available cases into the case-base and which calculates similarity by counting the number of features on which two problem descriptions agree. We present results concerning the consistency of this learning algorithm and give some partial results regarding its sample complexity. We are able to characterise CB1( H ) as a `weak but general' learning algorithm. We then consider how the sample complexity of case-based learning can be reduced for specific classes of target concept by the application of inductive bias, or prior knowledge of the class of target concepts. Following recent work demonstrating how case-based learning can be improved by choosing a similarity measure appropriate to the concept being learnt, we define a second case-based learning `algorithm' CB2 which learns using the best possible similarity measure that might be inferred for the chosen target concept. While CB2 is not an executable learning strategy (since the chosen similarity measure is defined in terms of a priori knowledge of the actual target concept) it allows us to assess in the limit the maximum possible contribution of this approach to case-based learning. Also, in addition to illustrating the role of inductive bias, the definition of CB2 simplifies the general problem of establishing which functions might be represented in the form hCB; i. Reasoning about the case-based representation in this special case has therefore been a little more straight-forward than in the general case of CB1( H ), allowing more substantial results regarding representable functions and sample complexity to be presented for CB2. In assessing these results, we are forced to conclude that case-based learning is not the best approach to learning the chosen concept space (the space of monomial functions). We discuss, however, how our study has demonstrated, in the context of case-based learning, the operation of concepts well known in machine learning such as inductive bias and the trade-off between computational complexity and sample complexity. ",
"neighbors": [
1328,
1570,
2151
],
"mask": "Train"
},
{
"node_id": 1110,
"label": 1,
"text": "Title: On The State of Evolutionary Computation \nAbstract: In the past few years the evolutionary computation landscape has been rapidly changing as a result of increased levels of interaction between various research groups and the injection of new ideas which challenge old tenets. The effect has been simultaneously exciting, invigorating, annoying, and bewildering to the old-timers as well as the new-comers to the field. Emerging out of all of this activity are the beginnings of some structure, some common themes, and some agreement on important open issues. We attempt to summarize these emergent properties in this paper. ",
"neighbors": [
163,
793,
1016,
1728,
1729
],
"mask": "Train"
},
{
"node_id": 1111,
"label": 0,
"text": "Title: Towards a Better Understanding of Memory-Based Reasoning Systems \nAbstract: We quantify both experimentally and analytically the performance of memory-based reasoning (MBR) algorithms. To start gaining insight into the capabilities of MBR algorithms, we compare an MBR algorithm using a value difference metric to a popular Bayesian classifier. These two approaches are similar in that they both make certain independence assumptions about the data. However, whereas MBR uses specific cases to perform classification, Bayesian methods summarize the data probabilistically. We demonstrate that a particular MBR system called Pebls works comparatively well on a wide range of domains using both real and artificial data. With respect to the artificial data, we consider distributions where the concept classes are separated by functional discriminants, as well as time-series data generated by Markov models of varying complexity. Finally, we show formally that Pebls can learn (in the limit) natural concept classes that the Bayesian classifier cannot learn, and that it will attain perfect accuracy whenever ",
"neighbors": [
258,
1101,
1328,
1339,
1412,
1570,
2292
],
"mask": "Test"
},
{
"node_id": 1112,
"label": 2,
"text": "Title: Flexible Metric Nearest Neighbor Classiflcation \nAbstract: The K-nearest-neighbor decision rule assigns an object of unknown class to the plurality class among the K labeled \"training\" objects that are closest to it. Closeness is usually deflned in terms of a metric distance on the Euclidean space with the input measurement variables as axes. The metric chosen to deflne this distance can strongly efiect performance. An optimal choice depends on the problem at hand as characterized by the respective class distributions on the input measurement space, and within a given problem, on the location of the unknown object in that space. In this paper new types of K-nearest-neighbor procedures are described that estimate the local relevance of each input variable, or their linear combinations, for each individual point to be classifled. This information is then used to separately customize the metric used to deflne distance from that object in flnding its nearest neighbors. These procedures are a hybrid between regular K-nearest-neighbor methods and treestructured recursive partitioning techniques popular in statistics and machine learning.",
"neighbors": [
101,
719,
926,
1073,
1403,
1512,
1618,
2415
],
"mask": "Test"
},
{
"node_id": 1113,
"label": 1,
"text": "Title: Staged Hybrid Genetic Search for Seismic Data Imaging \nAbstract: Seismic data interpretation problems are typically solved using computationally intensive local search methods which often result in inferior solutions. Here, a traditional hybrid genetic algorithm is compared with different staged hybrid genetic algorithms on the geophysical imaging static corrections problem. The traditional hybrid genetic algorithm used here applied local search to every offspring produced by genetic search. The staged hybrid genetic algorithms were designed to temporally separate the local and genetic search components into distinct phases so as to minimize interference between the two search methods. The results show that some staged hybrid genetic algorithms produce higher quality solutions while using significantly less computational time for this problem. ",
"neighbors": [
163,
1153,
1380
],
"mask": "Test"
},
{
"node_id": 1114,
"label": 1,
"text": "Title: Using Genetic Algorithms to Explore Pattern Recognition in the Immune System COMMENTS WELCOME \nAbstract: This paper describes an immune system model based on binary strings. The purpose of the model is to study the pattern recognition processes and learning that take place at both the individual and species levels in the immune system. The genetic algorithm (GA) is a central component of the model. The paper reports simulation experiments on two pattern recognition problems that are relevant to natural immune systems. Finally, it reviews the relation between the model and explicit fitness sharing techniques for genetic algorithms, showing that the immune system model implements a form of implicit fitness sharing. ",
"neighbors": [
163,
602,
1117,
1261,
1371,
1588,
1603,
1696
],
"mask": "Train"
},
{
"node_id": 1115,
"label": 2,
"text": "Title: The locally linear Nested Network for robot manipulation \nAbstract: We present a method for accurate representation of high-dimensional unknown functions from random samples drawn from its input space. The method builds representations of the function by recursively splitting the input space in smaller subspaces, while in each of these subspaces a linear approximation is computed. The representations of the function at all levels (i.e., depths in the tree) are retained during the learning process, such that a good generalisation is available as well as more accurate representations in some subareas. Therefore, fast and accurate learning are combined in this method. ",
"neighbors": [
820,
1252
],
"mask": "Validation"
},
{
"node_id": 1116,
"label": 2,
"text": "Title: Equivalence of Linear Boltzmann Chains and Hidden Markov Models sequence L, is: where Z(; A;\nAbstract: Several authors have made a link between hidden Markov models for time series and energy-based models (Luttrell 1989, Williams 1990, Saul and Jordan 1995). Saul and Jordan (1995) discuss a linear Boltzmann chain model with state-state transition energies A ii 0 (going from state i to state i 0 ) and symbol emission energies B ij , under which the probability of an entire state fi l ; j l g L Whilst any HMM can be written as a linear Boltzmann chain by setting exp(A ii 0 ) = a ii 0 , exp(B ij ) = b ij and exp( i ) = i , not all linear Boltzmann chains can be represented as HMMs (Saul and Jordan 1995). However, the difference between the two models is minimal. To be precise, if the final hidden ",
"neighbors": [
31,
611,
708,
736,
1593
],
"mask": "Train"
},
{
"node_id": 1117,
"label": 1,
"text": "Title: A Coevolutionary Approach to Learning Sequential Decision Rules \nAbstract: We present a coevolutionary approach to learning sequential decision rules which appears to have a number of advantages over non-coevolutionary approaches. The coevolutionary approach encourages the formation of stable niches representing simpler sub-behaviors. The evolutionary direction of each subbehavior can be controlled independently, providing an alternative to evolving complex behavior using intermediate training steps. Results are presented showing a significant learning rate speedup over a non-coevolutionary approach in a simulated robot domain. In addition, the results suggest the coevolutionary approach may lead to emer gent problem decompositions.",
"neighbors": [
247,
562,
910,
1114,
1225,
1261,
1588,
1603,
2089,
2332
],
"mask": "Train"
},
{
"node_id": 1118,
"label": 2,
"text": "Title: Adapting Bias by Gradient Descent: An Incremental Version of Delta-Bar-Delta \nAbstract: Appropriate bias is widely viewed as the key to efficient learning and generalization. I present a new algorithm, the Incremental Delta-Bar-Delta (IDBD) algorithm, for the learning of appropriate biases based on previous learning experience. The IDBD algorithm is developed for the case of a simple, linear learning system|the LMS or delta rule with a separate learning-rate parameter for each input. The IDBD algorithm adjusts the learning-rate parameters, which are an important form of bias for this system. Because bias in this approach is adapted based on previous learning experience, the appropriate testbeds are drifting or non-stationary learning tasks. For particular tasks of this type, I show that the IDBD algorithm performs better than ordinary LMS and in fact finds the optimal learning rates. The IDBD algorithm extends and improves over prior work by Jacobs and by me in that it is fully incremental and has only a single free parameter. This paper also extends previous work by presenting a derivation of the IDBD algorithm as gradient descent in the space of learning-rate parameters. Finally, I offer a novel interpretation of the IDBD algorithm as an incremental form of hold-one-out cross validation. ",
"neighbors": [
134,
1540
],
"mask": "Train"
},
{
"node_id": 1119,
"label": 2,
"text": "Title: Adaptive Parameter Pruning in Neural Networks \nAbstract: Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization. An open problem in the pruning methods known today (OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This paper presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. The results of extensive experimentation indicate that lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required. Results of statistical significance tests comparing autoprune to the new method lprune as well as to backpropagation with early stopping are given for 14 different problems. ",
"neighbors": [
881,
1203,
2405
],
"mask": "Train"
},
{
"node_id": 1120,
"label": 2,
"text": "Title: ICSIM: An Object-Oriented Connectionist Simulator gives an overview of the simulator. Its main concepts, the\nAbstract: ICSIM is a connectionist net simulator being developed at ICSI and written in Sather. It is object-oriented to meet the requirements for flexibility and reuse of homogeneous and structured connectionist nets and to allow the user to encapsulate efficient customized implementations perhaps running on dedicated hardware. Nets are composed by combining off-the-shelf library classes and if necessary by specializing some of their behaviour. General user interface classes allow a uniform or customized graphic presentation of the nets being modeled. ",
"neighbors": [
1677,
2275
],
"mask": "Train"
},
{
"node_id": 1121,
"label": 0,
"text": "Title: Generic Teleological Mechanisms and their Use in Case Adaptation \nAbstract: In experience-based (or case-based) reasoning, new problems are solved by retrieving and adapting the solutions to similar problems encountered in the past. An important issue in experience-based reasoning is to identify different types of knowledge and reasoning useful for different classes of case-adaptation tasks. In this paper, we examine a class of non-routine case-adaptation tasks that involve patterned insertions of new elements in old solutions. We describe a model-based method for solving this task in the context of the design of physical devices. The method uses knowledge of generic teleological mechanisms (GTMs) such as cascading. Old designs are adapted to meet new functional specifications by accessing and instantiating the appropriate GTM. The Kritik2 system evaluates the computational feasibility and sufficiency of this method for design adaptation. ",
"neighbors": [
540,
643,
806,
1046,
1138,
1344,
1640
],
"mask": "Validation"
},
{
"node_id": 1122,
"label": 0,
"text": "Title: A Comparative Utility Analysis of Case-Based Reasoning and Control-Rule Learning Systems \nAbstract: The utility problem in learning systems occurs when knowledge learned in an attempt to improve a system's performance degrades performance instead. We present a methodology for the analysis of utility problems which uses computational models of problem solving systems to isolate the root causes of a utility problem, to detect the threshold conditions under which the problem will arise, and to design strategies to eliminate it. We present models of case-based reasoning and control-rule learning systems and compare their performance with respect to the swamping utility problem. Our analysis suggests that case-based reasoning systems are more resistant to the utility problem than control-rule learning systems. 1",
"neighbors": [
578,
594,
717,
799,
1194,
1534
],
"mask": "Train"
},
{
"node_id": 1123,
"label": 0,
"text": "Title: MAC/FAC: A Model of Similarity-based Retrieval \nAbstract: We present a model of similarity-based retrieval which attempts to capture three psychological phenomena: (1) people are extremely good at judging similarity and analogy when given items to compare. (2) Superficial remindings are much more frequent than structural remindings. (3) People sometimes experience and use purely structural analogical re-mindings. Our model, called MAC/FAC (for \"many are called but few are chosen\") consists of two stages. The first stage (MAC) uses a computationally cheap, non-structural matcher to filter candidates from a pool of memory items. That is, we redundantly encode structured representations as content vectors, whose dot product yields an estimate of how well the corresponding structural representations will match. The second stage (FAC) uses SME to compute a true structural match between the probe and output from the first stage. MAC/FAC has been fully implemented, and we show that it is capable of modeling patterns of access found in psychological data. ",
"neighbors": [
75,
539,
541,
1176,
1188,
1354,
1483,
1674,
1680
],
"mask": "Train"
},
{
"node_id": 1124,
"label": 6,
"text": "Title: ON-LINE LEARNING OF LINEAR FUNCTIONS \nAbstract: We present an algorithm for the on-line learning of linear functions which is optimal to within a constant factor with respect to bounds on the sum of squared errors for a worst case sequence of trials. The bounds are logarithmic in the number of variables. Furthermore, the algorithm is shown to be optimally robust with respect to noise in the data (again to within a constant factor). Key words. Machine learning; computational learning theory; on-line learning; linear functions; worst-case loss bounds; adaptive filter theory. Subject classifications. 68T05. ",
"neighbors": [
453,
1566,
1567
],
"mask": "Validation"
},
{
"node_id": 1125,
"label": 0,
"text": "Title: Constructive Similarity Assessment: Using Stored Cases to Define New Situations \nAbstract: A fundamental issue in case-based reasoning is similarity assessment: determining similarities and differences between new and retrieved cases. Many methods have been developed for comparing input case descriptions to the cases already in memory. However, the success of such methods depends on the input case description being sufficiently complete to reflect the important features of the new situation, which is not assured. In case-based explanation of anomalous events during story understanding, the anomaly arises because the current situation is incompletely understood; consequently, similarity assessment based on matches between known current features and old cases is likely to fail because of gaps in the current case's description. Our solution to the problem of gaps in a new case's description is an approach that we call constructive similarity assessment. Constructive similarity assessment treats similarity assessment not as a simple comparison between fixed new and old cases, but as a process for deciding which types of features should be investigated in the new situation and, if the features are borne out by other knowledge, added to the description of the current case. Constructive similarity assessment does not merely compare new cases to old: using prior cases as its guide, it dynamically carves augmented descriptions of new cases out of memory. ",
"neighbors": [
166,
817,
818,
857,
1483,
1496
],
"mask": "Validation"
},
{
"node_id": 1126,
"label": 0,
"text": "Title: Towards A Computer Model of Memory Search Strategy Learning \nAbstract: Much recent research on modeling memory processes has focused on identifying useful indices and retrieval strategies to support particular memory tasks. Another important question concerning memory processes, however, is how retrieval criteria are learned. This paper examines the issues involved in modeling the learning of memory search strategies. It discusses the general requirements for appropriate strategy learning and presents a model of memory search strategy learning applied to the problem of retrieving relevant information for adapting cases in case-based reasoning. It discusses an implementation of that model, and, based on the lessons learned from that implementation, points towards issues and directions in refining the model. ",
"neighbors": [
580,
583,
923,
1212,
1497,
2371,
2372,
2489
],
"mask": "Train"
},
{
"node_id": 1127,
"label": 1,
"text": "Title: Recombination Operator, its Correlation to the Fitness Landscape and Search Performance \nAbstract: The author reserves all other publication and other rights in association with the copyright in the thesis, and except as hereinbefore provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatever without the author's prior written permission. ",
"neighbors": [
163,
728,
793,
938,
1153
],
"mask": "Train"
},
{
"node_id": 1128,
"label": 3,
"text": "Title: On Structured Variational Approximations \nAbstract: The problem of approximating a probability distribution occurs frequently in many areas of applied mathematics, including statistics, communication theory, machine learning, and the theoretical analysis of complex systems such as neural networks. Saul and Jordan (1996) have recently proposed a powerful method for efficiently approximating probability distributions known as structured variational approximations. In structured variational approximations, exact algorithms for probability computation on tractable substructures are combined with variational methods to handle the interactions between the substructures which make the system as a whole intractable. In this note, I present a mathematical result which can simplify the derivation of struc tured variational approximations in the exponential family of distributions.",
"neighbors": [
76,
1288,
1393
],
"mask": "Train"
},
{
"node_id": 1129,
"label": 2,
"text": "Title: A Self-Organizing Binary Decision Tree For Incrementally Defined Rule Based \nAbstract: This paper presents an ASOCS (adaptive self-organizing concurrent system) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. This paper focuses on adaptive algorithm 3 (AA3) and details its architecture and learning algorithm. It has advantages over previous ASOCS models in simplicity, implementability, and cost. An ASOCS can operate in either a data processing mode or a learning mode. During the data processing mode, an ASOCS acts as a parallel hardware circuit. In learning mode, rules expressed as boolean conjunctions are incrementally presented to the ASOCS. All ASOCS learning algorithms incorporate a new rule in a distributed fashion in a short, bounded time. ",
"neighbors": [
26,
809,
814,
919,
1080,
1190,
1222
],
"mask": "Train"
},
{
"node_id": 1130,
"label": 1,
"text": "Title: Dynamic Hill Climbing: Overcoming the limita- tions of optimization techniques \nAbstract: This paper describes a novel search algorithm, called dynamic hill climbing, that borrows ideas from genetic algorithms and hill climbing techniques. Unlike both genetic and hill climbing algorithms, dynamic hill climbing has the ability to dynamically change its coordinate frame during the course of an optimization. Furthermore, the algorithm moves from a coarse-grained search to a fine-grained search of the function space by changing its mutation rate and uses a diversity-based distance metric to ensure that it searches new regions of the space. Dynamic hill climbing is empirically compared to a traditional genetic algorithm using De Jong's well-known five function test suite [4] and is shown to vastly surpass the performance of the genetic algorithm, often finding better solutions using only 1% as many function evaluations. ",
"neighbors": [
163,
959,
1334
],
"mask": "Test"
},
{
"node_id": 1131,
"label": 1,
"text": "Title: ADAPTIVE TESTING OF CONTROLLERS FOR AUTONOMOUS VEHICLES \nAbstract: Autonomous vehicles are likely to require sophisticated software controllers to maintain vehicle performance in the presence of vehicle faults. The test and evaluation of complex software controllers is expected to be a challenging task. The goal of this e ffort is to apply machine learning techniques from the field of arti ficial intelligence to the general problem of evaluating an intelligent controller for an autonomous vehicle. The approach involves subjecting a controller to an adaptively chosen set of fault scenarios within a vehicle simulator, and searching for combinations of faults that produce noteworthy performance by the vehicle controller. The search employs a genetic algorithm. We illustrate the approach by evaluating the performance of a subsumption-based controller for an autonomous vehicle. The preliminary evidence suggests that this approach is an e ffective alternative to manual testing of sophisticated software controllers. ",
"neighbors": [
910,
966,
1253
],
"mask": "Test"
},
{
"node_id": 1132,
"label": 6,
"text": "Title: A Theory of Unsupervised Speedup Learning \nAbstract: Speedup learning seeks to improve the efficiency of search-based problem solvers. In this paper, we propose a new theoretical model of speedup learning which captures systems that improve problem solving performance by solving a user-given set of problems. We also use this model to motivate the notion of \"batch problem solving,\" and argue that it is more congenial to learning than sequential problem solving. Our theoretical results are applicable to all serially decomposable domains. We empirically validate our results in the domain of Eight Puzzle. 1 ",
"neighbors": [
1309
],
"mask": "Train"
},
{
"node_id": 1133,
"label": 3,
"text": "Title: A Fast Non-Parametric Density Estimation Algorithm \nAbstract: Non-parametric density estimation is the problem of approximating the values of a probability density function, given samples from the associated distribution. Non-parametric estimation finds applications in discriminant analysis, cluster analysis, and flow calculations based on Smoothed Particle Hydrodynamics. Usual estimators make use of kernel functions, and require on the order of n 2 arithmetic operations to evaluate the density at n sample points. We describe a sequence of special weight functions which requires almost linear number of operations in n for the same computation. ",
"neighbors": [
719,
1666
],
"mask": "Test"
},
{
"node_id": 1134,
"label": 1,
"text": "Title: Discontinuity in evolution: how different levels of organization imply pre-adaptation \nAbstract: Non-parametric density estimation is the problem of approximating the values of a probability density function, given samples from the associated distribution. Non-parametric estimation finds applications in discriminant analysis, cluster analysis, and flow calculations based on Smoothed Particle Hydrodynamics. Usual estimators make use of kernel functions, and require on the order of n 2 arithmetic operations to evaluate the density at n sample points. We describe a sequence of special weight functions which requires almost linear number of operations in n for the same computation. ",
"neighbors": [
1264,
2281
],
"mask": "Test"
},
{
"node_id": 1135,
"label": 6,
"text": "Title: Learning First-Order Acyclic Horn Programs from Entailment \nAbstract: In this paper, we consider learning first-order Horn programs from entailment. In particular, we show that any subclass of first-order acyclic Horn programs with constant arity is exactly learnable from equivalence and entailment membership queries provided it allows a polynomial-time subsumption procedure and satisfies some closure conditions. One consequence of this is that first-order acyclic determinate Horn programs with constant arity are exactly learnable from equiv alence and entailment membership queries.",
"neighbors": [
1174,
1442,
1444
],
"mask": "Train"
},
{
"node_id": 1136,
"label": 1,
"text": "Title: Using Neural Networks and Genetic Algorithms as Heuristics for NP-Complete Problems \nAbstract: Paradigms for using neural networks (NNs) and genetic algorithms (GAs) to heuristically solve boolean satisfiability (SAT) problems are presented. Since SAT is NP-Complete, any other NP-Complete problem can be transformed into an equivalent SAT problem in polynomial time, and solved via either paradigm. This technique is illustrated for hamiltonian circuit (HC) problems. ",
"neighbors": [
163,
727,
800,
935,
1018,
1030,
1060,
1063,
1142,
1286,
1333,
1516,
1523,
1558,
1575,
1594,
1740
],
"mask": "Train"
},
{
"node_id": 1137,
"label": 4,
"text": "Title: Learning Conventions in Multiagent Stochastic Domains using Likelihood Estimates \nAbstract: Fully cooperative multiagent systemsthose in which agents share a joint utility modelis of special interest in AI. A key problem is that of ensuring that the actions of individual agents are coordinated, especially in settings where the agents are autonomous decision makers. We investigate approaches to learning coordinated strategies in stochastic domains where an agent's actions are not directly observable by others. Much recent work in game theory has adopted a Bayesian learning perspective to the more general problem of equilibrium selection, but tends to assume that actions can be observed. We discuss the special problems that arise when actions are not observable, including effects on rates of convergence, and the effect of action failure probabilities and asymmetries. We also use likelihood estimates as a means of generalizing fictitious play learning models in our setting. Finally, we propose the use of maximum likelihood as a means of removing strategies from consideration, with the aim of convergence to a conventional equilibrium, at which point learning and deliberation can cease.",
"neighbors": [
558,
1459,
1687
],
"mask": "Train"
},
{
"node_id": 1138,
"label": 0,
"text": "Title: Learning Generic Mechanisms from Experiences for Analogical Reasoning \nAbstract: Humans appear to often solve problems in a new domain by transferring their expertise from a more familiar domain. However, making such cross-domain analogies is hard and often requires abstractions common to the source and target domains. Recent work in case-based design suggests that generic mechanisms are one type of abstractions used by designers. However, one important yet unexplored issue is where these generic mechanisms come from. We hypothesize that they are acquired incrementally from problem-solving experiences in familiar domains by generalization over patterns of regularity. Three important issues in generalization from experiences are what to generalize from an experience, how far to generalize, and what methods to use. In this paper, we show that mental models in a familiar domain provide the content, and together with the problem-solving context in which learning occurs, also provide the constraints for learning generic mechanisms from design experiences. In particular, we show how the model-based learning method integrated with similarity-based learning addresses the issues in generalization from experiences. ",
"neighbors": [
806,
1121,
1344,
1420,
1597,
2706
],
"mask": "Train"
},
{
"node_id": 1139,
"label": 1,
"text": "Title: Optimal Mutation Rates in Genetic Search \nAbstract: The optimization of a single bit string by means of iterated mutation and selection of the best (a (1+1)-Genetic Algorithm) is discussed with respect to three simple fitness functions: The counting ones problem, a standard binary encoded integer, and a Gray coded integer optimization problem. A mutation rate schedule that is optimal with respect to the success probability of mutation is presented for each of the objective functions, and it turns out that the standard binary code can hamper the search process even in case of unimodal objective functions. While normally a mutation rate of 1=l (where l denotes the bit string length) is recommendable, our results indicate that a variation of the mutation rate is useful in cases where the fitness function is a multimodal pseudo-boolean function, where multimodality may be caused by the objective function as well as the encoding mechanism.",
"neighbors": [
163,
780,
793,
1018,
1572,
1598
],
"mask": "Train"
},
{
"node_id": 1140,
"label": 1,
"text": "Title: LEARNING ROBOT BEHAVIORS USING GENETIC ALGORITHMS \nAbstract: Genetic Algorithms are used to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the The approach to learning behaviors for robots described here reflects a particular methodology for learning via a simulation model. The motivation is that making mistakes on real systems may be costly or dangerous. In addition, time constraints might limit the number of experiences during learning in the real world, while in many cases, the simulation model can be made to run faster than real time. Since learning may require experimenting with behaviors that might occasionally produce unacceptable results if applied to the real world, or might require too much time in the real environment, we assume that hypothetical behaviors will be evaluated in a simulation model (the off-line system). As illustrated in Figure 1, the current best behavior can be placed in the real, on-line system, while learning continues in the off-line system [1]. The learning algorithm was designed to learn useful behaviors from simulations of limited fidelity. The expectation is that behaviors learned in these simulations will be useful in real-world environments. Previous studies have illustrated that knowledge learned under simulation is robust and might be applicable to the real world if the simulation is more general (i.e. has more noise, more varied conditions, etc.) than the real world environment [2]. Where this is not possible, it is important to identify the differences between the simulation and the world and note the effect upon the learning process. The research reported here continues to examine this hypothesis. The next section very briefly explains the learning algorithm (and gives pointers to where more extensive documentation can be found). After that, the actual robot is described. Then we describe the simulation of the robot. The task _______________ actual robot.",
"neighbors": [
811,
910,
964,
965,
966,
981,
1311,
2294
],
"mask": "Train"
},
{
"node_id": 1141,
"label": 3,
"text": "Title: Bayesian Graphical Modeling for Intelligent Tutoring Systems \nAbstract: Conventional Intelligent Tutoring Systems (ITS) do not acknowledge uncertainty about the student's knowledge. Yet, both the outcome of any teaching intervention and the exact state of the student's knowledge are uncertain. In recent years, researchers have made startling progress in the management of uncertainty in knowledge-based systems. Building on these developments, we describe an ITS architecture that explicitly models uncertainty. This will facilitate more accurate student modeling and provide ITS's which can learn.",
"neighbors": [
1172,
1240,
1241
],
"mask": "Test"
},
{
"node_id": 1142,
"label": 1,
"text": "Title: A NN Algorithm for Boolean Satisfiability Problems \nAbstract: Satisfiability (SAT) refers to the task of finding a truth assignment that makes an arbitrary boolean expression true. This paper compares a neural network algorithm (NNSAT) with GSAT [4], a greedy algorithm for solving satisfiability problems. GSAT can solve problem instances that are difficult for traditional satisfiability algorithms. Results suggest that NNSAT scales better as the number of variables increase, solving at least as many hard SAT problems.",
"neighbors": [
1018,
1136
],
"mask": "Train"
},
{
"node_id": 1143,
"label": 1,
"text": "Title: Neural Networks in an Artificial Life Perspective \nAbstract: In the last few years several researchers within the Artificial Life and Mobile Robotics community used Artificial Neural Networks. Explicitly viewing Neural Networks in an Artificial Life perspective has a number of consequences that make research on what we will call Artificial Life Neural Networks ( ALNNs) rather different from traditional connectionist research. The aim of the paper is to make the differences between ALNNs and \"classical\" neural networks explicit.",
"neighbors": [
1404,
2165,
2429
],
"mask": "Train"
},
{
"node_id": 1144,
"label": 2,
"text": "Title: VIEWNET ARCHITECTURES FOR INVARIANT 3-D OBJECT LEARNING AND RECOGNITION FROM MULTIPLE 2-D VIEWS \nAbstract: 3 The recognition of 3-D objects from sequences of their 2-D views is modeled by a family of self-organizing neural architectures, called VIEWNET, that use View Information Encoded With NETworks. VIEWNET incorporates a preprocessor that generates a compressed but 2-D invariant representation of an image, a supervised incremental learning system (Fuzzy ARTMAP) that classifies the preprocessed representations into 2-D view categories whose outputs are combined into 3-D invariant object categories, and a working memory that makes a 3-D object prediction by accumulating evidence over time from 3-D object category nodes as multiple 2-D views are experienced. VIEWNET was benchmarked on an MIT Lincoln Laboratory database of 128x128 2-D views of aircraft, including small frontal views, with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view and of up to 98.5% correct with three 2-D views. The properties of 2-D view and 3-D object category nodes are compared with those of cells in monkey inferotemporal cortex. ",
"neighbors": [
592,
1509
],
"mask": "Test"
},
{
"node_id": 1145,
"label": 2,
"text": "Title: A Unifying View of Some Training Algorithms for Multilayer Perceptrons with FIR Filter Synapses \nAbstract: Recent interest has come about in deriving various neural network architectures for modelling time-dependent signals. A number of algorithms have been published for multilayer perceptrons with synapses described by finite impulse response (FIR) and infinite impulse response (IIR) filters (the latter case is also known as Locally Recurrent Globally Feedforward Networks). The derivations of these algorithms have used different approaches in calculating the gradients, and in this note, we present a short, but unifying account of how these different algorithms compare for the FIR case, both in derivation, and performance. New algorithms are subsequently presented. Simulation results have been performed to benchmark these algorithms. In this note, results are compared for the Mackey-Glass chaotic time series against a number of other methods including a standard multilayer perceptron, and a local approximation method. ",
"neighbors": [
1323
],
"mask": "Train"
},
{
"node_id": 1146,
"label": 2,
"text": "Title: The optimal number of learning samples and hidden units in function approximation with a feedforward network \nAbstract: This paper presents a methodology to estimate the optimal number of learning samples and the number of hidden units needed to obtain a desired accuracy of a function approximation by a feedforward network. The representation error and the generalization error, components of the total approximation error are analyzed and the approximation accuracy of a feedforward network is investigated as a function of the number of hidden units and the number of learning samples. Based on the asymptotical behavior of the approximation error, an asymptotical model of the error function (AMEF) is introduced of which the parameters can be determined experimentally. An alternative model of the error function, which include theoretical results about general bounds of approximation, is also analyzed. In combination with knowledge about the computational complexity of the learning rule an optimal learning set size and number of hidden units can be found resulting in a minimum computation time for a given desired precision of the approximation. This approach was applied to optimize the learning of the camera-robot mapping of a visually guided robot arm and a complex logarithm function approximation. ",
"neighbors": [
820,
1676
],
"mask": "Train"
},
{
"node_id": 1147,
"label": 3,
"text": "Title: Decomposable graphical Gaussian model determination \nAbstract: We propose a methodology for Bayesian model determination in decomposable graphical Gaussian models. To achieve this aim we consider a hyper inverse Wishart prior distribution on the concentration matrix for each given graph. To ensure compatibility across models, such prior distributions are obtained by marginalisation from the prior conditional on the complete graph. We explore alternative structures for the hyperparameters of the latter, and their consequences for the model. Model determination is carried out by implementing a reversible jump MCMC sampler. In particular, the dimension-changing move we propose involves adding or dropping an edge from the graph. We characterise the set of moves which preserve the decomposability of the graph, giving a fast algorithm for maintaining the junction tree representation of the graph at each sweep. As state variable, we propose to use the incomplete variance-covariance matrix, containing only the elements for which the corresponding element of the inverse is nonzero. This allows all computations to be performed locally, at the clique level, which is a clear advantage for the analysis of large and complex data-sets. Finally, the statistical and computational performance of the procedure is illustrated by means of both artificial and real multidimensional data-sets. ",
"neighbors": [
161,
772,
1240,
1241,
1347
],
"mask": "Train"
},
{
"node_id": 1148,
"label": 0,
"text": "Title: Opportunistic Reasoning: A Design Perspective \nAbstract: An essential component of opportunistic behavior is opportunity recognition, the recognition of those conditions that facilitate the pursuit of some suspended goal. Opportunity recognition is a special case of situation assessment, the process of sizing up a novel situation. The ability to recognize opportunities for reinstating suspended problem contexts (one way in which goals manifest themselves in design) is crucial to creative design. In order to deal with real world opportunity recognition, we attribute limited inferential power to relevant suspended goals. We propose that goals suspended in the working memory monitor the internal (hidden) representations of the currently recognized objects. A suspended goal is satisfied when the current internal representation and a suspended goal match. We propose a computational model for working memory and we compare it with other relevant theories of opportunistic planning. This working memory model is implemented as part of our IMPROVISER system. ",
"neighbors": [
30,
285,
486,
1355,
1534,
1597
],
"mask": "Train"
},
{
"node_id": 1149,
"label": 2,
"text": "Title: What Size Neural Network Gives Optimal Generalization? Convergence Properties of Backpropagation \nAbstract: Technical Report UMIACS-TR-96-22 and CS-TR-3617 Institute for Advanced Computer Studies University of Maryland College Park, MD 20742 Abstract One of the most important aspects of any machine learning paradigm is how it scales according to problem size and complexity. Using a task with known optimal training error, and a pre-specified maximum number of training updates, we investigate the convergence of the backpropagation algorithm with respect to a) the complexity of the required function approximation, b) the size of the network in relation to the size required for an optimal solution, and c) the degree of noise in the training data. In general, for a) the solution found is worse when the function to be approximated is more complex, for b) oversized networks can result in lower training and generalization error in certain cases, and for c) the use of committee or ensemble techniques can be more beneficial as the level of noise in the training data is increased. For the experiments we performed, we do not obtain the optimal solution in any case. We further support the observation that larger networks can produce better training and generalization error using a face recognition example where a network with many more parameters than training points generalizes better than smaller networks. ",
"neighbors": [
912,
1150,
1323,
1891,
2044
],
"mask": "Test"
},
{
"node_id": 1150,
"label": 2,
"text": "Title: Lessons in Neural Network Training: Overfitting Lessons in Neural Network Training: Overfitting May be Harder\nAbstract: For many reasons, neural networks have become very popular AI machine learning models. Two of the most important aspects of machine learning models are how well the model generalizes to unseen data, and how well the model scales with problem complexity. Using a controlled task with known optimal training error, we investigate the convergence of the backpropagation (BP) algorithm. We find that the optimal solution is typically not found. Furthermore, we observe that networks larger than might be expected can result in lower training and generalization error. This result is supported by another real world example. We further investigate the training behavior by analyzing the weights in trained networks (excess degrees of freedom are seen to do little harm and to aid convergence), and contrasting the interpolation characteristics of multi-layer perceptron neural networks (MLPs) and polynomial models (overfitting behavior is very different the MLP is often biased towards smoother solutions). Finally, we analyze relevant theory outlining the reasons for significant practical differences. These results bring into question common beliefs about neural network training regarding convergence and optimal network size, suggest alternate guidelines for practical use (lower fear of excess degrees of freedom), and help to direct future work (e.g. methods for creation of more parsimonious solutions, importance of the MLP/BP bias and possibly worse performance of improved training algorithms). ",
"neighbors": [
912,
1149,
1323,
1630
],
"mask": "Train"
},
{
"node_id": 1151,
"label": 5,
"text": "Title: Learning Classification Rules Using Lattices \nAbstract: This paper presents a novel induction algorithm, Rulearner, which induces classification rules using a Galois lattice as an explicit map through the search space of rules. The construction of lattices from data is initially discussed and the use of these structures in inducing classification rules is examined. The Rulearner system is shown to compare favorably with commonly used symbolic learning methods which use heursitics rather than an explicit map to guide their search through the rule space. Furthermore, our learning system is shown to be robust in the presence of noisy data. The Rulearner system is also capable of learning both decision lists as well as unordered rule sets and thus allows for comparisons of these different learning paradigms within the same algorithmic framework.",
"neighbors": [
1350
],
"mask": "Train"
},
{
"node_id": 1152,
"label": 0,
"text": "Title: Fish and Shrink. A next step towards e-cient case retrieval in large scaled case bases \nAbstract: Keywords: Case-Based Reasoning, case retrieval, case representation This paper deals with the retrieval of useful cases in case-based reasoning. It focuses on the questions of what \"useful\" could mean and how the search for useful cases can be organized. We present the new search algorithm Fish and Shrink that is able to search quickly through the case base, even if the aspects that deflne usefulness are spontaneously combined at query time. We compare Fish and Shrink to other algorithms and show that most of them make an implicit closed world assumption. We flnally refer to a realization of the presented idea in the context of the prototype of the FABEL-Project 1 . The scenery is as follows. Previously collected cases are stored in a large scaled case base. An expert describes his problem and gives the aspects in which the requested case should be similar. The similarity measure thus given spontaneously shall now be used to explore the case base within a short time, shall present a required number of cases and make sure that none of the other cases is more similar. The question is now how to prepare the previously collected cases and how to deflne a retrieval algorithm which is able to deal with sponta neously user-deflned similarity measures.",
"neighbors": [
1453
],
"mask": "Test"
},
{
"node_id": 1153,
"label": 1,
"text": "Title: Evolution in Time and Space The Parallel Genetic Algorithm \nAbstract: The parallel genetic algorithm (PGA) uses two major modifications compared to the genetic algorithm. Firstly, selection for mating is distributed. Individuals live in a 2-D world. Selection of a mate is done by each individual independently in its neighborhood. Secondly, each individual may improve its fitness during its lifetime by e.g. local hill-climbing. The PGA is totally asynchronous, running with maximal efficiency on MIMD parallel computers. The search strategy of the PGA is based on a small number of active and intelligent individuals, whereas a GA uses a large population of passive individuals. We will investigate the PGA with deceptive problems and the traveling salesman problem. We outline why and when the PGA is succesful. Abstractly, a PGA is a parallel search with information exchange between the individuals. If we represent the optimization problem as a fitness landscape in a certain configuration space, we see, that a PGA tries to jump from two local minima to a third, still better local minima, by using the crossover operator. This jump is (probabilistically) successful, if the fitness landscape has a certain correlation. We show the correlation for the traveling salesman problem by a configuration space analysis. The PGA explores implicitly the above correlation.",
"neighbors": [
163,
856,
942,
1063,
1065,
1070,
1077,
1113,
1127,
1204,
1205,
1219,
1257,
1410,
1455,
1611,
1675
],
"mask": "Test"
},
{
"node_id": 1154,
"label": 0,
"text": "Title: Case-Based Learning: Beyond Classification of Feature Vectors \nAbstract: The dominant theme of case-based research at recent ML conferences has been on classifying cases represented by feature vectors. However, other useful tasks can be targeted, and other representations are often preferable. We review the recent literature on case-based learning, focusing on alternative performance tasks and more expressive case representations. We also highlight topics in need of additional research. ",
"neighbors": [
819,
1531
],
"mask": "Train"
},
{
"node_id": 1155,
"label": 0,
"text": "Title: Memory-Based Lexical Acquisition and Processing \nAbstract: Current approaches to computational lexicology in language technology are knowledge-based (competence-oriented) and try to abstract away from specific formalisms, domains, and applications. This results in severe complexity, acquisition and reusability bottlenecks. As an alternative, we propose a particular performance-oriented approach to Natural Language Processing based on automatic memory-based learning of linguistic (lexical) tasks. The consequences of the approach for computational lexicology are discussed, and the application of the approach on a number of lexical acquisition and disambiguation tasks in phonology, morphology and syntax is described.",
"neighbors": [
783,
785,
862,
1328,
1407,
1601,
1812
],
"mask": "Test"
},
{
"node_id": 1156,
"label": 3,
"text": "Title: A NEW SEQUENTIAL SIMULATED ANNEALING METHOD \nAbstract: Let H be a function not explicitly defined, but approximable by a sequence (H n ) n0 of functional estimators. In this context we propose a new sequential algorithm to optimise asymptotically H using stepwise estimators H n . We prove under mild conditions the almost sure convergence in law of this algorithm. ",
"neighbors": [
1013
],
"mask": "Validation"
},
{
"node_id": 1157,
"label": 2,
"text": "Title: Some Competitive Learning Methods (Some additions and refinements are planned for \nAbstract: Let H be a function not explicitly defined, but approximable by a sequence (H n ) n0 of functional estimators. In this context we propose a new sequential algorithm to optimise asymptotically H using stepwise estimators H n . We prove under mild conditions the almost sure convergence in law of this algorithm. ",
"neighbors": [
687,
741,
745,
1700,
1704
],
"mask": "Train"
},
{
"node_id": 1158,
"label": 3,
"text": "Title: Stochastic Complexity Based Estimation of Missing Elements in Questionnaire Data \nAbstract: In this paper we study a new information-theoretically justified approach to missing data estimation for multivariate categorical data. The approach discussed is a model-based imputation procedure relative to a model class (i.e., a functional form for the probability distribution of the complete data matrix), which in our case is the set of multinomial models with some independence assumptions. Based on the given model class assumption an information-theoretic criterion can be derived to select between the different complete data matrices. Intuitively this general criterion, called stochastic complexity, represents the shortest code length needed for coding the complete data matrix relative to the model class chosen. Using this information-theoretic criteria, the missing data problem is reduced to a search problem, i.e., finding the data completion with minimal stochastic complexity. In the experimental part of the paper we present empirical results of the approach using two real data sets, and compare these results to those achived by commonly used techniques such as case deletion and imputating sample averages. ",
"neighbors": [
1550,
1555
],
"mask": "Validation"
},
{
"node_id": 1159,
"label": 1,
"text": "Title: An evolutionary tabu search algorithm and the NHL scheduling problem \nAbstract: We present in this paper a new evolutionary procedure for solving general optimization problems that combines efficiently the mechanisms of genetic algorithms and tabu search. In order to explore the solution space properly interaction phases are interspersed with periods of optimization in the algorithm. An adaptation of this search principle to the National Hockey League (NHL) problem is discussed. The hybrid method developed in this paper is well suited for Open Shop Scheduling problems (OSSP). The results obtained appear to be quite satisfactory. ",
"neighbors": [
163,
1485,
2564
],
"mask": "Test"
},
{
"node_id": 1160,
"label": 6,
"text": "Title: PFSA Modelling of Behavioural Sequences by Evolutionary Programming Rockhampton, Queensland. (1994) \"PFSA Modelling of Behavioural\nAbstract: Behavioural observations can often be described as a sequence of symbols drawn from a finite alphabet. However the inductive inference of such strings by any automated technique to produce models of the data is a nontrivial task. This paper considers modelling of behavioural data using probabilistic finite state automata (PFSAs). There are a number of information-theoretic techniques for evaluating possible hypotheses. The measure used in this paper is the Minimum Message Length (MML) of Wallace. Although attempts have been made to construct PFSA models by incremental addition of substrings using heuristic rules and the MML to give the lowest information cost, the resultant models cannot be shown to be globally optimal. Fogel's Evolutionary Programming can produce globally optimal PFSA models by evolving data structures of arbitrary complexity without the requirement to encode the PFSA into binary strings as in Genetic Algorithms. However, evaluation of PFSAs during the evolution process by the MML of the PFSA alone is not possible since there will be symbols which cannot be consumed by a partially correct solution. It is suggested that the addition of a \"can't consume'' symbol to the symbol alphabet obviates this difficulty. The addition of this null symbol to the alphabet also permits the evolution of explanatory models which need not explain all of the data, a useful property to avoid overfitting noisy data. Results are given for a test set for which the optimal pfsa model is known and for a set of eye glance data derived from an instrument panel simulator.",
"neighbors": [
1166
],
"mask": "Train"
},
{
"node_id": 1161,
"label": 6,
"text": "Title: Inductive Learning by Selection of Minimal Complexity Representations \nAbstract: Behavioural observations can often be described as a sequence of symbols drawn from a finite alphabet. However the inductive inference of such strings by any automated technique to produce models of the data is a nontrivial task. This paper considers modelling of behavioural data using probabilistic finite state automata (PFSAs). There are a number of information-theoretic techniques for evaluating possible hypotheses. The measure used in this paper is the Minimum Message Length (MML) of Wallace. Although attempts have been made to construct PFSA models by incremental addition of substrings using heuristic rules and the MML to give the lowest information cost, the resultant models cannot be shown to be globally optimal. Fogel's Evolutionary Programming can produce globally optimal PFSA models by evolving data structures of arbitrary complexity without the requirement to encode the PFSA into binary strings as in Genetic Algorithms. However, evaluation of PFSAs during the evolution process by the MML of the PFSA alone is not possible since there will be symbols which cannot be consumed by a partially correct solution. It is suggested that the addition of a \"can't consume'' symbol to the symbol alphabet obviates this difficulty. The addition of this null symbol to the alphabet also permits the evolution of explanatory models which need not explain all of the data, a useful property to avoid overfitting noisy data. Results are given for a test set for which the optimal pfsa model is known and for a set of eye glance data derived from an instrument panel simulator.",
"neighbors": [
1560,
1592,
1702,
2324,
2423,
2657
],
"mask": "Train"
},
{
"node_id": 1162,
"label": 3,
"text": "Title: Signal Processing and Communications Reversible Jump Sampler for Autoregressive Time Series, Employing Full Conditionals to\nAbstract: Technical Report CUED/F-INFENG/TR. 304 We use reversible jump Markov chain Monte Carlo (MCMC) methods (Green 1995) to address the problem of model order uncertainty in au-toregressive (AR) time series within a Bayesian framework. Efficient model jumping is achieved by proposing model space moves from the full conditional density for the AR parameters, which is obtained analytically. This is compared with an alternative method, for which the moves are cheaper to compute, in which proposals are made only for the new parameters in each move. Results are presented for both synthetic and audio time series. ",
"neighbors": [
1613
],
"mask": "Test"
},
{
"node_id": 1163,
"label": 0,
"text": "Title: Case-Based Planning to Learn \nAbstract: Learning can be viewed as a problem of planning a series of modifications to memory. We adopt this view of learning and propose the applicability of the case-based planning methodology to the task of planning to learn. We argue that relatively simple, fine-grained primitive inferential operators are needed to support flexible planning. We show that it is possible to obtain the benefits of case-based reasoning within a planning to learn framework.",
"neighbors": [
1497,
1498,
1534
],
"mask": "Test"
},
{
"node_id": 1164,
"label": 6,
"text": "Title: PAC Analyses of a `Similarity Learning' IBL Algorithm \nAbstract: V S-CBR [14] is a simple instance-based learning algorithm that adjusts a weighted similarity measure as well as collecting cases. This paper presents a `PAC' analysis of V S-CBR, motivated by the PAC learning framework, which demonstrates two main ideas relevant to the study of instance-based learners. Firstly, the hypothesis spaces of a learner on different target concepts can be compared to predict the difficulty of the target concepts for the learner. Secondly, it is helpful to consider the `constituent parts' of an instance-based learner: to explore separately how many examples are needed to infer a good similarity measure and how many examples are needed for the case base. Applying these approaches, we show that V S-CBR learns quickly if most of the variables in the representation are irrelevant to the target concept and more slowly if there are more relevant variables. The paper relates this overall behaviour to the behaviour of the constituent parts of V S-CBR.",
"neighbors": [
1570,
1584,
1626
],
"mask": "Train"
},
{
"node_id": 1165,
"label": 5,
"text": "Title: Discovering Compressive Partial Determinations in Mixed Numerical and Symbolic Domains \nAbstract: Partial determinations are an interesting form of dependency between attributes in a relation. They generalize functional dependencies by allowing exceptions. We modify a known MDL formula for evaluating such partial determinations to allow for its use in an admissible heuristic in exhaustive search. Furthermore we describe an efficient preprocessing-based approach for handling numerical attributes. An empirical investigation tries to evaluate the viability of the presented ideas.",
"neighbors": [
430,
1327
],
"mask": "Train"
},
{
"node_id": 1166,
"label": 6,
"text": "Title: Assessment of candidate pfsa models induced from symbol datasets \nAbstract: The induction of the optimal finite state machine explanation from symbol strings is known to be at least NP-complete. However, satisfactory approximately optimal explanations may be found by the use of Evolutionary Programming. It has been shown that an information theoretic measure of finite state machine explanations can be used as the fitness function required for the evaluation of candidate explanations during the search for a near-optimal explanation. It is not obvious from the measure which class of explanation will be favoured over others during the search. By empirical studies it is possible to gain some insight into the dimensions the measure is optimising. In general, for probabilistic finite state machines, explanations assessed by a minimum message length estimator with the minimum number of transitions will be favoured over other explanations. The information measure will also favour explanations with uneven distributions of frequencies on transitions from a node suggesting that repeated sequences in symbol strings will be preferred as an explanation. Approximate bounds for acceptance of explanations and the length of string required for induction to be successful are also derived by considerations of the simplest possible and random explanations and their information measure. ",
"neighbors": [
1160
],
"mask": "Train"
},
{
"node_id": 1167,
"label": 1,
"text": "Title: Evolving Globally Synchronized Cellular Automata \nAbstract: How does an evolutionary process interact with a decentralized, distributed system in order to produce globally coordinated behavior? Using a genetic algorithm (GA) to evolve cellular automata (CAs), we show that the evolution of spontaneous synchronization, one type of emergent coordination, takes advantage of the underlying medium's potential to form embedded particles. The particles, typically phase defects between synchronous regions, are designed by the evolutionary process to resolve frustrations in the global phase. We describe in detail one typical solution discovered by the GA, delineating the discovered synchronization algorithm in terms of embedded particles and their interactions. We also use the particle-level description to analyze the evolutionary sequence by which this solution was discovered. Our results have implications both for understanding emergent collective behavior in natural systems and for the automatic programming of decentralized spatially extended multiprocessor systems. ",
"neighbors": [
1330,
1331,
1332
],
"mask": "Test"
},
{
"node_id": 1168,
"label": 3,
"text": "Title: Bootstrapping Z Estimators \nAbstract: We prove a general bootstrap theorem for possibly infinite-dimensional Zestimators which builds on the recent infinite-dimensional Ztheorem due to Van der Vaart (1995). Our result extends finite-dimensional results of this type for the bootstrap due to Arcones and Gine (1992), Lele (1991), and Newton and Raftery (1994). We sketch three examples of models with infinite-dimensional parameter spaces fi as applicatons of our general theorem. ",
"neighbors": [
802
],
"mask": "Train"
},
{
"node_id": 1169,
"label": 2,
"text": "Title: Individual and Collective Prognostic Prediction \nAbstract: The prediction of survival time or recurrence time is an important learning problem in medical domains. The Recurrence Surface Approximation (RSA) method is a natural, effective method for predicting recurrence times using censored input data. This paper introduces the Survival Curve RSA (SC-RSA), an extension to the RSA approach which produces accurate predicted rates of recurrence, while maintaining accuracy on individual predicted recurrence times. The method is applied to the problem of breast cancer recurrence using two different datasets. ",
"neighbors": [
524,
1284,
1454
],
"mask": "Validation"
},
{
"node_id": 1170,
"label": 6,
"text": "Title: Selective sampling using the Query by Committee algorithm Running title: Selective sampling using Query by Committee \nAbstract: We analyze the \"query by committee\" algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with positive lower bound, then the prediction error decreases exponentially with the number of queries. We show that, in particular, this exponential decrease holds for query learning of perceptrons. Keywords: selective sampling, query learning, Bayesian Learning, experimental design fl Yoav Freund, Room 2B-428, AT&T Laboratories, 700 Mountain Ave., Murray Hill, NJ, 07974. Telephone:908-582-3164.",
"neighbors": [
517,
1198
],
"mask": "Train"
},
{
"node_id": 1171,
"label": 2,
"text": "Title: Nonlinear Component Analysis as a Kernel Eigenvalue Problem \nAbstract: A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance the space of all possible 5-pixel products in 16fi16 images. We give the derivation of the method and present first experimental results on polynomial feature extraction for pattern recognition. ",
"neighbors": [
1050
],
"mask": "Train"
},
{
"node_id": 1172,
"label": 3,
"text": "Title: Introduction to the Special Section on Knowledge-Based Construction of Probabilistic and Decision Models (IEEE Transactions\nAbstract: Modeling techniques developed recently in the AI and uncertain reasoning communities permit significantly more flexible specifications of probabilistic knowledge. Specifically, graphical decision-modeling formalisms|belief networks, influence diagrams, and their variants|provide compact representation of probabilistic relationships, and support inference algorithms that automatically exploit the dependence structure in such models [1, 3, 4]. These advances have brought on a resurgence of interest in computational decision systems based on normative theories of belief and preference. However, graphical decision-modeling languages are still quite limited for purposes of knowledge representation because, while they can describe the relationships among particular event instances, they cannot capture general knowledge about probabilistic relationships across classes of events. The inability to capture general knowledge is a serious impediment for those AI tasks in which the relevant factors of a decision problem cannot be enumerated in advance. A graphical decision model encodes a particular set of probabilistic dependencies, a predefined set of decision alternatives, and a specific mathematical form for a utility function. Given a properly specified model, there exist relatively efficient algorithms for calculating posterior probabilities and optimal decision policies. A range of similar cases may be handled by parametric variations of the original model. However, if the structure of dependencies, the set of available alternatives, or the form of utility function changes from situation to situation, then a fixed network representation is no longer adequate. An ideal computational decision system would possess general, broad knowledge of a domain, but would have the ability to reason about the particular circumstances of any given decision problem within the domain. One obvious approach|which we call call knowledge-based model construction (KBMC)|is to generate a decision model dynamically at run-time, based on the problem description and information received thus far. Model construction consists of selection, instantiation, and assembly of causal and associational relationships from a broad knowledge base of general relationships among domain concepts. For example, suppose we wish to develop a system to recommend appropriate actions for maintaining a computer network. The natural graphical decision model would include chance ",
"neighbors": [
623,
915,
1141,
2108,
2341
],
"mask": "Test"
},
{
"node_id": 1173,
"label": 6,
"text": "Title: Dynamical Selection of Learning Algorithms \nAbstract: Determining the conditions for which a given learning algorithm is appropriate is an open problem in machine learning. Methods for selecting a learning algorithm for a given domain have met with limited success. This paper proposes a new approach to predicting a given example's class by locating it in the \"example space\" and then choosing the best learner(s) in that region of the example space to make predictions. The regions of the example space are defined by the prediction patterns of the learners being used. The learner(s) chosen for prediction are selected according to their past performance in that region. This dynamic approach to learning algorithm selection is compared to other methods for selecting from multiple learning algorithms. The approach is then extended to weight rather than select the algorithms according to their past performance in a given region. Both approaches are further evaluated on a set of Determining the conditions for which a given learning algorithm is appropriate is an open problem in machine learning. Methods for selecting a learning algorithm for a given domain (e.g. [Aha92, Breiman84]) or for a portion of the domain ([Brodley93, Brodley94]) have met with limited success. This paper proposes a new approach that dynamically selects a learning algorithm for each example by locating it in the \"example space\" and then choosing the best learner(s) for prediction in that part of the example space. The regions of the example space are formed by the observed prediction patterns of the learners being used. The learner(s) chosen for prediction are selected according to their past performance in that region which is defined by the \"cross-validation history.\" This paper introduces DS, a method for the dynamic selection of a learning algorithm(s). We call it \"dynamic\" because the learning algorithm(s) used to classify a novel example depends on that example. Preliminary experimentation motivated DW, an extension to DS that dynamically weights the learners predictions according to their regional accuracy. Further experimentation compares DS and DW to a collection of other meta-learning strategies such as cross-validation ([Breiman84]) and various forms of stacking ([Wolpert92]). In this phase of the experiementation, the meta-learners have six constituent learners which are heterogeneous in their search and representation methods (e.g. a rule learner, CN2 [Clark89]; a decision tree learner, C4.5 [Quinlan93]; an oblique decision tree learner, OC1 [Murthy93]; an instance-based learner, PEBLS [Cost93]; a k-nearest neighbor learner, ten domains and compared to several other meta-learning strategies.",
"neighbors": [
318,
1328,
2583
],
"mask": "Train"
},
{
"node_id": 1174,
"label": 6,
"text": "Title: LEARNING CONCEPTS BY ASKING QUESTIONS \nAbstract: Tw o important issues in machine learning are explored: the role that memory plays in acquiring new concepts; and the extent to which the learner can take an active part in acquiring these concepts. This chapter describes a program, called Marvin, which uses concepts it has learned previously to learn new concepts. The program forms hypotheses about the concept being learned and tests the hypotheses by asking the trainer questions. Learning begins when the trainer shows Marvin an example of the concept to be learned. The program determines which objects in the example belong to concepts stored in the memory. A description of the new concept is formed by using the information obtained from the memory to generalize the description of the training example. The generalized description is tested when the program constructs new examples and shows these to the trainer, asking if they belong to the target concept. ",
"neighbors": [
303,
414,
893,
902,
1033,
1074,
1102,
1135,
1297
],
"mask": "Train"
},
{
"node_id": 1175,
"label": 1,
"text": "Title: Complex Environments to Complex Behaviors FROM COMPLEX ENVIRONMENTS TO COMPLEX BEHAVIORS \nAbstract: Adaptation of ecological systems to their environments is commonly viewed through some explicit fitness function defined a priori by the experimenter, or measured a posteriori by estimations based on population size and/or reproductive rates. These methods do not capture the role of environmental complexity in shaping the selective pressures that control the adaptive process. Ecological simulations enabled by computational tools such as the Latent Energy Environments (LEE) model allow us to characterize more closely the effects of environmental complexity on the evolution of adaptive behaviors. LEE is described in this paper. Its motivation arises from the need to vary complexity in controlled and predictable ways, without assuming the relationship of these changes to the adaptive behaviors they engender. This goal is achieved through a careful characterization of environments in which different forms of \"energy\" are well-defined. A genetic algorithm using endogenous fitness and local selection is used to model the evolutionary process. Individuals in the population are modeled by neural networks with simple sensory-motor systems, and variations in their behaviors are related to interactions with varying environments. We outline the results of three experiments that analyze different sources of environmental complexity and their effects on the collective behaviors of evolving populations. ",
"neighbors": [
1325,
1628
],
"mask": "Train"
},
{
"node_id": 1176,
"label": 2,
"text": "Title: Distributed Representations and Nested Compositional Structure \nAbstract: Adaptation of ecological systems to their environments is commonly viewed through some explicit fitness function defined a priori by the experimenter, or measured a posteriori by estimations based on population size and/or reproductive rates. These methods do not capture the role of environmental complexity in shaping the selective pressures that control the adaptive process. Ecological simulations enabled by computational tools such as the Latent Energy Environments (LEE) model allow us to characterize more closely the effects of environmental complexity on the evolution of adaptive behaviors. LEE is described in this paper. Its motivation arises from the need to vary complexity in controlled and predictable ways, without assuming the relationship of these changes to the adaptive behaviors they engender. This goal is achieved through a careful characterization of environments in which different forms of \"energy\" are well-defined. A genetic algorithm using endogenous fitness and local selection is used to model the evolutionary process. Individuals in the population are modeled by neural networks with simple sensory-motor systems, and variations in their behaviors are related to interactions with varying environments. We outline the results of three experiments that analyze different sources of environmental complexity and their effects on the collective behaviors of evolving populations. ",
"neighbors": [
254,
1123,
1285,
1354,
1592,
2272
],
"mask": "Test"
},
{
"node_id": 1177,
"label": 5,
"text": "Title: An Efficient Subsumption Algorithm for Inductive Logic Programming \nAbstract: In this paper we investigate the efficiency of - subsumption (` ), the basic provability relation in ILP. As D ` C is NP-complete even if we restrict ourselves to linked Horn clauses and fix C to contain only a small constant number of literals, we investigate in several restrictions of D. We first adapt the notion of determinate clauses used in ILP and show that -subsumption is decidable in polynomial time if D is determinate with respect to C. Secondly, we adapt the notion of k-local Horn clauses and show that - subsumption is efficiently computable for some reasonably small k. We then show how these results can be combined, to give an efficient reasoning procedure for determinate k-local Horn clauses, an ILP-problem recently suggested to be polynomial predictable by Cohen (1993) by a simple counting argument. We finally outline how the -reduction algorithm, an essential part of every lgg ILP-learning algorithm, can be im proved by these ideas.",
"neighbors": [
1180,
1519,
1620,
1627
],
"mask": "Test"
},
{
"node_id": 1178,
"label": 1,
"text": "Title: Strongly Typed Genetic Programming \nAbstract: BBN Technical Report #7866: Abstract Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection [Koza 92]. However, it has the limitation known as \"closure\", i.e. that all the variables, constants, arguments for functions, and values returned from functions must be of the same data type. To correct this deficiency, we introduce a variation of genetic programming called \"strongly typed\" genetic programming (STGP). In STGP, variables, constants, arguments, and returned values can be of any data type with the provision that the data type for each such value be specified beforehand. This allows the initialization process and the genetic operators to only generate syntactically correct parse trees. Key concepts for STGP are generic functions, which are not true strongly typed functions but rather templates for classes of such functions, and generic data types, which are analogous. To illustrate STGP, we present four examples involving vector/matrix manipulation and list manipulation: (1) the multi-dimensional least-squares regression problem, (2) the multi-dimensional Kalman filter, (3) the list manipulation function NTH, and (4) the list manipulation function MAPCAR.",
"neighbors": [
163,
854,
956,
995,
1034,
1099,
1230,
1231,
1232,
1362,
1476,
1688,
1690,
1736,
1737
],
"mask": "Validation"
},
{
"node_id": 1179,
"label": 2,
"text": "Title: Even with Arbitrary Transfer Functions, RCC Cannot Compute Certain FSA \nAbstract: Category: algorithms and architectures | recurrent networks. No part of this paper has been submitted elsewhere. Preference: poster. Abstract Existing proofs demonstrating the computational limitations of the Recurrent Cascade Correlation (RCC) Network (Fahlman, 1991) explicitly limit their results to units having sigmoidal or hard-threshold transfer functions (Giles et al., 1995; and Kremer, 1996). The proof given here shows that, for any given finite, discrete, deterministic transfer function used by the units of an RCC network, there are finite-state automata (FSA) that the network cannot model, no matter how many units are used. The proof applies equally well to continuous transfer functions with a finite number of fixed-points, such as the sigmoid function.",
"neighbors": [
946
],
"mask": "Test"
},
{
"node_id": 1180,
"label": 5,
"text": "Title: Efficient Algorithms for -Subsumption \nAbstract: subsumption is a decidable but incomplete approximation of logic implication, important to inductive logic programming and theorem proving. We show that by context based elimination of possible matches a certain superset of the determinate clauses can be tested for subsumption in polynomial time. We discuss the relation between subsumption and the clique problem, showing in particular that using additional prior knowledge about the substitution space only a small fraction of the search space can be identified as possibly containing globally consistent solutions, which leads to an effective pruning rule. We present empirical results, demonstrating that a combination of both of the above approaches provides an extreme reduction of computational effort.",
"neighbors": [
1177,
1620
],
"mask": "Train"
},
{
"node_id": 1181,
"label": 6,
"text": "Title: Learning Sparse Perceptrons \nAbstract: We introduce a new algorithm designed to learn sparse perceptrons over input representations which include high-order features. Our algorithm, which is based on a hypothesis-boosting method, is able to PAC-learn a relatively natural class of target concepts. Moreover, the algorithm appears to work well in practice: on a set of three problem domains, the algorithm produces classifiers that utilize small numbers of features yet exhibit good generalization performance. Perhaps most importantly, our algorithm generates concept descriptions that are easy for humans to understand.",
"neighbors": [
25,
569,
1431
],
"mask": "Validation"
},
{
"node_id": 1182,
"label": 5,
"text": "Title: Overcoming the myopia of inductive learning algorithms with RELIEFF \nAbstract: Current inductive machine learning algorithms typically use greedy search with limited looka-head. This prevents them to detect significant conditional dependencies between the attributes that describe training objects. Instead of myopic impurity functions and lookahead, we propose to use RELI-EFF, an extension of RELIEF developed by Kira and Rendell [10], [11], for heuristic guidance of inductive learning algorithms. We have reimplemented Assistant, a system for top down induction of decision trees, using RELIEFF as an estimator of attributes at each selection step. The algorithm is tested on several artificial and several real world problems and the results are compared with some other well known machine learning algorithms. Excellent results on artificial data sets and two real world problems show the advantage of the presented approach to inductive learning. ",
"neighbors": [
1010,
1073,
1569,
1578,
1684,
1726
],
"mask": "Validation"
},
{
"node_id": 1183,
"label": 4,
"text": "Title: Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition \nAbstract: This paper describes the MAXQ method for hierarchical reinforcement learning based on a hierarchical decomposition of the value function and derives conditions under which the MAXQ decomposition can represent the optimal value function. We show that for certain execution models, the MAXQ decomposition will produce better policies than Feudal Q learning.",
"neighbors": [
562,
738,
1193,
1202
],
"mask": "Train"
},
{
"node_id": 1184,
"label": 1,
"text": "Title: Causality in Genetic Programming \nAbstract: Causality relates changes in the structure of an object with the effects of such changes, that is changes in the properties or behavior of the object. This paper analyzes the concept of causality in Genetic Programming (GP) and suggests how it can be used in adapting control parameters for speeding up GP search. We first analyze the effects of crossover to show the weak causality of the GP representation and operators. Hierarchical GP approaches based on the discovery and evolution of functions amplify this phenomenon. However, selection gradually retains strongly causal changes. Causality is correlated to search space exploitation and is discussed in the context of the exploration-exploitation tradeoff. The results described argue for a bottom-up GP evolutionary thesis. Finally, new developments based on the idea of GP architecture evolution (Koza, 1994a) are discussed from the causality perspective. ",
"neighbors": [
120,
141,
781,
844,
860,
1362,
1784,
2199
],
"mask": "Validation"
},
{
"node_id": 1185,
"label": 6,
"text": "Title: An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants \nAbstract: Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several variants in conjunction with a decision tree inducer (three variants) and a Naive-Bayes inducer. The purpose of the study is to improve our understanding of why and when these algorithms, which use perturbation, reweighting, and combination techniques, affect classification error. We provide a bias and variance decomposition of the error to show how different methods and variants influence these two terms. This allowed us to determine that Bagging reduced variance of unstable methods, while boosting methods (AdaBoost and Arc-x4) reduced both the bias and variance of unstable methods but increased the variance for Naive-Bayes, which was very stable. We observed that Arc-x4 behaves differently than AdaBoost if reweighting is used instead of resampling, indicating a fundamental difference. Voting variants, some of which are introduced in this paper, include: pruning versus no pruning, use of probabilistic estimates, weight perturbations (Wagging), and backfitting of data. We found that Bagging improves when probabilistic estimates in conjunction with no-pruning are used, as well as when the data was backfit. We measure tree sizes and show an interesting positive correlation between the increase in the average tree size in AdaBoost trials and its success in reducing the error. We compare the mean-squared error of voting methods to non-voting methods and show that the voting methods lead to large and significant reductions in the mean-squared errors. Practical problems that arise in implementing boosting algorithms are explored, including numerical instabilities and underflows. We use scatterplots that graphically show how AdaBoost reweights instances, emphasizing not only \"hard\" areas but also outliers and noise. ",
"neighbors": [
1000,
1521
],
"mask": "Train"
},
{
"node_id": 1186,
"label": 3,
"text": "Title: Rationality and Intelligence \nAbstract: The long-term goal of our field is the creation and understanding of intelligence. Productive research in AI, both practical and theoretical, benefits from a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. The concept of rational agency has long been considered a leading candidate to fulfill this role. This paper outlines a gradual evolution in the formal conception of rationality that brings it closer to our informal conception of intelligence and simultaneously reduces the gap between theory and practice. Some directions for future research are indicated.",
"neighbors": [
492,
591,
1268,
1309
],
"mask": "Validation"
},
{
"node_id": 1187,
"label": 6,
"text": "Title: Rationality and Intelligence \nAbstract: Design and Evaluation of the RISE 1.0 Learning System Pedro Domingos pedrod@ics.uci.edu Technical Report 94-34 August 30, 1994 ",
"neighbors": [
426,
1234
],
"mask": "Train"
},
{
"node_id": 1188,
"label": 2,
"text": "Title: In Estimating analogical similarity by dot-products of Holographic Reduced Representations. \nAbstract: Models of analog retrieval require a computationally cheap method of estimating similarity between a probe and the candidates in a large pool of memory items. The vector dot-product operation would be ideal for this purpose if it were possible to encode complex structures as vector representations in such a way that the superficial similarity of vector representations reflected underlying structural similarity. This paper describes how such an encoding is provided by Holographic Reduced Representations (HRRs), which are a method for encoding nested relational structures as fixed-width distributed representations. The conditions under which structural similarity is reflected in the dot-product rankings of ",
"neighbors": [
1123,
1354
],
"mask": "Train"
},
{
"node_id": 1189,
"label": 6,
"text": "Title: Figure 3: Average model size accepted from a ran-dom prefix-closed samples of various size, and\nAbstract: that is based on Angluin's L fl algorithm. The algorithm maintains a model consistent with its past examples. When a new counterexample arrives it tries to extend the model in a minimal fashion. We conducted a set of experiments where random automata that represent different strategies were generated, and the algorithm tried to learn them based on prefix-closed samples of their behavior. The algorithm managed to learn very compact models that agree with the samples. The size of the sample had a small effect on the size of the model. The experimental results suggest that for random prefix-closed samples the algorithm behaves well. However, following Angluin's result on the difficulty of learning almost uniform complete samples [ An-gluin, 1978 ] , it is obvious that our algorithm does not solve the complexity issue of inferring a DFA from a general prefix-closed sample. We are currently looking for classes of prefix-closed samples in which US-L* behaves well. [ Carmel and Markovitch, 1994 ] D. Carmel and S. Markovitch. The M* algorithm: Incorporating opponent models into adversary search. Technical Report CIS report 9402, Technion, March 1994. [ Carmel and Markovitch, 1995 ] D. Carmel and S. Markovitch. Unsupervised learning of finite automata: A practical approach. Technical Report CIS report 9504, Technion, March 1995. [ Shoham and Tennenholtz, 1994 ] Y. Shoham and M. Tennenholtz. Co-Learning and the evolution of social activity. Technical Report STAN-CS-TR-94-1511, Stanford Univrsity, Department of Computer Science, 1994. ",
"neighbors": [
638,
1643,
1687
],
"mask": "Train"
},
{
"node_id": 1190,
"label": 2,
"text": "Title: Analysis of the Convergence and Generalization of AA1 \nAbstract: that is based on Angluin's L fl algorithm. The algorithm maintains a model consistent with its past examples. When a new counterexample arrives it tries to extend the model in a minimal fashion. We conducted a set of experiments where random automata that represent different strategies were generated, and the algorithm tried to learn them based on prefix-closed samples of their behavior. The algorithm managed to learn very compact models that agree with the samples. The size of the sample had a small effect on the size of the model. The experimental results suggest that for random prefix-closed samples the algorithm behaves well. However, following Angluin's result on the difficulty of learning almost uniform complete samples [ An-gluin, 1978 ] , it is obvious that our algorithm does not solve the complexity issue of inferring a DFA from a general prefix-closed sample. We are currently looking for classes of prefix-closed samples in which US-L* behaves well. [ Carmel and Markovitch, 1994 ] D. Carmel and S. Markovitch. The M* algorithm: Incorporating opponent models into adversary search. Technical Report CIS report 9402, Technion, March 1994. [ Carmel and Markovitch, 1995 ] D. Carmel and S. Markovitch. Unsupervised learning of finite automata: A practical approach. Technical Report CIS report 9504, Technion, March 1995. [ Shoham and Tennenholtz, 1994 ] Y. Shoham and M. Tennenholtz. Co-Learning and the evolution of social activity. Technical Report STAN-CS-TR-94-1511, Stanford Univrsity, Department of Computer Science, 1994. ",
"neighbors": [
809,
1129,
1321
],
"mask": "Train"
},
{
"node_id": 1191,
"label": 6,
"text": "Title: Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms \nAbstract: The term \"bias\" is widely used|and with different meanings|in the fields of machine learning and statistics. This paper clarifies the uses of this term and shows how to measure and visualize the statistical bias and variance of learning algorithms. Statistical bias and variance can be applied to diagnose problems with machine learning bias, and the paper shows four examples of this. Finally, the paper discusses methods of reducing bias and variance. Methods based on voting can reduce variance, and the paper compares Breiman's bagging method and our own tree randomization method for voting decision trees. Both methods uniformly improve performance on data sets from the Irvine repository. Tree randomization yields perfect performance on the Letter Recognition task. A weighted nearest neighbor algorithm based on the infinite bootstrap is also introduced. In general, decision tree algorithms have moderate-to-high variance, so an important implication of this work is that variance|rather than appropriate or inappropriate machine learning bias|is an important cause of poor performance for decision tree algorithms. ",
"neighbors": [
661,
692,
1053,
1290,
2423
],
"mask": "Validation"
},
{
"node_id": 1192,
"label": 4,
"text": "Title: Roles of Macro-Actions in Accelerating Reinforcement Learning \nAbstract: We analyze the use of built-in policies, or macro-actions, as a form of domain knowledge that can improve the speed and scaling of reinforcement learning algorithms. Such macro-actions are often used in robotics, and macro-operators are also well-known as an aid to state-space search in AI systems. The macro-actions we consider are closed-loop policies with termination conditions. The macro-actions can be chosen at the same level as primitive actions. Macro-actions commit the learning agent to act in a particular, purposeful way for a sustained period of time. Overall, macro-actions may either accelerate or retard learning, depending on the appropriateness of the macro-actions to the particular task. We analyze their effect in a simple example, breaking the acceleration effect into two parts: 1) the effect of the macro-action in changing exploratory behavior, independent of learning, and 2) the effect of the macro-action on learning, independent of its effect on behavior. In our example, both effects are significant, but the latter appears to be larger. Finally, we provide a more complex gridworld illustration of how appropriately chosen macro-actions can accelerate overall learning. ",
"neighbors": [
321,
875,
2150,
2473
],
"mask": "Train"
},
{
"node_id": 1193,
"label": 4,
"text": "Title: Reinforcement Learning with Hierarchies of Machines \nAbstract: We present a new approach to reinforcement learning in which the policies considered by the learning process are constrained by hierarchies of partially specified machines. This allows for the use of prior knowledge to reduce the search space and provides a framework in which knowledge can be transferred across problems and in which component solutions can be recombined to solve larger and more complicated problems. Our approach can be seen as providing a link between reinforcement learning and behavior-based or teleo-reactive approaches to control. We present provably convergent algorithms for problem-solving and learning with hierarchical machines and demonstrate their effectiveness on a problem with several thousand states. ",
"neighbors": [
1183
],
"mask": "Train"
},
{
"node_id": 1194,
"label": 0,
"text": "Title: An Explanation-Based Approach to Improve Retrieval in Case-Based Planning \nAbstract: When a case-based planner is retrieving a previous case in preparation for solving a new similar problem, it is often not aware of the implicit features of the new problem situation which determine if a particular case may be successfully applied. This means that some cases may be retrieved in error in that the case may fail to improve the planner's performance. Retrieval may be incrementally improved by detecting and explaining these failures as they occur. In this paper we provide a definition of case failure for the planner, dersnlp (derivation replay in snlp), which solves new problems by replaying its previous plan derivations. We provide EBL (explanation-based learning) techniques for detecting and constructing the reasons for the failure. We also describe how to organize a case library so as to incorporate this failure information as it is produced. Finally we present an empirical study which demonstrates the effectiveness of this approach in improving the performance of dersnlp.",
"neighbors": [
594,
1122,
1621
],
"mask": "Test"
},
{
"node_id": 1195,
"label": 2,
"text": "Title: Statistical Evaluation of Neural Network Experiments: Minimum Requirements and Current Practice \nAbstract: ",
"neighbors": [
1203,
1323,
1630
],
"mask": "Train"
},
{
"node_id": 1196,
"label": 2,
"text": "Title: The Free Speech Phoneme Probability Estimation with Dynamic Sparsely Connected Artificial Neural Networks \nAbstract: This paper presents new methods for training large neural networks for phoneme probability estimation. An architecture combining timedelay windows and recurrent connections is used to capture the important dynamic information of the speech signal. Because the number of connections in a fully connected recurrent network grows super-linear with the number of hidden units, schemes for sparse connection and connection pruning are explored. It is found that sparsely connected networks outperform their fully connected counterparts with an equal number of connections. The implementation of the combined architecture and training scheme is described in detail. The networks are evaluated in a hybrid HMM/ANN system for phoneme recognition on the TIMIT database, and for word recognition on the WAXHOLM database. The achieved phone error-rate, 27.8%, for the standard 39 phoneme set on the core testset of the TIMIT database is in the range of the lowest reported. All training and simulation software used is made freely available by the author, and detailed information about the software and the training process is given in an Appendix. ",
"neighbors": [
840,
1038
],
"mask": "Validation"
},
{
"node_id": 1197,
"label": 6,
"text": "Title: Why Does Bagging Work? A Bayesian Account and its Implications bagging's success, both in a\nAbstract: The error rate of decision-tree and other classification learners can often be much reduced by bagging: learning multiple models from bootstrap samples of the database, and combining them by uniform voting. In this paper we empirically test two alternative explanations for this, both based on Bayesian learning theory: (1) bagging works because it is an approximation to the optimal procedure of Bayesian model averaging, with an appropriate implicit prior; (2) bagging works because it effectively shifts the prior to a more appropriate region of model space. All the experimental evidence contradicts the first hypothesis, and confirms the second. Bagging (Breiman 1996a) is a simple and effective way to reduce the error rate of many classification learning algorithms. For example, in the empirical study described below, it reduces the error of a decision-tree learner in 19 of 26 databases, by 4% on average. In the bagging procedure, given a training set of size s, a \"bootstrap\" replicate of it is constructed by taking s samples with replacement from the training set. Thus a new training set of the same size is produced, where each of the original examples may appear once, more than once, or not. On average, 63% of the original examples will appear in the bootstrap sample. The learning algorithm is then applied to this training set. This procedure is repeated m times, and the resulting m models are aggregated by uniform voting. Bagging is one of several \"multiple model\" approaches that have recently received much attention (see, for example, (Chan, Stolfo, & Wolpert 1996)). Other procedures of this type include boosting (Freund & Schapire 1996) and stacking (Wolpert 1992). ",
"neighbors": [
1053,
1290,
1484,
2634
],
"mask": "Train"
},
{
"node_id": 1198,
"label": 6,
"text": "Title: Query by Committee \nAbstract: We propose an algorithm called query by committee, in which a committee of students is trained on the same data set. The next query is chosen according to the principle of maximal disagreement. The algorithm is studied for two toy models: the high-low game and perceptron learning of another perceptron. As the number of queries goes to infinity, the committee algorithm yields asymptotically finite information gain. This leads to generalization error that decreases exponentially with the number of examples. This in marked contrast to learning from randomly chosen inputs, for which the information gain approaches zero and the generalization error decreases with a relatively slow inverse power law. We suggest that asymptotically finite information gain may be an important characteristic of good query algorithms. ",
"neighbors": [
418,
517,
859,
1170,
1296,
1683
],
"mask": "Test"
},
{
"node_id": 1199,
"label": 6,
"text": "Title: Query by Committee \nAbstract: Tech Report 4-94 Department of Statistics, Open University, Walton Hall, MK7 6AA, UK Tech Report 205 Department of Computer Science, Monash University, Clayton, Vic. 3168, Australia Abstract: This paper examines the minimum encoding approaches to inference, Minimum Message Length (MML) and Minimum Description Length (MDL). This paper was written with the objective of providing an introduction to this area for statisticians. We describe coding techniques for data, and examine how these techniques can be applied to perform inference and model selection. ",
"neighbors": [
1550,
1702
],
"mask": "Train"
},
{
"node_id": 1200,
"label": 2,
"text": "Title: Edges are the `Independent Components' of Natural Scenes. \nAbstract: Field (1994) has suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and Barlow (1989) has reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual features. We show here that non-linear `infomax', when applied to an ensemble of natural scenes, produces sets of visual filters that are localised and oriented. Some of these filters are Gabor-like and resemble those produced by the sparseness-maximisation network of Olshausen & Field (1996). In addition, the outputs of these filters are as independent as possible, since the info-max network is able to perform Independent Components Analysis (ICA). We compare the resulting ICA filters and their associated basis functions, with other decorrelating filters produced by Principal Components Analysis (PCA) and zero-phase whitening filters (ZCA). The ICA filters have more sparsely distributed (kurtotic) outputs on natural scenes. They also resemble the receptive fields of simple cells in visual cortex, which suggests that these neurons form an information-theoretic co-ordinate system for images.",
"neighbors": [
570,
576,
1520
],
"mask": "Train"
},
{
"node_id": 1201,
"label": 3,
"text": "Title: Model Selection for Consumer Loan Application Data \nAbstract: Loan applications at banks are often long, requiring the applicant to provide large amounts of data. Is all of it necessary? Can we save the applicant some frustration and the bank some expense by using only a subset of the relevant variables? To answer this question, I have attempted to model the current loan approval process at a particular bank. I have used several model selection techniques for logistic regression, including stepwise regression, Occam's Window, Markov Chain Monte Carlo Model Composition (Raftery, Madigan, and Hoeting, 1993), and Bayesian Random Searching. The resulting models largely agree upon a subset of only one-third of the original variables. fl This paper was completed in partial fulfillment of the Ph.D. data analysis requirement. ",
"neighbors": [
325,
1240
],
"mask": "Test"
},
{
"node_id": 1202,
"label": 4,
"text": "Title: Between MDPs and Semi-MDPs: Learning, Planning, and Representing Knowledge at Multiple Temporal Scales \nAbstract: Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key challenges for AI. In this paper we develop an approach to these problems based on the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action to include options|whole courses of behavior that may be temporally extended, stochastic, and contingent on events. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Options may be given a priori, learned by experience, or both. They may be used interchangeably with actions in a variety of planning and learning methods. The theory of semi-Markov decision processes (SMDPs) can be applied to model the consequences of options and as a basis for planning and learning methods using them. In this paper we develop these connections, building on prior work by Bradtke and Duff (1995), Parr (in prep.) and others. Our main novel results concern the interface between the MDP and SMDP levels of analysis. We show how a set of options can be altered by changing only their termination conditions to improve over SMDP methods with no additional cost. We also introduce intra-option temporal-difference methods that are able to learn from fragments of an option's execution. Finally, we propose a notion of subgoal which can be used to improve the options themselves. Overall, we argue that options and their models provide hitherto missing aspects of a powerful, clear, and expressive framework for representing and organizing knowledge.",
"neighbors": [
1183
],
"mask": "Test"
},
{
"node_id": 1203,
"label": 2,
"text": "Title: A Quantitative Study of Experimental Evaluations of Neural Network Learning Algorithms: Current Research Practice \nAbstract: 190 articles about neural network learning algorithms published in 1993 and 1994 are examined for the amount of experimental evaluation they contain. 29% of them employ not even a single realistic or real learning problem. Only 8% of the articles present results for more than one problem using real world data. Furthermore, one third of all articles do not present any quantitative comparison with a previously known algorithm. These results suggest that we should strive for better assessment practices in neural network learning algorithm research. For the long-term benefit of the field, the publication standards should be raised in this respect and easily accessible collections of benchmark problems should be built. ",
"neighbors": [
542,
779,
816,
881,
1119,
1195,
1411,
1630
],
"mask": "Train"
},
{
"node_id": 1204,
"label": 1,
"text": "Title: The Role of Development in Genetic Algorithms \nAbstract: Technical Report Number CS94-394 Computer Science and Engineering, U.C.S.D. Abstract The developmental mechanisms transforming genotypic to phenotypic forms are typically omitted in formulations of genetic algorithms (GAs) in which these two representational spaces are identical. We argue that a careful analysis of developmental mechanisms is useful when understanding the success of several standard GA techniques, and can clarify the relationships between more recently proposed enhancements. We provide a framework which distinguishes between two developmental mechanisms | learning and maturation | while also showing several common effects on GA search. This framework is used to analyze how maturation and local search can change the dynamics of the GA. We observe that in some contexts, maturation and local search can be incorporated into the fitness evaluation, but illustrate reasons for considering them seperately. Further, we identify contexts in which maturation and local search can be distinguished from the fitness evaluation. ",
"neighbors": [
129,
537,
538,
1153,
2624
],
"mask": "Test"
},
{
"node_id": 1205,
"label": 1,
"text": "Title: The Role of Development in Genetic Algorithms \nAbstract: A Genetic Algorithm Tutorial Darrell Whitley Technical Report CS-93-103 (Revised) November 10, 1993 ",
"neighbors": [
163,
793,
1016,
1153
],
"mask": "Train"
},
{
"node_id": 1206,
"label": 1,
"text": "Title: Learning Monitoring Strategies: A Difficult Genetic Programming Application \nAbstract: Finding optimal or at least good monitoring strategies is an important consideration when designing an agent. We have applied genetic programming to this task, with mixed results. Since the agent control language was kept purposefully general, the set of monitoring strategies constitutes only a small part of the overall space of possible behaviors. Because of this, it was often difficult for the genetic algorithm to evolve them, even though their performance was superior. These results raise questions as to how easy it will be for genetic programming to scale up as the areas it is applied to become more complex. ",
"neighbors": [
163,
789,
1544
],
"mask": "Test"
},
{
"node_id": 1207,
"label": 1,
"text": "Title: Data Analyses Using Simulated Breeding and Inductive Learning Methods \nAbstract: Marketing decision making tasks require the acquisition of efficient decision rules from noisy questionnaire data. Unlike popular learning-from-example methods, in such tasks, we must interpret the characteristics of the data without clear features of the data nor pre-determined evaluation criteria. The problem is how domain experts get simple, easy-to-understand, and accurate knowledge from noisy data. This paper describes a novel method to acquire efficient decision rules from questionnaire data using both simulated breeding and inductive learning techniques. The basic ideas of the method are that simulated breeding is used to get the effective features from the questionnaire data and that inductive learning is used to acquire simple decision rules from the data. The simulated breeding is one of the Genetic Algorithm based techniques to subjectively or interactively evaluate the qualities of offspring generated by genetic operations. The proposed method has been qualitatively and quantitatively validated by a case study on consumer product questionnaire data: the acquired rules are simpler than the results from the direct application of inductive learning; a domain expert admits that they are easy to understand; and they are at the same level on the accuracy compared with the other methods. ",
"neighbors": [
163,
378,
430,
900,
1333
],
"mask": "Test"
},
{
"node_id": 1208,
"label": 5,
"text": "Title: An Experimental Comparison of Genetic Programming and Inductive Logic Programming on Learning Recursive List Functions \nAbstract: This paper experimentally compares three approaches to program induction: inductive logic programming (ILP), genetic programming (GP), and genetic logic programming (GLP) (a variant of GP for inducing Pro-log programs). Each of these methods was used to induce four simple, recursive, list-manipulation functions. The results indicate that ILP is the most likely to induce a correct program from small sets of random examples, while GP is generally less accurate. GLP performs the worst, and is rarely able to induce a correct program. Interpretations of these results in terms of differences in search methods and inductive biases are presented. Keywords: Genetic Programming, Inductive Logic Programming, Empiri cal Comparison This paper will also be submitted to the 8th Int. Workshop on Inductive Logic Programming, 1998. ",
"neighbors": [
1429,
1434
],
"mask": "Test"
},
{
"node_id": 1209,
"label": 0,
"text": "Title: S o l u t i o n Relevant A b s t r a\nAbstract: Two major problems in case-based reasoning are the efficient and justified retrieval of source cases and the adaptation of retrieved solutions to the conditions of the target. For analogical theorem proving by induction, we describe how a solution-relevant abstraction can restrict the retrieval of source cases and the mapping from the source problem to the target problem and how it can determine reformulations that further adapt the source solution.",
"neighbors": [
539,
1215
],
"mask": "Test"
},
{
"node_id": 1210,
"label": 0,
"text": "Title: Structural Similarity and Adaptation \nAbstract: Most commonly, case-based reasoning is applied in domains where attribute value representations of cases are sufficient to represent the features relevant to support classification, diagnosis or design tasks. Distance functions like the Hamming-distance or their transformation into similarity functions are applied to retrieve past cases to be used to generate the solution of an actual problem. Often, domain knowledge is available to adapt past solutions to new problems or to evaluate solutions. However, there are domains like architectural design or law in which structural case representations and corresponding structural similarity functions are needed. Often, the acquisition of adaptation knowledge seems to be impossible or rather requires an effort that is not manageable for fielded applications. Despite of this, humans use cases as the main source to generate adapted solutions. How to achieve this computationally? This paper presents a general approach to structural similarity assessment and adaptation. The approach allows to explore structural case representations and limited domain knowledge to support design tasks. It is exemplarily instantiated in three modules of the design assistant FABEL-Idea that generates adapted design solutions on the basis of prior CAD layouts.",
"neighbors": [
539,
883,
1453
],
"mask": "Train"
},
{
"node_id": 1211,
"label": 2,
"text": "Title: Natural Gradient Descent for Training Multi-Layer Perceptrons \nAbstract: The main difficulty in implementing the natural gradient learning rule is to compute the inverse of the Fisher information matrix when the input dimension is large. We have found a new scheme to represent the Fisher information matrix. Based on this scheme, we have designed an algorithm to compute the inverse of the Fisher information matrix. When the input dimension n is much larger than the number of hidden neurons, the complexity of this algorithm is of order O(n 2 ) while the complexity of conventional algorithms for the same purpose is of order O(n 3 ). The simulation has confirmed the efficience and robustness of the natural gradient learning rule.",
"neighbors": [
1058,
1247,
1520
],
"mask": "Train"
},
{
"node_id": 1212,
"label": 0,
"text": "Title: Acquiring Case Adaptation Knowledge: A Hybrid Approach \nAbstract: The ability of case-based reasoning (CBR) systems to apply cases to novel situations depends on their case adaptation knowledge. However, endowing CBR systems with adequate adaptation knowledge has proven to be a very difficult task. This paper describes a hybrid method for performing case adaptation, using a combination of rule-based and case-based reasoning. It shows how this approach provides a framework for acquiring flexible adaptation knowledge from experiences with autonomous adaptation and suggests its potential as a basis for acquisition of adaptation knowledge from interactive user guidance. It also presents initial experimental results examining the benefits of the approach and comparing the relative contributions of case learning and adaptation learning to reasoning performance. ",
"neighbors": [
580,
817,
818,
819,
1126,
1497,
1552
],
"mask": "Train"
},
{
"node_id": 1213,
"label": 4,
"text": "Title: Learning in Multi-Robot Systems \nAbstract: This paper 1 discusses why traditional reinforcement learning methods, and algorithms applied to those models, result in poor performance in dynamic, situated multi-agent domains characterized by multiple goals, noisy perception and action, and inconsistent reinforcement. We propose a methodology for designing the representation and the forcement functions that take advantage of implicit domain knowledge in order to accelerate learning in such domains, and demonstrate it experimentally in two different mobile robot domains.",
"neighbors": [
565,
738,
1649
],
"mask": "Train"
},
{
"node_id": 1214,
"label": 0,
"text": "Title: Learning Problem-Solving Concepts by Reflecting on Problem Solving \nAbstract: Learning and problem solving are intimately related: problem solving determines the knowledge requirements of the reasoner which learning must fulfill, and learning enables improved problem-solving performance. Different models of problem solving, however, recognize different knowledge needs, and, as a result, set up different learning tasks. Some recent models analyze problem solving in terms of generic tasks, methods, and subtasks. These models require the learning of problem-solving concepts such as new tasks and new task decompositions. We view reflection as a core process for learning these problem-solving concepts. In this paper, we identify the learning issues raised by the task-structure framework of problem solving. We view the problem solver as an abstract device, and represent how it works in terms of a structure-behavior-function model which specifies how the knowledge and reasoning of the problem solver results in the accomplishment of its tasks. We describe how this model enables reflection, and how model-based reflection enables the reasoner to adapt its task structure to produce solutions of better quality. The Autognostic system illustrates this reflection process. ",
"neighbors": [
523,
583,
1635
],
"mask": "Validation"
},
{
"node_id": 1215,
"label": 0,
"text": "Title: Supporting Combined Human and Machine Planning: An Interface for Planning by Analogical Reasoning \nAbstract: Realistic and complex planning situations require a mixed-initiative planning framework in which human and automated planners interact to mutually construct a desired plan. Ideally, this joint cooperation has the potential of achieving better plans than either the human or the machine can create alone. Human planners often take a case-based approach to planning, relying on their past experience and planning by retrieving and adapting past planning cases. Planning by analogical reasoning in which generative and case-based planning are combined, as in Prodigy/Analogy, provides a suitable framework to study this mixed-initiative integration. However, having a human user engaged in this planning loop creates a variety of new research questions. The challenges we found creating a mixed-initiative planning system fall into three categories: planning paradigms differ in human and machine planning; visualization of the plan and planning process is a complex, but necessary task; and human users range across a spectrum of experience, both with respect to the planning domain and the underlying planning technology. This paper presents our approach to these three problems when designing an interface to incorporate a human into the process of planning by analogical reasoning with Prodigy/Analogy. The interface allows the user to follow both generative and case-based planning, it supports visualization of both plan and the planning rationale, and it addresses the variance in the experience of the user by allowing the user to control the presentation of information.",
"neighbors": [
580,
818,
819,
824,
825,
1209,
1699,
1707
],
"mask": "Test"
},
{
"node_id": 1216,
"label": 1,
"text": "Title: Evolutionary Programming and Evolution Strategies: Similarities and Differences \nAbstract: Evolutionary Programming and Evolution Strategies, rather similar representatives of a class of probabilistic optimization algorithms gleaned from the model of organic evolution, are discussed and compared to each other with respect to similarities and differences of their basic components as well as their performance in some experimental runs. Theoretical results on global convergence, step size control for a strictly convex, quadratic function and an extension of the convergence rate theory for Evolution Strategies are presented and discussed with respect to their implications on Evolutionary Programming. ",
"neighbors": [
1035,
1299,
1571,
1719
],
"mask": "Train"
},
{
"node_id": 1217,
"label": 4,
"text": "Title: A game theoretic approach to moving horizon control \nAbstract: A control law is constructed for a linear time varying system by solving a two player zero sum differential game on a moving horizon, the game being that which is used to construct an H 1 controller on a finite horizon. Conditions are given under which this controller results in a stable system and satisfies an infinite horizon H 1 norm bound. A risk sensitive formulation is used to provide a state estimator in the observation feedback case.",
"neighbors": [
1349
],
"mask": "Train"
},
{
"node_id": 1218,
"label": 1,
"text": "Title: Genetic algorithms with multi-parent recombination \nAbstract: In this paper we investigate genetic algorithms where more than two parents are involved in the recombination operation. In particular, we introduce gene scanning as a reproduction mechanism that generalizes classical crossovers, such as n-point crossover or uniform crossover, and is applicable to an arbitrary number (two or more) of parents. We performed extensive tests for optimizing numerical functions, the TSP and graph coloring to observe the effect of different numbers of parents. The experiments show that 2-parent recombination is outperformed when using more parents on the classical DeJong functions. For the other problems the results are not conclusive, in some cases 2 parents are optimal, while in some others more parents are better. ",
"neighbors": [
145,
163,
714,
833,
1035,
1299,
1424,
1516,
1530,
1571,
1670
],
"mask": "Validation"
},
{
"node_id": 1219,
"label": 1,
"text": "Title: Putting the Genetics back into Genetic Algorithms \nAbstract: In this paper we investigate genetic algorithms where more than two parents are involved in the recombination operation. In particular, we introduce gene scanning as a reproduction mechanism that generalizes classical crossovers, such as n-point crossover or uniform crossover, and is applicable to an arbitrary number (two or more) of parents. We performed extensive tests for optimizing numerical functions, the TSP and graph coloring to observe the effect of different numbers of parents. The experiments show that 2-parent recombination is outperformed when using more parents on the classical DeJong functions. For the other problems the results are not conclusive, in some cases 2 parents are optimal, while in some others more parents are better. ",
"neighbors": [
163,
1153
],
"mask": "Test"
},
{
"node_id": 1220,
"label": 2,
"text": "Title: A Method of Combining Multiple Probabilistic Classifiers through Soft Competition on Different Feature Sets \nAbstract: A novel method is proposed for combining multiple probabilistic classifiers on different feature sets. In order to achieve the improved classification performance, a generalized finite mixture model is proposed as a linear combination scheme and implemented based on radial basis function networks. In the linear combination scheme, soft competition on different feature sets is adopted as an automatic feature rank mechanism so that different feature sets can be always simultaneously used in an optimal way to determine linear combination weights. For training the linear combination scheme, a learning algorithm is developed based on Expectation-Maximization (EM) algorithm. The proposed method has been applied to a typical real world problem, viz. speaker identification, in which different feature sets often need consideration simultaneously for robustness. Simulation results show that the proposed method yields good performance in speaker identification.",
"neighbors": [
74,
1484,
1608,
1618
],
"mask": "Validation"
},
{
"node_id": 1221,
"label": 1,
"text": "Title: Adapting the Evaluation Space to Improve Global Learning \nAbstract: ",
"neighbors": [
910,
1797,
2703
],
"mask": "Test"
},
{
"node_id": 1222,
"label": 2,
"text": "Title: Towards a General Distributed Platform for Learning and Generalization and Word Perfect Corp. 1 Introduction\nAbstract: Different learning models employ different styles of generalization on novel inputs. This paper proposes the need for multiple styles of generalization to support a broad application base. The Priority ASOCS model (Priority Adaptive Self-Organizing Concurrent System) is overviewed and presented as a potential platform which can support multiple generalization styles. PASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. The PASOCS can operate in either a data processing mode or a learning mode. During data processing mode, the system acts as a parallel hardware circuit. During learning mode, the PASOCS incorporates rules, with attached priorities, which represent the application being learned. Learning is accomplished in a distributed fashion in time logarithmic in the number of rules. The new model has significant learning time and space complexity improvements over previous models. Generalization in a learning system is at best always a guess. The proper style of generalization is application dependent. Thus, one style of generalization may not be sufficient to allow a learning system to support a broad spectrum of applications [14]. Current connectionist models use one specific style of generalization which is implicit in the learning algorithm. We suggest that the type of generalization used be a self-organizing parameter of the learning system which can be discovered as learning takes place. This requires a) a model which allows flexible generalization styles, and b) mechanisms to guide the system into the best style of generalization for the problem being learned. This paper overviews a learning model which seeks to efficiently support requirement a) above. The model is called Priority ASOCS (PASOCS) [9], which is a member of a class of models called ASOCS (Adaptive Self-Organizing Concurrent Systems) [5]. Section 2 of this paper gives an example of how different generalization techniques can approach a problem. Section 3 presents an overview of PASOCS. Section 4 illustrates how flexible generalization can be supported. Section 5 concludes the paper. ",
"neighbors": [
809,
1129,
1321
],
"mask": "Train"
},
{
"node_id": 1223,
"label": 6,
"text": "Title: A New Metric-Based Approach to Model Selection \nAbstract: We introduce a new approach to model selection that performs better than the standard complexity-penalization and hold-out error estimation techniques in many cases. The basic idea is to exploit the intrinsic metric structure of a hypothesis space, as determined by the natural distribution of unlabeled training patterns, and use this metric as a reference to detect whether the empirical error estimates derived from a small (labeled) training sample can be trusted in the region around an empirically optimal hypothesis. Using simple metric intuitions we develop new geometric strategies for detecting overfitting and performing robust yet responsive model selection in spaces of candidate functions. These new metric-based strategies dramatically outperform previous approaches in experimental studies of classical polynomial curve fitting. Moreover, the technique is simple, efficient, and can be applied to most function learning tasks. The only requirement is access to an auxiliary collection of unlabeled training data. ",
"neighbors": [
848,
1335,
1422,
1607
],
"mask": "Validation"
},
{
"node_id": 1224,
"label": 1,
"text": "Title: Using Real-Valued Genetic Algorithms to Evolve Rule Sets for Classification \nAbstract: In this paper, we use a genetic algorithm to evolve a set of classification rules with real-valued attributes. We show how real-valued attribute ranges can be encoded with real-valued genes and present a new uniform method for representing don't cares in the rules. We view supervised classification as an optimization problem, and evolve rule sets that maximize the number of correct classifications of input instances. We use a variant of the Pitt approach to genetic-based machine learning system with a novel conflict resolution mechanism between competing rules within the same rule set. Experimental results demonstrate the effectiveness of our proposed approach on a benchmark wine classifier system. ",
"neighbors": [
145,
163,
1333,
2673
],
"mask": "Validation"
},
{
"node_id": 1225,
"label": 1,
"text": "Title: Knowledge-Based Genetic Learning \nAbstract: Genetic algorithms have been proven to be a powerful tool within the area of machine learning. However, there are some classes of problems where they seem to be scarcely applicable, e.g. when the solution to a given problem consists of several parts that influence each other. In that case the classic genetic operators cross-over and mutation do not work very well thus preventing a good performance. This paper describes an approach to overcome this problem by using high-level genetic operators and integrating task specific but domain independent knowledge to guide the use of these operators. The advantages of this approach are shown for learning a rule base to adapt the parameters of an image processing operator path within the SOLUTION system.",
"neighbors": [
163,
1117,
1333
],
"mask": "Train"
},
{
"node_id": 1226,
"label": 5,
"text": "Title: On Learning Multiple Descriptions of a Concept \nAbstract: In sparse data environments, greater classification accuracy can be achieved by learning several concept descriptions of the data and combining their classifications. Stochastic search is a general tool which can be used to generate many good concept descriptions (rule sets) for each class in the data. Bayesian probability theory offers an optimal strategy for combining classifications of the individual concept descriptions, and here we use an approximation of that theory. This strategy is most useful when additional data is difficult to obtain and every increase in classification accuracy is important. The primary result of this paper is that multiple concept descriptions are particularly helpful in \"flat\" hypothesis spaces in which there are many equally good ways to grow a rule, each having similar gain. Another result is experimental evidence that learning multiple rule sets yields more accurate classifications than learning multiple rules for some domains. To demonstrate these behaviors, we learn multiple concept descriptions by adapting HYDRA, a noise-tolerant relational learning algorithm. ",
"neighbors": [
1234,
1290
],
"mask": "Train"
},
{
"node_id": 1227,
"label": 6,
"text": "Title: What should be minimized in a decision tree: A re-examination \nAbstract: Computer Science Department University of Massachusetts at Amherst CMPSCI Technical Report 95-20 September 6, 1995 ",
"neighbors": [
1236
],
"mask": "Validation"
},
{
"node_id": 1228,
"label": 4,
"text": "Title: Team-Partitioned, Opaque-Transition Reinforcement Learning \nAbstract: In this paper, we present a novel multi-agent learning paradigm called team-partitioned, opaque-transition reinforcement learning (TPOT-RL). TPOT-RL introduces the concept of using action-dependent features to generalize the state space. In our work, we use a learned action-dependent feature space. TPOT-RL is an effective technique to allow a team of agents to learn to cooperate towards the achievement of a specific goal. It is an adaptation of traditional RL methods that is applicable in complex, non-Markovian, multi-agent domains with large state spaces and limited training opportunities. Multi-agent scenarios are opaque-transition, as team members are not always in full communication with one another and adversaries may affect the environment. Hence, each learner cannot rely on having knowledge of future state transitions after acting in the world. TPOT-RL enables teams of agents to learn effective policies with very few training examples even in the face of a large state space with large amounts of hidden state. The main responsible features are: dividing the learning task among team members, using a very coarse, action-dependent feature space, and allowing agents to gather reinforcement directly from observation of the environment. TPOT-RL is fully implemented and has been tested in the robotic soccer domain, a complex, multi-agent framework. This paper presents the algorithmic details of TPOT-RL as well as empirical results demonstrating the effectiveness of the developed multi-agent learning approach with learned features.",
"neighbors": [
1649,
1687,
1688,
1693
],
"mask": "Train"
},
{
"node_id": 1229,
"label": 2,
"text": "Title: Using Multiple Node Types to Improve the Performance of DMP (Dynamic Multilayer Perceptron) \nAbstract: This paper discusses a method for training multilayer perceptron networks called DMP2 (Dynamic Multilayer Perceptron 2). The method is based upon a divide and conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. The focus of this paper is on the effects of using multiple node types within the DMP framework. Simulation results show that DMP2 performs favorably in comparison with other learning algorithms, and that using multiple node types can be beneficial to network performance. ",
"neighbors": [
809,
1615
],
"mask": "Train"
},
{
"node_id": 1230,
"label": 1,
"text": "Title: Entailment for Specification Refinement \nAbstract: Specification refinement is part of formal program derivation, a method by which software is directly constructed from a provably correct specification. Because program derivation is an intensive manual exercise used for critical software systems, an automated approach would allow it to be viable for many other types of software systems. The goal of this research is to determine if genetic programming (GP) can be used to automate the specification refinement process. The initial steps toward this goal are to show that a well-known proof logic for program derivation can be encoded such that a GP-based system can infer sentences in the logic for proof of a particular sentence. The results are promising and indicate that GP can be useful in aiding pro gram derivation.",
"neighbors": [
995,
1178,
1231,
2470,
2598
],
"mask": "Validation"
},
{
"node_id": 1231,
"label": 1,
"text": "Title: Type Inheritance in Strongly Typed Genetic Programming \nAbstract: This paper appears as chapter 18 of Kenneth E. Kinnear, Jr. and Peter J. Angeline, editors Advances in Genetic Programming 2, MIT Press, 1996. Abstract Genetic Programming (GP) is an automatic method for generating computer programs, which are stored as data structures and manipulated to evolve better programs. An extension restricting the search space is Strongly Typed Genetic Programming (STGP), which has, as a basic premise, the removal of closure by typing both the arguments and return values of functions, and by also typing the terminal set. A restriction of STGP is that there are only two levels of typing. We extend STGP by allowing a type hierarchy, which allows more than two levels of typing. ",
"neighbors": [
854,
956,
995,
1178,
1230,
1232,
1495,
1985,
2086
],
"mask": "Train"
},
{
"node_id": 1232,
"label": 1,
"text": "Title: Augmenting Collective Adaptation with Simple Process Agents \nAbstract: We have integrated the distributed search of genetic programming based systems with collective memory to form a collective adaptation search method. Such a system significantly improves search as problem complexity is increased. However, there is still considerable scope for improvement. In collective adaptation, search agents gather knowledge of their environment and deposit it in a central information repository. Process agents are then able to manipulate that focused knowledge, exploiting the exploration of the search agents. We examine the utility of increasing the capabilities of the centralized process agents. ",
"neighbors": [
995,
1178,
1231,
2211,
2598
],
"mask": "Train"
},
{
"node_id": 1233,
"label": 0,
"text": "Title: A CBR Integration From Inception to Productization \nAbstract: Our case-based reasoning (CBR) integration with the constraint satisfaction problem (CSP) formalism has undergone several transformations on its journey from initial research idea to product-intent design. Both unexpected research results as well as interesting insights into the real-world applicability of the integrated methodology emerged as the integration was explored from alternative viewpoints. In this paper, the alternative viewpoints and the results that were enabled by these viewpoints are described. ",
"neighbors": [
922,
923
],
"mask": "Validation"
},
{
"node_id": 1234,
"label": 5,
"text": "Title: Concept Learning and the Problem of Small \nAbstract: ",
"neighbors": [
13,
790,
937,
977,
1187,
1226,
1263,
1275,
1510
],
"mask": "Train"
},
{
"node_id": 1235,
"label": 6,
"text": "Title: Unbiased Assessment of Learning Algorithms \nAbstract: In order to rank the performance of machine learning algorithms, many researchers conduct experiments on benchmark data sets. Since most learning algorithms have domain-specific parameters, it is a popular custom to adapt these parameters to obtain a minimal error rate on the test set. The same rate is then used to rank the algorithm, which causes an optimistic bias. We quantify this bias, showing, in particular, that an algorithm with more parameters will probably be ranked higher than an equally good algorithm with fewer parameters. We demonstrate this result, showing the number of parameters and trials required in order to pretend to outperform C4.5 or FOIL, respectively, for various benchmark problems. We then describe out how unbiased ranking experiments should be conducted. ",
"neighbors": [
1512,
2508
],
"mask": "Train"
},
{
"node_id": 1236,
"label": 6,
"text": "Title: Exploring the Decision Forest: An Empirical Investigation of Occam's Razor in Decision Tree Induction \nAbstract: We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel Maspar computer. The results of the experiments on several artificial and two real world problems indicate that, for many of the problems investigated, smaller consistent decision trees are on average less accurate than the average accuracy of slightly larger trees.",
"neighbors": [
296,
382,
861,
1227,
1669
],
"mask": "Train"
},
{
"node_id": 1237,
"label": 6,
"text": "Title: An Empirical Evaluation of Bagging and Boosting \nAbstract: An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freund & Schapire 1996) are two relatively new but popular methods for producing ensembles. In this paper we evaluate these methods using both neural networks and decision trees as our classification algorithms. Our results clearly show two important facts. The first is that even though Bagging almost always produces a better classifier than any of its individual component classifiers and is relatively impervious to overfitting, it does not generalize any better than a baseline neural-network ensemble method. The second is that Boosting is a powerful technique that can usually produce better ensembles than Bagging; however, it is more susceptible to noise and can quickly overfit a data set. ",
"neighbors": [
826,
1422,
1457,
1484,
1521
],
"mask": "Train"
},
{
"node_id": 1238,
"label": 6,
"text": "Title: On Pruning and Averaging Decision Trees \nAbstract: Pruning a decision tree is considered by some researchers to be the most important part of tree building in noisy domains. While, there are many approaches to pruning, an alternative approach of averaging over decision trees has not received as much attention. We perform an empirical comparison of pruning with the approach of averaging over decision trees. For this comparison we use a computa-tionally efficient method of averaging, namely averaging over the extended fanned set of a tree. Since there are a wide range of approaches to pruning, we compare tree averaging with a traditional pruning approach, along with an optimal pruning approach.",
"neighbors": [
378,
1025,
1290,
1500,
1550
],
"mask": "Validation"
},
{
"node_id": 1239,
"label": 2,
"text": "Title: Hidden Markov Modeling of simultaneously recorded cells in the Associative cortex of behaving monkeys \nAbstract: A widely held idea regarding information processing in the brain is the cell-assembly hypothesis suggested by Hebb in 1949. According to this hypothesis, the basic unit of information processing in the brain is an assembly of cells, which can act briefly as a closed system, in response to a specific stimulus. This work presents a novel method of characterizing this supposed activity using a Hidden Markov Model. This model is able to reveal some of the underlying cortical network activity of behavioral processes. In our study the process in hand was the simultaneous activity of several cells recorded from the frontal cortex of behaving monkeys. Using such a model we were able to identify the behavioral mode of the animal and directly identify the corresponding collective network activity. Furthermore, the segmentation of the data into the discrete states also provides direct evidence for the state dependency of the short-time correlation functions between the same pair of cells. Thus, this cross-correlation depends on the network state of activity and not on local connectivity alone.",
"neighbors": [
1387
],
"mask": "Train"
},
{
"node_id": 1240,
"label": 3,
"text": "Title: Model Selection and Accounting for Model Uncertainty in Linear Regression Models \nAbstract: 1 Adrian E. Raftery is Professor of Statistics and Sociology, David Madigan is Assistant Professor of Statistics, and Jennifer Hoeting is a Ph.D. Candidate, all at the Department of Statistics, GN-22, University of Washington, Seattle, WA 98195. The research of Raftery and Hoeting was supported by ONR Contract N-00014-91-J-1074. Madigan's research was partially supported by NSF grant no. DMS 92111627. The authors are grateful to Danika Lew for research assistance. ",
"neighbors": [
84,
742,
772,
841,
897,
912,
950,
987,
998,
999,
1086,
1141,
1147,
1201,
1241,
1347,
1527
],
"mask": "Train"
},
{
"node_id": 1241,
"label": 3,
"text": "Title: Bayesian Graphical Models for Discrete Data \nAbstract: z York's research was supported by a NSF graduate fellowship. The authors are grateful to Julian Besag, David Bradshaw, Jeff Bradshaw, James Carlsen, David Draper, Ivar Heuch, Robert Kass, Augustine Kong, Steffen Lauritzen, Adrian Raftery, and James Zidek for helpful comments and discussions. ",
"neighbors": [
84,
772,
912,
950,
998,
1141,
1147,
1240,
1347,
2559
],
"mask": "Test"
},
{
"node_id": 1242,
"label": 2,
"text": "Title: Categorical Perception in Facial Emotion Classification \nAbstract: We present an automated emotion recognition system that is capable of identifying six basic emotions (happy, surprise, sad, angry, fear, disgust) in novel face images. An ensemble of simple feed-forward neural networks are used to rate each of the images. The outputs of these networks are then combined to generate a score for each emotion. The networks were trained on a database of face images that human subjects consistently rated as portraying a single emotion. Such a system achieves 86% generalization on novel face images (individuals the networks were not trained on) drawn from the same database. The neural network model exhibits categorical perception between some emotion pairs. A linear sequence of morph images is created between two expressions of an individual's face and this sequence is analyzed by the model. Sharp transitions in the output response vector occur in a single step in the sequence for some emotion pairs and not for others. We plan to us the model's response to limit and direct testing in determining if human subjects exhibit categorical perception in morph image sequences. ",
"neighbors": [
939
],
"mask": "Train"
},
{
"node_id": 1243,
"label": 2,
"text": "Title: BLIND SEPARATION OF REAL WORLD AUDIO SIGNALS USING OVERDETERMINED MIXTURES \nAbstract: We discuss the advantages of using overdetermined mixtures to improve upon blind source separation algorithms that are designed to extract sound sources from acoustic mixtures. A study of the nature of room impulse responses helps us choose an adaptive filter architecture. We use ideal inverses of acquired room impulse responses to compare the effectiveness of different-sized separating filter configurations of various filter lengths. Using a multi-channel blind least-mean-square algorithm (MBLMS), we show that, by adding additional sensors, we can improve upon the separation of signals mixed with real world filters. ",
"neighbors": [
570,
1245,
1524
],
"mask": "Test"
},
{
"node_id": 1244,
"label": 5,
"text": "Title: Producing More Comprehensible Models While Retaining Their Performance \nAbstract: Rissanen's Minimum Description Length (MDL) principle is adapted to handle continuous attributes in the Inductive Logic Programming setting. Application of the developed coding as a MDL pruning mechanism is devised. The behavior of the MDL pruning is tested in a synthetic domain with artificially added noise of different levels and in two real life problems | modelling of the surface roughness of a grinding workpiece and modelling of the mutagenicity of nitroaromatic compounds. Results indicate that MDL pruning is a successful parameter-free noise fighting tool in real-life domains since it acts as a safeguard against building too complex models while retaining the accuracy of the model. ",
"neighbors": [
314,
344,
348,
1061,
1596
],
"mask": "Train"
},
{
"node_id": 1245,
"label": 2,
"text": "Title: Blind separation of delayed and convolved sources. \nAbstract: We address the difficult problem of separating multiple speakers with multiple microphones in a real room. We combine the work of Torkkola and Amari, Cichocki and Yang, to give Natural Gradient information maximisation rules for recurrent (IIR) networks, blindly adjusting delays, separating and deconvolving mixed signals. While they work well on simulated data, these rules fail in real rooms which usually involve non-minimum phase transfer functions, not-invertible using stable IIR filters. An approach that sidesteps this problem is to perform infomax on a feedforward architecture in the frequency domain (Lambert 1996). We demonstrate real-room separation of two natural signals using this approach.",
"neighbors": [
570,
576,
1243,
1520,
1524
],
"mask": "Train"
},
{
"node_id": 1246,
"label": 2,
"text": "Title: NIPS*97 Multiplicative Updating Rule for Blind Separation Derived from the Method of Scoring \nAbstract: For blind source separation, when the Fisher information matrix is used as the Riemannian metric tensor for the parameter space, the steepest descent algorithm to maximize the likelihood function in this Riemannian parameter space becomes the serial updating rule with equivariant property. This algorithm can be further simplified by using the asymptotic form of the Fisher information matrix around the equilibrium.",
"neighbors": [
570,
1520
],
"mask": "Train"
},
{
"node_id": 1247,
"label": 2,
"text": "Title: NIPS*97 The Efficiency and The Robustness of Natural Gradient Descent Learning Rule \nAbstract: We have discovered a new scheme to represent the Fisher information matrix of a stochastic multi-layer perceptron. Based on this scheme, we have designed an algorithm to compute the inverse of the Fisher information matrix. When the input dimension n is much larger than the number of hidden neurons, the complexity of this algorithm is of order O(n 2 ) while the complexity of conventional algorithms for the same purpose is of order O(n 3 ). The inverse of the Fisher information matrix is used in the natural gradient descent algorithm to train single-layer or multi-layer perceptrons. It is confirmed by simulation that the natural gradient ",
"neighbors": [
1211
],
"mask": "Test"
},
{
"node_id": 1248,
"label": 0,
"text": "Title: Lazy Acquisition of Place Knowledge \nAbstract: In this paper we define the task of place learning and describe one approach to this problem. Our framework represents distinct places as evidence grids, a probabilistic description of occupancy. Place recognition relies on nearest neighbor classification, augmented by a registration process to correct for translational differences between the two grids. The learning mechanism is lazy in that it involves the simple storage of inferred evidence grids. Experimental studies with physical and simulated robots suggest that this approach improves place recognition with experience, that it can handle significant sensor noise, that it benefits from improved quality in stored cases, and that it scales well to environments with many distinct places. Additional studies suggest that using historical information about the robot's path through the environment can actually reduce recognition accuracy. Previous researchers have studied evidence grids and place learning, but they have not combined these two powerful concepts, nor have they used systematic experimentation to evaluate their methods' abilities. ",
"neighbors": [
66,
688,
835
],
"mask": "Train"
},
{
"node_id": 1249,
"label": 1,
"text": "Title: Evolution of Non-Deterministic Incremental Algorithms as a New Approach for Search in State Spaces \nAbstract: Let us call a non-deterministic incremental algorithm one that is able to construct any solution to a combinatorial problem by selecting incrementally an ordered sequence of choices that defines this solution, each choice being made non-deterministically. In that case, the state space can be represented as a tree, and a solution is a path from the root of that tree to a leaf. This paper describes how the simulated evolution of a population of such non-deterministic incremental algorithms offers a new approach for the exploration of a state space, compared to other techniques like Genetic Algorithms (GA), Evolutionary Strategies (ES) or Hill Climbing. In particular, the efficiency of this method, implemented as the Evolving Non-Determinism (END) model, is presented for the sorting network problem, a reference problem that has challenged computer science. Then, we shall show that the END model remedies some drawbacks of these optimization techniques and even outperforms them for this problem. Indeed, some 16-input sorting networks as good as the best known have been built from scratch, and even a 25-year-old result for the 13-input problem has been improved by one comparator.",
"neighbors": [
793,
1054,
1408,
1473,
1474,
1728,
1734
],
"mask": "Test"
},
{
"node_id": 1250,
"label": 2,
"text": "Title: Priming, Perceptual Reversal, and Circular Reaction in a Neural Network Model of Schema-Based Vision \nAbstract: VISOR is a neural network system for object recognition and scene analysis that learns visual schemas from examples. Processing in VISOR is based on cooperation, competition, and parallel bottom-up and top-down activation of schema representations. Similar principles appear to underlie much of human visual processing, and VISOR can therefore be used to model various perceptual phenomena. This paper focuses on analyzing three phenomena through simulation with VISOR: (1) priming and mental imagery, (2) perceptual reversal, and (3) circular reaction. The results illustrate similarity and subtle differences between the mechanisms mediating priming and mental imagery, show how the two opposing accounts of perceptual reversal (neural satiation and cognitive factors) may both contribute to the phenomenon, and demonstrate how intentional actions can be gradually learned from reflex actions. Successful simulation of such effects suggests that similar mechanisms may govern human visual perception and learning of visual schemas. ",
"neighbors": [
399,
1251
],
"mask": "Test"
},
{
"node_id": 1251,
"label": 2,
"text": "Title: VISOR: Schema-based Scene Analysis with Structured Neural Networks \nAbstract: A novel approach to object recognition and scene analysis based on neural network representation of visual schemas is described. Given an input scene, the VISOR system focuses attention successively at each component, and the schema representations cooperate and compete to match the inputs. The schema hierarchy is learned from examples through unsupervised adaptation and reinforcement learning. VISOR learns that some objects are more important than others in identifying the scene, and that the importance of spatial relations varies depending on the scene. As the inputs differ increasingly from the schemas, VISOR's recognition process is remarkably robust, and automatically generates a measure of confidence in the analysis.",
"neighbors": [
399,
1250
],
"mask": "Test"
},
{
"node_id": 1252,
"label": 2,
"text": "Title: Constructive Training Methods for Feedforward Neural Networks with Binary Weights \nAbstract: DIMACS Technical Report 95-35 August 1995 ",
"neighbors": [
820,
829,
830,
907,
918,
1115,
1477,
1485,
1634
],
"mask": "Train"
},
{
"node_id": 1253,
"label": 4,
"text": "Title: USING A GENETIC ALGORITHM TO LEARN BEHAVIORS FOR AUTONOMOUS VEHICLES \nAbstract: Truly autonomous vehicles will require both projec - tive planning and reactive components in order to perform robustly. Projective components are needed for long-term planning and replanning where explicit reasoning about future states is required. Reactive components allow the system to always have some action available in real-time, and themselves can exhibit robust behavior, but lack the ability to expli - citly reason about future states over a long time period. This work addresses the problem of creating reactive components for autonomous vehicles. Creating reactive behaviors (stimulus-response rules) is generally difficult, requiring the acquisition of much knowledge from domain experts, a problem referred to as the knowledge acquisition bottleneck. SAMUEL is a system that learns reactive behaviors for autonomous agents. SAMUEL learns these behaviors under simulation, automating the process of creating stimulus-response rules and therefore reducing the bottleneck. The learning algorithm was designed to learn useful behaviors from simulations of limited fidelity. Current work is investigating how well behaviors learned under simulation environments work in real world environments. In this paper, we describe SAMUEL, and describe behaviors that have been learned for simulated autonomous aircraft, autonomous underwater vehicles, and robots. These behaviors include dog fighting, missile evasion, track - ing, navigation, and obstacle avoidance. ",
"neighbors": [
163,
811,
910,
965,
1131,
1311
],
"mask": "Test"
},
{
"node_id": 1254,
"label": 2,
"text": "Title: BACKPROPAGATION SEPARATES WHERE PERCEPTRONS DO \nAbstract: Feedforward nets with sigmoidal activation functions are often designed by minimizing a cost criterion. It has been pointed out before that this technique may be outperformed by the classical perceptron learning rule, at least on some problems. In this paper, we show that no such pathologies can arise if the error criterion is of a threshold LMS type, i.e., is zero for values \"beyond\" the desired target values. More precisely, we show that if the data are linearly separable, and one considers nets with no hidden neurons, then an error function as above cannot have any local minima that are not global. Simulations of networks with hidden units are consistent with these results, in that often data which can be classified when minimizing a threshold LMS criterion may fail to be classified when using instead a simple LMS cost. In addition, the proof gives the following stronger result, under the stated hypotheses: the continuous gradient adjustment procedure is such that from any initial weight configuration a separating set of weights is obtained in finite time. This is a precise analogue of the Perceptron Learning Theorem. The results are then compared with the more classical pattern recognition problem of threshold LMS with linear activations, where no spurious local minima exist even for nonseparable data: here it is shown that even if using the threshold criterion, such bad local minima may occur, if the data are not separable and sigmoids are used. ",
"neighbors": [
930,
1062,
1464
],
"mask": "Train"
},
{
"node_id": 1255,
"label": 3,
"text": "Title: Modelling Risk from a Disease in Time and Space \nAbstract: This paper combines existing models for longitudinal and spatial data in a hierarchical Bayesian framework, with particular emphasis on the role of time- and space-varying covariate effects. Data analysis is implemented via Markov chain Monte Carlo methods. The methodology is illustrated by a tentative re-analysis of Ohio lung cancer data 1968-88. Two approaches that adjust for unmeasured spatial covariates, particularly tobacco consumption, are described. The first includes random effects in the model to account for unobserved heterogeneity; the second adds a simple urbanization measure as a surrogate for smoking behaviour. The Ohio dataset has been of particular interest because of the suggestion that a nuclear facility in the southwest of the state may have caused increased levels of lung cancer there. However, we contend here that the data are inadequate for a proper investigation of this issue. fl Email: leo@stat.uni-muenchen.de",
"neighbors": [
95,
99,
358,
894
],
"mask": "Train"
},
{
"node_id": 1256,
"label": 5,
"text": "Title: A BENCHMARK FOR CLASSIFIER LEARNING \nAbstract: Technical Report 474 November 1993 ",
"neighbors": [
881,
1019,
1644,
2675
],
"mask": "Train"
},
{
"node_id": 1257,
"label": 1,
"text": "Title: The Schema Theorem and Price's Theorem \nAbstract: Holland's Schema Theorem is widely taken to be the foundation for explanations of the power of genetic algorithms (GAs). Yet some dissent has been expressed as to its implications. Here, dissenting arguments are reviewed and elaborated upon, explaining why the Schema Theorem has no implications for how well a GA is performing. Interpretations of the Schema Theorem have implicitly assumed that a correlation exists between parent and offspring fitnesses, and this assumption is made explicit in results based on Price's Covariance and Selection Theorem. Schemata do not play a part in the performance theorems derived for representations and operators in general. However, schemata re-emerge when recombination operators are used. Using Geiringer's recombination distribution representation of recombination operators, a \"missing\" schema theorem is derived which makes explicit the intuition for when a GA should perform well. Finally, the method of \"adaptive landscape\" analysis is examined and counterexamples offered to the commonly used correlation statistic. Instead, an alternative statistic | the transmission function in the fitness domain | is proposed as the optimal statistic for estimating GA performance from limited samples.",
"neighbors": [
163,
380,
1153,
1719,
1872,
2087,
2175,
2259
],
"mask": "Train"
},
{
"node_id": 1258,
"label": 2,
"text": "Title: Independent Component Analysis of Simulated EEG Using a Three-Shell Spherical Head Model 1 \nAbstract: 1 This report was supported in part by the Navy Medical Research and Development Command and the Office of Naval Research, Department of the Navy under work unit ONR.Reimb-6429. The views expressed in this article are those of the authors and do not reflect the official policy or position of the Department of the Navy, Department of Defense, or the U.S. Government. Approved for public release, distribution unlimited. ",
"neighbors": [
570,
576,
1520
],
"mask": "Train"
},
{
"node_id": 1259,
"label": 5,
"text": "Title: Finding Accurate Frontiers: A Knowledge-Intensive Approach to Relational Learning \nAbstract: An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory. ",
"neighbors": [
92,
136,
521,
893,
1081,
1082,
1944,
2213,
2312
],
"mask": "Train"
},
{
"node_id": 1260,
"label": 6,
"text": "Title: Transferring and Retraining Learned Information Filters \nAbstract: Any system that learns how to filter documents will suffer poor performance during an initial training phase. One way of addressing this problem is to exploit filters learned by other users in a collaborative fashion. We investigate \"direct transfer\" of learned filters in this setting|a limiting case for any collaborative learning system. We evaluate the stability of several different learning methods under direct transfer, and conclude that symbolic learning methods that use negatively correlated features of the data perform poorly in transfer, even when they perform well in more conventional evaluation settings. This effect is robust: it holds for several learning methods, when a diverse set of users is used in training the classifier, and even when the learned classifiers can be adapted to the new user's distribution. Our experiments give rise to several concrete proposals for improving generalization performance in a collaborative setting, including a beneficial variation on a feature selection method that has been widely used in text categorization. ",
"neighbors": [
344,
654,
1104,
1269,
1312,
2090
],
"mask": "Validation"
},
{
"node_id": 1261,
"label": 1,
"text": "Title: EVOLVING NEURAL NETWORKS WITH COLLABORATIVE SPECIES \nAbstract: We present a coevolutionary architecture for solving decomposable problems and apply it to the evolution of artificial neural networks. Although this work is preliminary in nature it has a number of advantages over non-coevolutionary approaches. The coevolutionary approach utilizes a divide-and-conquer technique in which species representing simpler subtasks are evolved in separate instances of a genetic algorithm executing in parallel. Collaborations among the species are formed representing complete solutions. Species are created dynamically as needed. Results are presented in which the coevolutionary architecture produces higher quality solutions in fewer evolutionary trials when compared with an alternative non-coevolutionary approach on the problem of evolving cascade networks for parity computation. ",
"neighbors": [
247,
1114,
1117,
2089
],
"mask": "Train"
},
{
"node_id": 1262,
"label": 2,
"text": "Title: Maximum A Posteriori Classification of DNA Structure from Sequence Information \nAbstract: We introduce an algorithm, lllama, which combines simple pattern recognizers into a general method for estimating the entropy of a sequence. Each pattern recognizer exploits a partial match between subsequences to build a model of the sequence. Since the primary features of interest in biological sequence domains are subsequences with small variations in exact composition, lllama is particularly suited to such domains. We describe two methods, lllama-length and lllama-alone, which use this entropy estimate to perform maximum a posteriori classification. We apply these methods to several problems in three-dimensional structure classification of short DNA sequences. The results include a surprisingly low 3.6% error rate in predicting helical conformation of oligonucleotides. We compare our results to those obtained using more traditional methods for automated generation of classifiers.",
"neighbors": [
1104
],
"mask": "Validation"
},
{
"node_id": 1263,
"label": 0,
"text": "Title: Using Partitioning to Speed Up Specific-to-General Rule Induction \nAbstract: RISE (Domingos 1995; in press) is a rule induction algorithm that proceeds by gradually generalizing rules, starting with one rule per example. This has several advantages compared to the more common strategy of gradually specializing initially null rules, and has been shown to lead to significant accuracy gains over algorithms like C4.5RULES and CN2 in a large number of application domains. However, RISE's running time (like that of other rule induction algorithms) is quadratic in the number of examples, making it unsuitable for processing very large databases. This paper studies the use of partitioning to speed up RISE, and compares it with the well-known method of windowing. The use of partitioning in a specific-to-general induction setting creates synergies that would not be possible with a general-to-specific system. Partitioning often reduces running time and improves accuracy at the same time. In noisy conditions, the performance of windowing deteriorates rapidly, while that of partitioning remains stable. ",
"neighbors": [
1234,
2585
],
"mask": "Train"
},
{
"node_id": 1264,
"label": 1,
"text": "Title: An Artificial Life Model for Investigating the Evolution of Modularity \nAbstract: To investigate the issue of how modularity emerges in nature, we present an Artificial Life model that allow us to reproduce on the computer both the organisms (i.e., robots that have a genotype, a nervous system, and sensory and motor organs) and the environment in which organisms live, behave and reproduce. In our simulations neural networks are evolutionarily trained to control a mobile robot designed to keep an arena clear by picking up trash objects and releasing them outside the arena. During the evolutionary process modular neural networks, which control the robot's behavior, emerge as a result of genetic duplications. Preliminary simulation results show that duplication-based modular architecture outperforms the nonmod-ular architecture, which represents the starting architecture in our simulations. Moreover, an interaction between mutation and duplication rate emerges from our results. Our future goal is to use this model in order to explore the relationship between the evolutionary emergence of modularity and the phenomenon of gene duplication.",
"neighbors": [
1134,
1738
],
"mask": "Train"
},
{
"node_id": 1265,
"label": 2,
"text": "Title: Differential theory of learning for efficient neural network pattern recognition \nAbstract: We describe a new theory of differential learning by which a broad family of pattern classifiers (including many well-known neural network paradigms) can learn stochastic concepts efficiently. We describe the relationship between a classifier's ability to generalize well to unseen test examples and the efficiency of the strategy by which it learns. We list a series of proofs that differential learning is efficient in its information and computational resource requirements, whereas traditional probabilistic learning strategies are not. The proofs are illustrated by a simple example that lends itself to closed-form analysis. We conclude with an optical character recognition task for which three different types of differentially generated classifiers generalize significantly better than their probabilistically generated counterparts. ",
"neighbors": [
611,
921,
1725
],
"mask": "Train"
},
{
"node_id": 1266,
"label": 2,
"text": "Title: A Hypothesis-driven Constructive Induction Approach to Expanding Neural Networks \nAbstract: With most machine learning methods, if the given knowledge representation space is inadequate then the learning process will fail. This is also true with methods using neural networks as the form of the representation space. To overcome this limitation, an automatic construction method for a neural network is proposed. This paper describes the BP-HCI method for a hypothesis-driven constructive induction in a neural network trained by the backpropagation algorithm. The method searches for a better representation space by analyzing the hypotheses generated in each step of an iterative learning process. The method was applied to ten problems, which include, in particular, exclusive-or, MONK2, parity-6BIT and inverse parity-6BIT problems. All problems were successfully solved with the same initial set of parameters; the extension of representation space was no more than necessary extension for each problem.",
"neighbors": [
836,
1301,
1576,
1663
],
"mask": "Train"
},
{
"node_id": 1267,
"label": 6,
"text": "Title: Estimating the Accuracy of Learned Concepts \nAbstract: This paper investigates alternative estimators of the accuracy of concepts learned from examples. In particular, the cross-validation and 632 bootstrap estimators are studied, using synthetic training data and the foil learning algorithm. Our experimental results contradict previous papers in statistics, which advocate the 632 bootstrap method as superior to cross-validation. Nevertheless, our results also suggest that conclusions based on cross-validation in previous machine learning papers are unreliable. Specifically, our observations are that (i) the true error of the concept learned by foil from independently drawn sets of examples of the same concept varies widely, (ii) the estimate of true error provided by cross-validation has high variability but is approximately unbiased, and (iii) the 632 bootstrap estimator has lower variability than cross-validation, but is systematically biased.",
"neighbors": [
344,
1335,
1500,
1512
],
"mask": "Train"
},
{
"node_id": 1268,
"label": 3,
"text": "Title: The BATmobile: Towards a Bayesian Automated Taxi \nAbstract: The problem of driving an autonomous vehicle in highway traffic engages many areas of AI research and has substantial economic significance. We describe work in progress on a new approach to this problem based on a decision-theoretic architecture using dynamic probabilistic networks. The architecture provides a sound solution to the problems of sensor noise, sensor failure, and uncertainty about the behavior of other vehicles and about the effects of one's own actions. Our approach has been implemented in a computer simulation system, and the autonomous vehicle successfully negotiates a variety of difficult situations.",
"neighbors": [
788,
976,
1186,
1414,
1757,
1842,
1898,
2140,
2323,
2419
],
"mask": "Validation"
},
{
"node_id": 1269,
"label": 6,
"text": "Title: Context-sensitive learning methods for text categorization \nAbstract: Two recently implemented machine learning algorithms, RIPPER and sleeping experts for phrases, are evaluated on a number of large text categorization problems. These algorithms both construct classifiers that allow the \"context\" of a word w to affect how (or even whether) the presence or absence of w will contribute to a classification. However, RIPPER and sleeping experts differ radically in many other respects: differences include different notions as to what constitutes a context, different ways of combining contexts to construct a classifier, different methods to search for a combination of contexts, and different criteria as to what contexts should be included in such a combination. In spite of these differences, both RIPPER and sleeping experts perform extremely well across a wide variety of categorization problems, generally outperforming previously applied learning methods. We view this result as a confirmation of the usefulness of classifiers that represent contextual information. ",
"neighbors": [
418,
453,
569,
876,
1260,
1312,
2618
],
"mask": "Validation"
},
{
"node_id": 1270,
"label": 6,
"text": "Title: Automatic Parameter Selection by Minimizing Estimated Error \nAbstract: We address the problem of finding the parameter settings that will result in optimal performance of a given learning algorithm using a particular dataset as training data. We describe a \"wrapper\" method, considering determination of the best parameters as a discrete function optimization problem. The method uses best-first search and cross-validation to wrap around the basic induction algorithm: the search explores the space of parameter values, running the basic algorithm many times on training and holdout sets produced by cross-validation to get an estimate of the expected error of each parameter setting. Thus, the final selected parameter settings are tuned for the specific induction algorithm and dataset being studied. We report experiments with this method on 33 datasets selected from the UCI and StatLog collections using C4.5 as the basic induction algorithm. At a 90% confidence level, our method improves the performance of C4.5 on nine domains, degrades performance on one, and is statistically indistinguishable from C4.5 on the rest. On the sample of datasets used for comparison, our method yields an average 13% relative decrease in error rate. We expect to see similar performance improvements when using our method with other machine learning al gorithms.",
"neighbors": [
208,
236,
430,
627,
1335,
2342
],
"mask": "Train"
},
{
"node_id": 1271,
"label": 5,
"text": "Title: Beyond Correlation: Bringing Artificial Intelligence to Events Data \nAbstract: The Feature Vector Editor offers a user-extensible environment for exploratory data analysis. Several empirical studies have applied this environment to the SHER-FACS International Conflict Management dataset. Current analysis techniques include boolean analysis, temporal analysis, and automatic rule learning. Implemented portably in ANSI Common Lisp and the Common Lisp Interface Manager (CLIM), the system features an advanced interface that makes it intuitive for people to manipulate data and discover significant relationships. The system encapsulates data within objects and defines generic protocols that mediate all interactions between data, users and analysis algorithms. Generic data protocols make possible rapid integration of new datasets and new analytical algorithms with heterogeneous data formats. More sophisticated research reformulates SHERFACS conflict codings as machine-parsable narratives suitable for processing into semantic representations by the RELATUS Natural Language System. Experiments with 244 SHERFACS cases demonstrated the feasibility of building knowledge bases from synthetic texts exceeding 600 pages. ",
"neighbors": [
236,
997
],
"mask": "Train"
},
{
"node_id": 1272,
"label": 2,
"text": "Title: Input-Output Analysis of Feedback Loops with Saturation Nonlinearities \nAbstract: The Feature Vector Editor offers a user-extensible environment for exploratory data analysis. Several empirical studies have applied this environment to the SHER-FACS International Conflict Management dataset. Current analysis techniques include boolean analysis, temporal analysis, and automatic rule learning. Implemented portably in ANSI Common Lisp and the Common Lisp Interface Manager (CLIM), the system features an advanced interface that makes it intuitive for people to manipulate data and discover significant relationships. The system encapsulates data within objects and defines generic protocols that mediate all interactions between data, users and analysis algorithms. Generic data protocols make possible rapid integration of new datasets and new analytical algorithms with heterogeneous data formats. More sophisticated research reformulates SHERFACS conflict codings as machine-parsable narratives suitable for processing into semantic representations by the RELATUS Natural Language System. Experiments with 244 SHERFACS cases demonstrated the feasibility of building knowledge bases from synthetic texts exceeding 600 pages. ",
"neighbors": [
1281,
1282,
1346,
1451,
1604
],
"mask": "Test"
},
{
"node_id": 1273,
"label": 6,
"text": "Title: The Sources of Increased Accuracy for Two Proposed Boosting Algorithms \nAbstract: We introduce two boosting algorithms that aim to increase the generalization accuracy of a given classifier by incorporating it as a level-0 component in a stacked generalizer. Both algorithms construct a complementary level-0 classifier that can only generate coarse hypotheses for the training data. We show that the two algorithms boost generalization accuracy on a representative collection of data sets. The two algorithms are distinguished in that one of them modifies the class targets of selected training instances in order to train the complementary classifier. We show that the two algorithms achieve approximately equal generalization accuracy, but that they create complementary classifiers that display different degrees of accuracy and diversity. Our study provides evidence that it may be useful to investigate families of boosting algorithms that incorporate varying levels of accuracy and diversity, so as to achieve an appropriate mix for a given task and domain. ",
"neighbors": [
319,
569,
686,
826,
1422
],
"mask": "Train"
},
{
"node_id": 1274,
"label": 1,
"text": "Title: Surgery \nAbstract: Object localization has applications in many areas of engineering and science. The goal is to spatially locate an arbitrarily-shaped object. In many applications, it is desirable to minimize the number of measurements collected for this purpose, while ensuring sufficient localization accuracy. In surgery, for example, collecting a large number of localization measurements may either extend the time required to perform a surgical procedure, or increase the radiation dosage to which a patient is exposed. Localization accuracy is a function of the spatial distribution of discrete measurements over an object when measurement noise is present. In [Simon et al., 1995a], metrics were presented to evaluate the information available from a set of discrete object measurements. In this study, new approaches to the discrete point data selection problem are described. These include hillclimbing, genetic algorithms (GAs), and Population-Based Incremental Learning (PBIL). Extensions of the standard GA and PBIL methods, which employ multiple parallel populations, are explored. The results of extensive empirical testing are provided. The results suggest that a combination of PBIL and hillclimbing result in the best overall performance. A computer-assisted surgical system which incorporates some of the methods presented in this paper is currently being evaluated in cadaver trials. Evolution-Based Methods for Selecting Point Data Shumeet Baluja was supported by a National Science Foundation Graduate Student Fellowship and a Graduate Student Fellowship from the National Aeronautics and Space Administration, administered by the Lyndon B. Johnson Space Center, Houston, TX. David Simon was partially supported by a National Science Foundation National Challenge grant (award IRI-9422734). for Object Localization: Applications to",
"neighbors": [
163,
343,
427,
1303,
1305
],
"mask": "Validation"
},
{
"node_id": 1275,
"label": 5,
"text": "Title: Fossil: A Robust Relational Learner \nAbstract: The research reported in this paper describes Fossil, an ILP system that uses a search heuristic based on statistical correlation. This algorithm implements a new method for learning useful concepts in the presence of noise. In contrast to Foil's stopping criterion, which allows theories to grow in complexity as the size of the training sets increases, we propose a new stopping criterion that is independent of the number of training examples. Instead, Fossil's stopping criterion depends on a search heuristic that estimates the utility of literals on a uniform scale. In addition we outline how this feature can be used for top-down pruning and present some preliminary results. ",
"neighbors": [
344,
378,
426,
585,
1234,
2290,
2291,
2617
],
"mask": "Train"
},
{
"node_id": 1276,
"label": 3,
"text": "Title: Heuristics and Normative Models of Judgment under Uncertainty \nAbstract: Psychological evidence shows that probability theory is not a proper descriptive model of intuitive human judgment. Instead, some heuristics have been proposed as such a descriptive model. This paper argues that probability theory has limi tations even as a normative model. A new normative model of judgment under uncertainty is designed under the assumption that the system's knowledge and resources are insufficient with respect to the questions that the system needs to answer. The proposed heuristics in human reasoning can also be observed in this new model, and can be justified according to the assumption. ",
"neighbors": [
1503,
1504,
1506,
1525
],
"mask": "Validation"
},
{
"node_id": 1277,
"label": 1,
"text": "Title: Evolution of Pseudo-colouring Algorithms for Image Enhancement with Interactive Genetic Programming \nAbstract: Technical Report: CSRP-97-5 School of Computer Science The University of Birmingham Abstract In this paper we present an approach to the interactive development of programs for image enhancement with Genetic Programming (GP) based on pseudo-colour transformations. In our approach the user drives GP by deciding which individual should be the winner in tournament selection. The presence of the user does not only allow running GP without a fitness function but it also transforms GP into a very efficient search procedure capable of producing effective solutions to real-life problems in only hundreds of evaluations. In the paper we also propose a strategy to further reduce user interaction: we record the choices made by the user in interactive runs and we later use them to build a model which can replace him/her in longer runs. Experimental results with interactive GP and with our user-modelling strategy are also reported.",
"neighbors": [
163,
1476,
1533,
2152,
2277,
2470
],
"mask": "Train"
},
{
"node_id": 1278,
"label": 0,
"text": "Title: A Functional Theory of Creative Reading \nAbstract: Reading is an area of human cognition which has been studied for decades by psychologists, education researchers, and artificial intelligence researchers. Yet, there still does not exist a theory which accurately describes the complete process. We believe that these past attempts fell short due to an incomplete understanding of the overall task of reading; namely, the complete set of mental tasks a reasoner must perform to read and the mechanisms that carry out these tasks. We present a functional theory of the reading process and argue that it represents a coverage of the task. The theory combines experimental results from psychology, artificial intelligence, education, and linguistics, along with the insights we have gained from our own research. This greater understanding of the mental tasks necessary for reading will enable new natural language understanding systems to be more flexible and more capable than earlier ones. Furthermore, we argue that creativity is a necessary component of the reading process and must be considered in any theory or system attempting to describe it. We present a functional theory of creative reading and a novel knowledge organization scheme that supports the creativity mechanisms. The reading theory is currently being implemented in the ISAAC (Integrated Story Analysis And Creativity) system, a computer system which reads science fiction stories. fl This paper is part of the Georgia Institute of Technology, College of Computing, Technical Report series. ",
"neighbors": [
289,
486,
583,
1534
],
"mask": "Test"
},
{
"node_id": 1279,
"label": 1,
"text": "Title: Speeding up Genetic Programming: A Parallel BSP implementation the Bulk Synchronous Parallel Pro gramming (BSP)\nAbstract: ",
"neighbors": [
1065
],
"mask": "Train"
},
{
"node_id": 1280,
"label": 6,
"text": "Title: Theory and Practice of Vector Quantizers Trained on Small Training Sets \nAbstract: We examine how the performance of a memoryless vector quantizer changes as a function of its training set size. Specifically, we study how well the training set distortion predicts test distortion when the training set is a randomly drawn subset of blocks from the test or training image(s). Using the Vapnik-Chervonenkis dimension, we derive formal bounds for the difference of test and training distortion of vector quantizer codebooks. We then describe extensive empirical simulations that test these bounds for a variety of bit rates and vector dimensions, and give practical suggestions for determining the training set size necessary to achieve good generalization from a codebook. We conclude that, by using training sets comprised of only a small fraction of the available data, one can produce results that are close to the results obtainable when all available data are used. ",
"neighbors": [
955
],
"mask": "Train"
},
{
"node_id": 1281,
"label": 2,
"text": "Title: On Finite Gain Stabilizability of Linear Systems Subject to Input Saturation \nAbstract: This paper deals with (global) finite-gain input/output stabilization of linear systems with saturated controls. For neutrally stable systems, it is shown that the linear feedback law suggested by the passivity approach indeed provides stability, with respect to every L p -norm. Explicit bounds on closed-loop gains are obtained, and they are related to the norms for the respective systems without saturation. These results do not extend to the class of systems for which the state matrix has eigenvalues on the imaginary axis with nonsimple (size > 1) Jordan blocks, contradicting what may be expected from the fact that such systems are globally asymptotically stabilizable in the state-space sense; this is shown in particular for the double integrator. ",
"neighbors": [
948,
1272,
1282,
1346,
1451,
1471,
1604
],
"mask": "Validation"
},
{
"node_id": 1282,
"label": 2,
"text": "Title: Global Stabilization of Linear Discrete-Time Systems with Bounded Feedback \nAbstract: This paper deals with the problem of global stabilization of linear discrete time systems by means of bounded feedback laws. The main result proved is an analog of one proved for the continuous time case by the authors, and shows that such stabilization is possible if and only if the system is stabilizable with arbitrary controls and the transition matrix has spectral radius less or equal to one. The proof provides in principle an algorithm for the construction of such feedback laws, which can be implemented either as cascades or as parallel connections (\"single hidden layer neural networks\") of simple saturation functions. ",
"neighbors": [
948,
1022,
1272,
1281,
1446,
1471,
1494
],
"mask": "Train"
},
{
"node_id": 1283,
"label": 2,
"text": "Title: Bilinear Separation of Two Sets in n-Space \nAbstract: The NP-complete problem of determining whether two disjoint point sets in the n-dimensional real space R n can be separated by two planes is cast as a bilinear program, that is minimizing the scalar product of two linear functions on a polyhedral set. The bilinear program, which has a vertex solution, is processed by an iterative linear programming algorithm that terminates in a finite number of steps at a point satisfying a necessary optimality condition or at a global minimum. Encouraging computational experience on a number of test problems is reported.",
"neighbors": [
142,
230,
391,
427,
823,
1284,
1318,
1547
],
"mask": "Train"
},
{
"node_id": 1284,
"label": 2,
"text": "Title: Feature Selection via Mathematical Programming \nAbstract: The problem of discriminating between two finite point sets in n-dimensional feature space by a separating plane that utilizes as few of the features as possible, is formulated as a mathematical program with a parametric objective function and linear constraints. The step function that appears in the objective function can be approximated by a sigmoid or by a concave exponential on the nonnegative real line, or it can be treated exactly by considering the equivalent linear program with equilibrium constraints (LPEC). Computational tests of these three approaches on publicly available real-world databases have been carried out and compared with an adaptation of the optimal brain damage (OBD) method for reducing neural network complexity. One feature selection algorithm via concave minimization (FSV) reduced cross-validation error on a cancer prognosis database by 35.4% while reducing problem features from 32 to 4. Feature selection is an important problem in machine learning [18, 15, 16, 17, 33]. In its basic form the problem consists of eliminating as many of the features in a given problem as possible, while still carrying out a preassigned task with acceptable accuracy. Having a minimal number of features often leads to better generalization and simpler models that can be more easily interpreted. In the present work, our task is to discriminate between two given sets in an n-dimensional feature space by using as few of the given features as possible. We shall formulate this problem as a mathematical program with a parametric objective function that will attempt to achieve this task by generating a separating plane in a feature space of as small a dimension as possible while minimizing the average distance of misclassified points to the plane. One of the computational experiments that we carried out on our feature selection procedure showed its effectiveness, not only in minimizing the number of features selected, but also in quickly recognizing and removing spurious random features that were introduced. Thus, on the Wisconsin Prognosis Breast Cancer WPBC database [36] with a feature space of 32 dimensions and 6 random features added, one of our algorithms FSV (11) immediately removed the 6 random features as well as 28 of the original features resulting in a separating plane in a 4-dimensional reduced feature space. By using tenfold cross-validation [35], separation error in the 4-dimensional space was reduced 35.4% from the corresponding error in the original problem space. (See Section 3 for details.) We note that mathematical programming approaches to the feature selection problem have been recently proposed in [4, 22]. Even though the approach of [4] is based on an LPEC formulation, both the LPEC and its method of solution are different from the ones used here. The polyhedral concave minimization approach of [22] is principally involved with theoretical considerations of one specific algorithm and no cross-validatory results are given. Other effective computational applications of mathematical programming to neural networks are given in [30, 26]. ",
"neighbors": [
230,
427,
430,
1055,
1169,
1283
],
"mask": "Train"
},
{
"node_id": 1285,
"label": 2,
"text": "Title: Learning Context-free Grammars: Capabilities and Limitations of a Recurrent Neural Network with an External Stack Memory \nAbstract: This work describes an approach for inferring Deterministic Context-free (DCF) Grammars in a Connectionist paradigm using a Recurrent Neural Network Pushdown Automaton (NNPDA). The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function. We show that the NNPDA is able to learn the dynamics of an underlying pushdown automaton from examples of grammatical and non-grammatical strings. Not only does the network learn the state transitions in the automaton, it also learns the actions required to control the stack. In order to use continuous optimization methods, we develop an analog stack which reverts to a discrete stack by quantization of all activations, after the network has learned the transition rules and stack actions. We further show an enhancement of the network's learning capabilities by providing hints. In addition, an initial comparative study of simulations with first, second and third order recurrent networks has shown that the increased degree of freedom in a higher order networks improve generalization but not necessarily learning speed. ",
"neighbors": [
405,
770,
968,
1176,
1298,
1382
],
"mask": "Train"
},
{
"node_id": 1286,
"label": 1,
"text": "Title: HGA: A Hardware-Based Genetic Algorithm \nAbstract: A genetic algorithm (GA) is a robust problem-solving method based on natural selection. Hardware's speed advantage and its ability to parallelize offer great rewards to genetic al gorithms. Speedups of 1-3 orders of magnitude have been observed when frequently used software routines were im plemented in hardware by way of reprogrammable field-pro grammable gate arrays (FPGAs). Reprogrammability is es sential in a general-purpose GA engine because certain GA modules require changeability (e.g. the function to be opti mized by the GA). Thus a hardware-based GA is both feasi ble and desirable. A fully functional hardware-based genetic algorithm (the HGA) is presented here as a proof-of-concept system. It was designed using VHDL to allow for easy scala bility. It is designed to act as a coprocessor with the CPU of a PC. The user programs the FPGAs which implement the function to be optimized. Other GA parameters may also be specified by the user. Simulation results and performance analyses of the HGA are presented. A prototype HGA is de scribed and compared to a similar GA implemented in soft ware. In the simple tests, the prototype took about 6% as many clock cycles to run as the software-based GA. Further suggested improvements could realistically make the HGA 2-3 orders of magnitude faster than the software-based GA. ",
"neighbors": [
163,
1136
],
"mask": "Validation"
},
{
"node_id": 1287,
"label": 3,
"text": "Title: Factorial Hidden Markov Models \nAbstract: One of the basic probabilistic tools used for time series modeling is the hidden Markov model (HMM). In an HMM, information about the past of the time series is conveyed through a single discrete variable|the hidden state. We present a generalization of HMMs in which this state is factored into multiple state variables and is therefore represented in a distributed manner. Both inference and learning in this model depend critically on computing the posterior probabilities of the hidden state variables given the observations. We present an exact algorithm for inference in this model, and relate it to the Forward-Backward algorithm for HMMs and to algorithms for more general belief networks. Due to the combinatorial nature of the hidden state representation, this exact algorithm is intractable. As in other intractable systems, approximate inference can be carried out using Gibbs sampling or mean field theory. We also present a structured approximation in which the the state variables are decoupled, based on which we derive a tractable learning algorithm. Empirical comparisons suggest that these approximations are efficient and accurate alternatives to the exact methods. Finally, we use the structured approximation to model Bach's chorales and show that it outperforms HMMs in capturing the complex temporal patterns in this dataset.",
"neighbors": [
499,
787,
810,
905,
945,
976,
1397,
1414,
1437
],
"mask": "Train"
},
{
"node_id": 1288,
"label": 3,
"text": "Title: Exploiting Tractable Substructures in Intractable Networks \nAbstract: We develop a refined mean field approximation for inference and learning in probabilistic neural networks. Our mean field theory, unlike most, does not assume that the units behave as independent degrees of freedom; instead, it exploits in a principled way the existence of large substructures that are computationally tractable. To illustrate the advantages of this framework, we show how to incorporate weak higher order interactions into a first-order hidden Markov model, treating the corrections (but not the first order structure) within mean field theory.",
"neighbors": [
107,
108,
499,
787,
1091,
1128,
1414,
1461,
1502,
1593
],
"mask": "Validation"
},
{
"node_id": 1289,
"label": 2,
"text": "Title: 5 Bayesian estimation 5.1 Introduction \nAbstract: This chapter takes a different standpoint to address the problem of learning. We will here reason only in terms of probability, and make extensive use of the chain rule known as \"Bayes' rule\". A fast definition of the basics in probability is provided in appendix A for quick reference. Most of this chapter is a review of the methods of Bayesian learning applied to our modelling purposes. Some original analyses and comments are also provided in section 5.8, 5.11 and 5.12. There is a latent rivalry between \"Bayesian\" and \"Orthodox\" statistics. It is by no means our intention to enter this kind of controversy. We are perfectly willing to accept orthodox as well as unorthodox methods, as long as they are scientifically sound and provide good results when applied to learning tasks. The same disclaimer applies to the two frameworks presented here. They have been the object of heated controversy in the past 3 years in the neural networks community. We will not take side, but only present both frameworks, with their strong points and their weaknesses. In the context of this work, the \"Bayesian frameworks\" are especially interesting as the provide some continuous update rules that can be used during regularised cost minimisation to yield an automatic selection of the regularisation level. Unlike the methods presented in chapter 3, it is not necessary to try several regularisation levels and perform as many optimisations. The Bayesian framework is the only one in which training is achieved through a one-pass optimisation procedure. ",
"neighbors": [
157,
1452
],
"mask": "Test"
},
{
"node_id": 1290,
"label": 6,
"text": "Title: A THEORY OF LEARNING CLASSIFICATION RULES \nAbstract: This chapter takes a different standpoint to address the problem of learning. We will here reason only in terms of probability, and make extensive use of the chain rule known as \"Bayes' rule\". A fast definition of the basics in probability is provided in appendix A for quick reference. Most of this chapter is a review of the methods of Bayesian learning applied to our modelling purposes. Some original analyses and comments are also provided in section 5.8, 5.11 and 5.12. There is a latent rivalry between \"Bayesian\" and \"Orthodox\" statistics. It is by no means our intention to enter this kind of controversy. We are perfectly willing to accept orthodox as well as unorthodox methods, as long as they are scientifically sound and provide good results when applied to learning tasks. The same disclaimer applies to the two frameworks presented here. They have been the object of heated controversy in the past 3 years in the neural networks community. We will not take side, but only present both frameworks, with their strong points and their weaknesses. In the context of this work, the \"Bayesian frameworks\" are especially interesting as the provide some continuous update rules that can be used during regularised cost minimisation to yield an automatic selection of the regularisation level. Unlike the methods presented in chapter 3, it is not necessary to try several regularisation levels and perform as many optimisations. The Bayesian framework is the only one in which training is achieved through a one-pass optimisation procedure. ",
"neighbors": [
218,
378,
423,
429,
893,
1019,
1025,
1191,
1197,
1226,
1238,
1644,
1712,
1918,
2080,
2169,
2329
],
"mask": "Validation"
},
{
"node_id": 1291,
"label": 3,
"text": "Title: Bits-back coding software guide \nAbstract: Abstract | In this document, I first review the theory behind bits-back coding (aka. free energy coding) (Frey and Hinton 1996) and then describe the interface to C-language software that can be used for bits-back coding. This method is a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed bits-back approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. The software which I describe in this guide is easy to use and the source code is only a few pages long. I illustrate the bits-back coding software on a simple quantized Gaussian mixture problem. ",
"neighbors": [
1548
],
"mask": "Validation"
},
{
"node_id": 1292,
"label": 5,
"text": "Title: CONSTRUCTIVE INDUCTION FROM DATA IN AQ17-DCI: Further Experiments \nAbstract: Abstract | In this document, I first review the theory behind bits-back coding (aka. free energy coding) (Frey and Hinton 1996) and then describe the interface to C-language software that can be used for bits-back coding. This method is a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed bits-back approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. The software which I describe in this guide is easy to use and the source code is only a few pages long. I illustrate the bits-back coding software on a simple quantized Gaussian mixture problem. ",
"neighbors": [
960,
1049,
1071,
1085
],
"mask": "Train"
},
{
"node_id": 1293,
"label": 2,
"text": "Title: Using Recurrent Neural Networks to Learn the Structure of Interconnection Networks \nAbstract: A modified Recurrent Neural Network (RNN) is used to learn a Self-Routing Interconnection Network (SRIN) from a set of routing examples. The RNN is modified so that it has several distinct initial states. This is equivalent to a single RNN learning multiple different synchronous sequential machines. We define such a sequential machine structure as augmented and show that a SRIN is essentially an Augmented Synchronous Sequential Machine (ASSM). As an example, we learn a small six-switch SRIN. After training we extract the net-work's internal representation of the ASSM and corresponding SRIN. fl This paper is adapted from ( Goudreau, 1993, Chapter 6 ) . A shortened version of this paper was published in ( Goudreau & Giles, 1993 ) . ",
"neighbors": [
672,
1592,
1600,
1606,
2284
],
"mask": "Train"
},
{
"node_id": 1294,
"label": 3,
"text": "Title: Representation Requirements for Supporting Decision Model Formulation \nAbstract: This paper outlines a methodology for analyzing the representational support for knowledge-based decision-modeling in a broad domain. A relevant set of inference patterns and knowledge types are identified. By comparing the analysis results to existing representations, some insights are gained into a design approach for integrating categorical and uncertain knowledge in a context sensitive manner.",
"neighbors": [
915
],
"mask": "Train"
},
{
"node_id": 1295,
"label": 2,
"text": "Title: On the Computational Utility of Consciousness \nAbstract: We propose a computational framework for understanding and modeling human consciousness. This framework integrates many existing theoretical perspectives, yet is sufficiently concrete to allow simulation experiments. We do not attempt to explain qualia (subjective experience), but instead ask what differences exist within the cognitive information processing system when a person is conscious of mentally-represented information versus when that information is unconscious. The central idea we explore is that the contents of consciousness correspond to temporally persistent states in a network of computational modules. Three simulations are described illustrating that the behavior of persistent states in the models corresponds roughly to the behavior of conscious states people experience when performing similar tasks. Our simulations show that periodic settling to persistent (i.e., conscious) states improves performance by cleaning up inaccuracies and noise, forcing decisions, and helping keep the system on track toward a solution.",
"neighbors": [
886
],
"mask": "Train"
},
{
"node_id": 1296,
"label": 6,
"text": "Title: Sifting informative examples from a random source. \nAbstract: We discuss two types of algorithms for selecting relevant examples that have been developed in the context of computation learning theory. The examples are selected out of a stream of examples that are generated independently at random. The first two algorithms are the so-called \"boosting\" algorithms of Schapire [ Schapire, 1990 ] and Fre-und [ Freund, 1990 ] , and the Query-by-Committee algorithm of Seung [ Seung et al., 1992 ] . We describe the algorithms and some of their proven properties, point to some of their commonalities, and suggest some possible future implications. ",
"neighbors": [
109,
456,
1198
],
"mask": "Validation"
},
{
"node_id": 1297,
"label": 5,
"text": "Title: The Origins of Inductive Logic Programming: A Prehistoric Tale \nAbstract: This paper traces the development of the main ideas that have led to the present state of knowledge in Inductive Logic Programming. The story begins with research in psychology on the subject of human concept learning. Results from this research influenced early efforts in Artificial Intelligence which combined with the formal methods of inductive inference to evolve into the present discipline of Inductive Logic Programming. Inductive Logic Programming is often considered to be a young discipline. However, it has its roots in research dating back nearly 40 years. This paper traces the development of ideas beginning in psychology and the effect they had on concept learning research in Artificial Intelligence. Independent of any requirement for a psychological basis, formal methods of inductive inference were developed. These separate streams eventually gave rise to Inductive Logic Programming. This account is not entirely unbiased. More attention is given to the work of those researchers who most influenced my own interest in machine learning. Being a retrospective paper, I do not attempt to describe recent developments in ILP. This account only includes research prior to 1991 the year in which the term Inductive Logic Programming was first used (Muggleton, 1991). This is the reason for the subtitle A Prehistoric Tale. The major headings in the paper are taken from the names of periods in the evolution of life on Earth. ",
"neighbors": [
1174
],
"mask": "Test"
},
{
"node_id": 1298,
"label": 2,
"text": "Title: Rule Revision with Recurrent Neural Networks \nAbstract: Recurrent neural networks readily process, recognize and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata. Algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that recurrent neural networks are able to perform rule revision. Rule revision is performed by comparing the inserted rules with the rules in the finite-state automata extracted from trained networks. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that not only do the networks preserve correct rules but that they are able to correct through training inserted rules which were initially incorrect. (By incorrect, we mean that the rules were not the ones in the randomly generated grammar.) ",
"neighbors": [
407,
409,
1285,
1592
],
"mask": "Test"
},
{
"node_id": 1299,
"label": 1,
"text": "Title: Multi-parent Recombination \nAbstract: In this section we survey recombination operators that can apply more than two parents to create offspring. Some multi-parent recombination operators are defined for a fixed number of parents, e.g. have arity three, in some operators the number of parents is a random number that might be greater than two, and in yet other operators the arity is a parameter that can be set to an arbitrary integer number. We pay special attention to this latter type of operators and summarize results on the effect of operator arity on EA performance. ",
"neighbors": [
714,
1216,
1218,
1392
],
"mask": "Train"
},
{
"node_id": 1300,
"label": 2,
"text": "Title: Tempering Backpropagation Networks: Not All Weights are Created Equal approach yields hitherto unparalleled performance on\nAbstract: Backpropagation learning algorithms typically collapse the network's structure into a single vector of weight parameters to be optimized. We suggest that their performance may be improved by utilizing the structural information instead of discarding it, and introduce a framework for tempering each weight accordingly. In the tempering model, activation and error signals are treated as approximately independent random variables. The characteristic scale of weight changes is then matched to that of the residuals, allowing structural properties such as a node's fan-in and fan-out to affect the local learning rate and backpropagated error. The model also permits calculation of an upper bound on the global learning rate for batch updates, which in turn leads to different update rules for bias vs. non-bias weights. ",
"neighbors": [
1320,
1342
],
"mask": "Train"
},
{
"node_id": 1301,
"label": 5,
"text": "Title: Discovering Representation Space Transformations for Learning Concept Descriptions Combining DNF and M-of-N Rules \nAbstract: This paper addresses a class of learning problems that require a construction of descriptions that combine both M-of-N rules and traditional Disjunctive Normal form (DNF) rules. The presented method learns such descriptions, which we call conditional M-of-N rules, using the hypothesis-driven constructive induction approach. In this approach, the representation space is modified according to patterns discovered in the iteratively generated hypotheses. The need for the M-of-N rules is detected by observing \"exclusive-or\" or \"equivalence\" patterns in the hypotheses. These patterns indicate symmetry relations among pairs of attributes. Symmetrical attributes are combined into maximal symmetry classes. For each symmetry class, the method constructs a \"counting attribute\" that adds a new dimension to the representation space. The search for hypothesis in iteratively modified representation spaces is done by the standard AQ inductive rule learning algorithm. It is shown that the proposed method is capable of solving problems that would be very difficult to tackle by any of the traditional symbolic learning methods.",
"neighbors": [
960,
1266,
1595,
2346
],
"mask": "Train"
},
{
"node_id": 1302,
"label": 0,
"text": "Title: Correcting for Length Biasing in Conversational Case Scoring \nAbstract: Inference's conversational case-based reasoning (CCBR) approach, embedded in the CBR Content Navigator line of products, is susceptible to a bias in its case scoring algorithm. In particular, shorter cases tend to be given higher scores, assuming all other factors are held constant. This report summarizes our investigation for mediating this bias. We introduce an approach for eliminating this bias and evaluate how it affects retrieval performance for six case libraries. We also suggest explanations for these results, and note the limitations of our study. ",
"neighbors": [
983
],
"mask": "Train"
},
{
"node_id": 1303,
"label": 1,
"text": "Title: Stochastic Hillclimbing as a Baseline Method for Evaluating Genetic Algorithms \nAbstract: We investigate the effectiveness of stochastic hillclimbing as a baseline for evaluating the performance of genetic algorithms (GAs) as combinatorial function optimizers. In particular, we address four problems to which GAs have been applied in the literature: the maximum cut problem, Koza's 11-multiplexer problem, MDAP (the Multiprocessor Document Allocation Problem), and the jobshop problem. We demonstrate that simple stochastic hillclimbing methods are able to achieve results comparable or superior to those obtained by the GAs designed to address these four problems. We further illustrate, in the case of the jobshop problem, how insights obtained in the formulation of a stochastic hillclimbing algorithm can lead to improvements in the encoding used by a GA. fl Department of Computer Science, University of California at Berkeley. Supported by a NASA Graduate Fellowship. This paper was written while the author was a visiting researcher at the Ecole Normale Superieure-rue d'Ulm, Groupe de BioInformatique, France. E-mail: juels@cs.berkeley.edu y Department of Mathematics, University of California at Berkeley. Supported by an NDSEG Graduate Fellowship. E-mail: wattenbe@math.berkeley.edu ",
"neighbors": [
163,
343,
1274,
2202,
2347
],
"mask": "Train"
},
{
"node_id": 1304,
"label": 0,
"text": "Title: Concept Sharing: A Means to Improve Multi-Concept Learning \nAbstract: This paper describes several means for sharing between related concepts to improve learning in the same domain. The sharing comes in the form of substructures or possibly entire structures of previous concepts which may aid in learning other concepts. These substructures highlight useful information in the domain. Using two domains, we evaluate the effectiveness of concept sharing with respect to accuracy, concept size, search complexity, and noise resistance.",
"neighbors": [
1354
],
"mask": "Test"
},
{
"node_id": 1305,
"label": 1,
"text": "Title: Distribution Category: A Parallel Genetic Algorithm for the Set Partitioning Problem \nAbstract: This work was supported by the Office of Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38. It was submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate School of the Illinois Institute of Technology, May 1994 (thesis adviser: Dr. Tom Christopher). ",
"neighbors": [
163,
728,
1016,
1065,
1098,
1106,
1274,
1575,
1740
],
"mask": "Test"
},
{
"node_id": 1306,
"label": 6,
"text": "Title: Improving the Accuracy and Speed of Support Vector Machines \nAbstract: Support Vector Learning Machines (SVM) are finding application in pattern recognition, regression estimation, and operator inversion for ill-posed problems. Against this very general backdrop, any methods for improving the generalization performance, or for improving the speed in test phase, of SVMs are of increasing interest. In this paper we combine two such techniques on a pattern recognition problem. The method for improving generalization performance (the \"virtual support vector\" method) does so by incorporating known invariances of the problem. This method achieves a drop in the error rate on 10,000 NIST test digit images of 1.4% to 1.0%. The method for improving the speed (the \"reduced set\" method) does so by approximating the support vector decision surface. We apply this method to achieve a factor of fifty speedup in test phase over the virtual support vector machine. The combined approach yields a machine which is both 22 times faster than the original machine, and which has better generalization performance, achieving 1.1% error. The virtual support vector method is applicable to any SVM problem with known invariances. The reduced set method is applicable to any support vector machine. ",
"neighbors": [
607,
1050,
1310,
1499
],
"mask": "Test"
},
{
"node_id": 1307,
"label": 2,
"text": "Title: Extracting Tree-Structured Representations of Trained Networks \nAbstract: A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that Trepan is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large net works and problems with high-dimensional input spaces.",
"neighbors": [
1057,
1657
],
"mask": "Validation"
},
{
"node_id": 1308,
"label": 3,
"text": "Title: A Defect in Dempster-Shafer Theory \nAbstract: By analyzing the relationships among chance, weight of evidence and degree of belief, it is shown that the assertion \"chances are special cases of belief functions\" and the assertion \"Dempster's rule can be used to combine belief functions based on distinct bodies of evidence\" together lead to an inconsistency in Dempster-Shafer theory. To solve this problem, some fundamental postulates of the theory must be rejected. A new approach for uncertainty management is introduced, which shares many intuitive ideas with D-S theory, while avoiding this problem. ",
"neighbors": [
1503,
1504,
1506,
1507
],
"mask": "Test"
},
{
"node_id": 1309,
"label": 6,
"text": "Title: A Formalization of Explanation-Based Macro-operator Learning \nAbstract: In spite of the popularity of Explanation-Based Learning (EBL), its theoretical basis is not well-understood. Using a generalization of Probably Approximately Correct (PAC) learning to problem solving domains, this paper formalizes two forms of Explanation-Based Learning of macro-operators and proves the sufficient conditions for their success. These two forms of EBL, called \"Macro Caching\" and \"Serial Parsing,\" respectively exhibit two distinct sources of power or \"bias\": the sparseness of the solution space and the decomposability of the problem-space. The analysis shows that exponential speedup can be achieved when either of these biases is suitable for a domain. Somewhat surprisingly, it also shows that computing the preconditions of the macro-operators is not necessary to obtain these speedups. The theoretical results are confirmed by experiments in the domain of Eight Puzzle. Our work suggests that the best way to address the utility problem in EBL is to implement a bias which exploits the problem-space structure of the set of domains that one is interested in learning.",
"neighbors": [
924,
1132,
1186
],
"mask": "Validation"
},
{
"node_id": 1310,
"label": 6,
"text": "Title: Incorporating Invariances in Support Vector Learning Machines \nAbstract: Developed only recently, support vector learning machines achieve high generalization ability by minimizing a bound on the expected test error; however, so far there existed no way of adding knowledge about invariances of a classification problem at hand. We present a method of incorporating prior knowledge about transformation invari-ances by applying transformations to support vectors, the training ex amples most critical for determining the classification boundary.",
"neighbors": [
1306,
1499
],
"mask": "Train"
},
{
"node_id": 1311,
"label": 1,
"text": "Title: ROBO-SHEPHERD: LEARNING COMPLEX ROBOTIC BEHAVIORS \nAbstract: This paper reports on recent results using genetic algorithms to learn decision rules for complex robot behaviors. The method involves evaluating hypothetical rule sets on a simulator and applying simulated evolution to evolve more effective rules. The main contributions of this paper are (1) the task learned is a complex behavior involving multiple mobile robots, and (2) the learned rules are verified through experiments on operational mobile robots. The case study involves a shepherding task in which one mobile robot attempts to guide another robot to a specified area. ",
"neighbors": [
764,
910,
1140,
1253
],
"mask": "Validation"
},
{
"node_id": 1312,
"label": 5,
"text": "Title: Learning Trees and Rules with Set-valued Features \nAbstract: In most learning systems examples are represented as fixed-length \"feature vectors\", the components of which are either real numbers or nominal values. We propose an extension of the feature-vector representation that allows the value of a feature to be a set of strings; for instance, to represent a small white and black dog with the nominal features size and species and the set-valued feature color, one might use a feature vector with size=small, species=canis-familiaris and color=fwhite,blackg. Since we make no assumptions about the number of possible set elements, this extension of the traditional feature-vector representation is closely connected to Blum's \"infinite attribute\" representation. We argue that many decision tree and rule learning algorithms can be easily extended to set-valued features. We also show by example that many real-world learning problems can be efficiently and naturally represented with set-valued features; in particular, text categorization problems and problems that arise in propositionalizing first-order representations lend themselves to set-valued features. ",
"neighbors": [
344,
418,
638,
1260,
1269,
1428,
1622
],
"mask": "Train"
},
{
"node_id": 1313,
"label": 2,
"text": "Title: TRAINREC: A System for Training Feedforward Simple Recurrent Networks Efficiently and Correctly \nAbstract: In most learning systems examples are represented as fixed-length \"feature vectors\", the components of which are either real numbers or nominal values. We propose an extension of the feature-vector representation that allows the value of a feature to be a set of strings; for instance, to represent a small white and black dog with the nominal features size and species and the set-valued feature color, one might use a feature vector with size=small, species=canis-familiaris and color=fwhite,blackg. Since we make no assumptions about the number of possible set elements, this extension of the traditional feature-vector representation is closely connected to Blum's \"infinite attribute\" representation. We argue that many decision tree and rule learning algorithms can be easily extended to set-valued features. We also show by example that many real-world learning problems can be efficiently and naturally represented with set-valued features; in particular, text categorization problems and problems that arise in propositionalizing first-order representations lend themselves to set-valued features. ",
"neighbors": [
1005,
1382,
1623,
1655
],
"mask": "Train"
},
{
"node_id": 1314,
"label": 4,
"text": "Title: Quick 'n' Dirty Generalization For Mobile Robot Learning Content Areas: robotics, reinforcement learning, machine learning,\nAbstract: The mobile robot domain challenges policy-iteration reinforcement learning algorithms with difficult problems of structural credit assignment and uncertainty. Structural credit assignment is particularly acute in domains where \"real-time\" trial length is a limiting factor on the number of learning steps that physical hardware can perform. Noisy sensors and effectors in complex dynamic environments further complicate the learning problem, leading to situations where speed of learning and policy flexibility may be more important than policy optimality. Input generalization addresses these problems but is typically too time consuming for robot domains. We present two algorithms, YB-learning and YB , that perform simple and fast generalization of the input space based on bit-similarity. The algorithms trade off long-term optimality for immediate performance and flexibility. The algorithms were tested in simulation against non-generalized learning across different numbers of discounting steps, and YB was shown to perform better during the earlier stages of learning, particularly in the presence of noise. In trials performed on a sonar-based mobile robot subject to uncertainty of the \"real world,\" YB surpassed the simulation results by a wide margin, strongly supporting the role of such \"quick and dirty\" generalization strategies in noisy real-time mobile robot domains.",
"neighbors": [
1529
],
"mask": "Test"
},
{
"node_id": 1315,
"label": 2,
"text": "Title: Modeling Volatility using State Space Models \nAbstract: In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude due to their ignoring the distinction between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. ",
"neighbors": [
611,
668,
1079,
2452,
2595
],
"mask": "Train"
},
{
"node_id": 1316,
"label": 4,
"text": "Title: KnightCap: A chess program that learns by combining TD() with minimax search \nAbstract: In this paper we present TDLeaf(), a variation on the TD() algorithm that enables it to be used in conjunction with minimax search. We present some experiments in which our chess program, KnightCap, used TDLeaf() to learn its evaluation function while playing on the Free Ineternet Chess Server (FICS, fics.onenet.net). It improved from a 1650 rating to a 2100 rating in just 308 games and 3 days of play. We discuss some of the reasons for this success and also the relationship between our results and Tesauro's results in backgammon. ",
"neighbors": [
295,
565,
882
],
"mask": "Test"
},
{
"node_id": 1317,
"label": 0,
"text": "Title: Use of Analogy in Automated Theorem Proving \nAbstract: Technical Report ATP-90, Artificial Intelligence Laboratory, University of Texas at Austin. ",
"neighbors": [
1354
],
"mask": "Train"
},
{
"node_id": 1318,
"label": 2,
"text": "Title: Misclassification Minimization \nAbstract: The problem of minimizing the number of misclassified points by a plane, attempting to separate two point sets with intersecting convex hulls in n-dimensional real space, is formulated as a linear program with equilibrium constraints (LPEC). This general LPEC can be converted to an exact penalty problem with a quadratic objective and linear constraints. A Frank-Wolfe-type algorithm is proposed for the penalty problem that terminates at a stationary point or a global solution. Novel aspects of the approach include: (i) A linear complementarity formulation of the step function that \"counts\" misclassifications, (ii) Exact penalty formulation without boundedness, nondegeneracy or constraint qualification assumptions, (iii) An exact solution extraction from the sequence of minimizers of the penalty function for a finite value of the penalty parameter for the general LPEC and an explicitly exact solution for the LPEC with uncoupled constraints, and (iv) A parametric quadratic programming formulation of the LPEC associated with the misclassification minimization problem.",
"neighbors": [
142,
227,
427,
1283
],
"mask": "Validation"
},
{
"node_id": 1319,
"label": 1,
"text": "Title: The Ecology of Echo Echo is a generic ecosystem model in which evolving agents are\nAbstract: The problem of minimizing the number of misclassified points by a plane, attempting to separate two point sets with intersecting convex hulls in n-dimensional real space, is formulated as a linear program with equilibrium constraints (LPEC). This general LPEC can be converted to an exact penalty problem with a quadratic objective and linear constraints. A Frank-Wolfe-type algorithm is proposed for the penalty problem that terminates at a stationary point or a global solution. Novel aspects of the approach include: (i) A linear complementarity formulation of the step function that \"counts\" misclassifications, (ii) Exact penalty formulation without boundedness, nondegeneracy or constraint qualification assumptions, (iii) An exact solution extraction from the sequence of minimizers of the penalty function for a finite value of the penalty parameter for the general LPEC and an explicitly exact solution for the LPEC with uncoupled constraints, and (iv) A parametric quadratic programming formulation of the LPEC associated with the misclassification minimization problem.",
"neighbors": [
1391
],
"mask": "Train"
},
{
"node_id": 1320,
"label": 2,
"text": "Title: On Centering Neural Network Weight Updates \nAbstract: Technical Report IDSIA-19-97 Abstract. It has long been known that neural networks can learn faster when their input and hidden unit activities are centered about zero; recently we have extended this approach to also encompass the centering of error signals (Schraudolph and Sejnowski, 1996). Here we generalize this notion to all factors involved in the weight update, leading us to propose centering the slope of hidden unit activation functions as well. Slope centering removes the linear component of backpropagated error; this improves credit assignment in networks with shortcut connections. Benchmark results show that this can speed up learning significantly without adversely affecting the trained network's generalization ability.",
"neighbors": [
359,
808,
1300
],
"mask": "Validation"
},
{
"node_id": 1321,
"label": 2,
"text": "Title: Priority ASOCS ASOCS models have two significant advantages over other learning models: \nAbstract: This paper presents an ASOCS (Adaptive Self-Organizing Concurrent System) model for massively parallel processing of incrementally defined rule systems in such areas as adaptive logic, robotics, logical inference, and dynamic control. An ASOCS is an adaptive network composed of many simple computing elements operating asynchronously and in parallel. An ASOCS can operate in either a data processing mode or a learning mode. During data processing mode, an ASOCS acts as a parallel hardware circuit. During learning mode, an ASOCS incorporates a rule expressed as a Boolean conjunction in a distributed fashion in time logarithmic in the number of rules. This paper proposes a learning algorithm and architecture for Priority ASOCS. This new ASOCS model uses rules with priorities. The new model has significant learning time and space complexity improvements over previous models. Non-von Neumann architectures such as neural networks attack the word-at-a-time bottleneck of traditional computing systems [1]. Neural networks learn input-output mappings using highly distributed processing and memory [10,11,12]. Their numerous simple processing elements with modifiable weighted links permit a high degree of parallelism. A typical neural network has fixed topology. It learns by modifying weighted links between nodes. A new class of connectionist architectures has been proposed called ASOCS (Adaptive Self-Organizing Concurrent Systems) [4,5]. ASOCS models support efficient computation through self-organized learning and parallel execution. Learning is done through the incremental presentation of rules and/or examples. ASOCS models learn by modifying their topology. Data types include Boolean and multi-state variables; recent models support analog variables. The model incorporates rules into an adaptive logic network in a parallel and self organizing fashion. In processing mode, ASOCS supports fully parallel execution on actual inputs according to the learned rules. The adaptive logic network acts as a parallel hardware circuit during execution, mapping n input boolean vectors into m output boolean vectors, in a combinatoric fashion. The overall philosophy of ASOCS follows the high level goals of current neural network models. However, the mechanisms of learning and execution vary significantly. The ASOCS logic network is topologically dynamic with the network growing to efficiently fit the specific application. Current ASOCS models are based on digital nodes. ASOCS also supports use of symbolic and heuristic learning mechanisms, thus combining the parallelism and distributed nature of connectionist computing with the potential power of AI symbolic learning. A proof of concept ASOCS chip has been developed [2]. ",
"neighbors": [
297,
814,
1041,
1080,
1190,
1222
],
"mask": "Validation"
},
{
"node_id": 1322,
"label": 6,
"text": "Title: Theory and Applications of Agnostic PAC-Learning with Small Decision Trees \nAbstract: We exhibit a theoretically founded algorithm T2 for agnostic PAC-learning of decision trees of at most 2 levels, whose computation time is almost linear in the size of the training set. We evaluate the performance of this learning algorithm T2 on 15 common real-world datasets, and show that for most of these datasets T2 provides simple decision trees with little or no loss in predictive power (compared with C4.5). In fact, for datasets with continuous attributes its error rate tends to be lower than that of C4.5. To the best of our knowledge this is the first time that a PAC-learning algorithm is shown to be applicable to real-world classification problems. Since one can prove that T2 is an agnostic PAC-learning algorithm, T2 is guaranteed to produce close to optimal 2-level decision trees from sufficiently large training sets for any (!) distribution of data. In this regard T2 differs strongly from all other learning algorithms that are considered in applied machine learning, for which no guarantee can be given about their performance on new datasets. We also demonstrate that this algorithm T2 can be used as a diagnostic tool for the investigation of the expressive limits of 2-level decision trees. Finally, T2, in combination with new bounds on the VC-dimension of decision trees of bounded depth that we derive, provides us now for the first time with the tools necessary for comparing learning curves of decision trees for real-world datasets with the theoretical estimates of PAC learning theory. ",
"neighbors": [
323,
1020,
1027,
1622,
2539
],
"mask": "Train"
},
{
"node_id": 1323,
"label": 2,
"text": "Title: On the Distribution of Performance from Multiple Neural Network Trials, On the Distribution of Performance\nAbstract: Andrew D. Back was with the Department of Electrical and Computer Engineering, University of Queensland. St. Lucia, Australia. He is now with the Brain Information Processing Group, Frontier Research Program, RIKEN, The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako-shi, Saitama 351-01, Japan Abstract The performance of neural network simulations is often reported in terms of the mean and standard deviation of a number of simulations performed with different starting conditions. However, in many cases, the distribution of the individual results does not approximate a Gaussian distribution, may not be symmetric, and may be multimodal. We present the distribution of results for practical problems and show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task which we consider, we find that the distribution of performance is skewed towards better performance for smoother target functions and skewed towards worse performance ",
"neighbors": [
1062,
1145,
1149,
1150,
1195
],
"mask": "Train"
},
{
"node_id": 1324,
"label": 3,
"text": "Title: [6] D. Geiger. Graphoids: a qualitative framework for probabilistic inference. An introduction to algorithms for\nAbstract: Andrew D. Back was with the Department of Electrical and Computer Engineering, University of Queensland. St. Lucia, Australia. He is now with the Brain Information Processing Group, Frontier Research Program, RIKEN, The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako-shi, Saitama 351-01, Japan Abstract The performance of neural network simulations is often reported in terms of the mean and standard deviation of a number of simulations performed with different starting conditions. However, in many cases, the distribution of the individual results does not approximate a Gaussian distribution, may not be symmetric, and may be multimodal. We present the distribution of results for practical problems and show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task which we consider, we find that the distribution of performance is skewed towards better performance for smoother target functions and skewed towards worse performance ",
"neighbors": [
260,
1543,
1747,
2076
],
"mask": "Validation"
},
{
"node_id": 1325,
"label": 1,
"text": "Title: Environmental Effects on Minimal Behaviors in the Minimat World \nAbstract: The structure of an environment affects the behaviors of the organisms that have evolved in it. How is that structure to be described, and how can its behavioral consequences be explained and predicted? We aim to establish initial answers to these questions by simulating the evolution of very simple organisms in simple environments with different structures. Our artificial creatures, called \"minimats,\" have neither sensors nor memory and behave solely by picking amongst the actions of moving, eating, reproducing, and sitting, according to an inherited probability distribution. Our simulated environments contain only food (and multiple minimats) and are structured in terms of their spatial and temporal food density and the patchiness with which the food appears. Changes in these environmental parameters affect the evolved behaviors of minimats in different ways, and all three parameters are of importance in describing the minimat world. One of the most useful behavioral strategies that evolves is \"looping\" movement, which allows minimats-despite their lack of internal state-to match their behavior to the temporal (and spatial) structure of their environment. Ultimately we find that minimats construct their own environments through their individual behaviors, making the study of the impact of global environment structure on individual behavior much more complex. ",
"neighbors": [
219,
1175,
2170,
2309
],
"mask": "Train"
},
{
"node_id": 1326,
"label": 3,
"text": "Title: Causal diagrams for empirical research \nAbstract: The primary aim of this paper is to show how graphical models can be used as a mathematical language for integrating statistical and subject-matter information. In particular, the paper develops a principled, nonparametric framework for causal inference, in which diagrams are queried to determine if the assumptions available are sufficient for identifying causal effects from nonexperimental data. If so the diagrams can be queried to produce mathematical expressions for causal effects in terms of observed distributions; otherwise, the diagrams can be queried to suggest additional observations or auxiliary experiments from which the desired inferences can be obtained. Key words: Causal inference, graph models, structural equations, treatment effect. ",
"neighbors": [
105,
248,
419,
776,
971,
1602,
1747,
2144,
2161,
2434
],
"mask": "Train"
},
{
"node_id": 1327,
"label": 5,
"text": "Title: On Biases in Estimating Multi-Valued Attributes \nAbstract: We analyse the biases of eleven measures for estimating the quality of the multi-valued attributes. The values of information gain, J-measure, gini-index, and relevance tend to linearly increase with the number of values of an attribute. The values of gain-ratio, distance measure, Relief , and the weight of evidence decrease for informative attributes and increase for irrelevant attributes. The bias of the statistic tests based on the chi-square distribution is similar but these functions are not able to discriminate among the attributes of different quality. We also introduce a new function based on the MDL principle whose value slightly decreases with the increasing number of attribute's values.",
"neighbors": [
638,
1165,
1569
],
"mask": "Train"
},
{
"node_id": 1328,
"label": 2,
"text": "Title: A Weighted Nearest Neighbor Algorithm for Learning with Symbolic Features \nAbstract: In the past, nearest neighbor algorithms for learning from examples have worked best in domains in which all features had numeric values. In such domains, the examples can be treated as points and distance metrics can use standard definitions. In symbolic domains, a more sophisticated treatment of the feature space is required. We introduce a nearest neighbor algorithm for learning in domains with symbolic features. Our algorithm calculates distance tables that allow it to produce real-valued distances between instances, and attaches weights to the instances to further modify the structure of feature space. We show that this technique produces excellent classification accuracy on three problems that have been studied by machine learning researchers: predicting protein secondary structure, identifying DNA promoter sequences, and pronouncing English text. Direct experimental comparisons with the other learning algorithms show that our nearest neighbor algorithm is comparable or superior in all three domains. In addition, our algorithm has advantages in training speed, simplicity, and perspicuity. We conclude that experimental evidence favors the use and continued development of nearest neighbor algorithms for domains such as the ones studied here. ",
"neighbors": [
783,
785,
927,
947,
1020,
1031,
1101,
1107,
1109,
1111,
1155,
1173,
1412,
1423,
1513,
1568,
1584,
1644
],
"mask": "Train"
},
{
"node_id": 1329,
"label": 6,
"text": "Title: Supervised and Unsupervised Discretization of Continuous Features \nAbstract: Many supervised machine learning algorithms require a discrete feature space. In this paper, we review previous work on continuous feature discretization, identify defining characteristics of the methods, and conduct an empirical evaluation of several methods. We compare binning, an unsupervised discretization method, to entropy-based and purity-based methods, which are supervised algorithms. We found that the performance of the Naive-Bayes algorithm significantly improved when features were discretized using an entropy-based method. In fact, over the 16 tested datasets, the discretized version of Naive-Bayes slightly outperformed C4.5 on average. We also show that in some cases, the performance of the C4.5 induction algorithm significantly improved if features were discretized in advance; in our experiments, the performance never significantly degraded, an interesting phenomenon considering the fact that C4.5 is capable of locally discretiz ing features.",
"neighbors": [
1020,
1049,
1071,
1986,
2127
],
"mask": "Test"
},
{
"node_id": 1330,
"label": 1,
"text": "Title: Evolving Cellular Automata with Genetic Algorithms: A Review of Recent Work \nAbstract: We review recent work done by our group on applying genetic algorithms (GAs) to the design of cellular automata (CAs) that can perform computations requiring global coordination. A GA was used to evolve CAs for two computational tasks: density classification and synchronization. In both cases, the GA discovered rules that gave rise to sophisticated emergent computational strategies. These strategies can be analyzed using a \"computational mechanics\" framework in which \"particles\" carry information and interactions between particles effects information processing. This framework can also be used to explain the process by which the strategies were designed by the GA. The work described here is a first step in employing GAs to engineer useful emergent computation in decentralized multi-processor systems. It is also a first step in understanding how an evolutionary process can produce complex systems with sophisticated collective computational abilities. ",
"neighbors": [
793,
1167
],
"mask": "Train"
},
{
"node_id": 1331,
"label": 1,
"text": "Title: Mechanisms of Emergent Computation in Cellular Automata \nAbstract: We introduce a class of embedded-particle models for describing the emergent computational strategies observed in cellular automata (CAs) that were evolved for performing certain computational tasks. The models are evaluated by comparing their estimated performances with the actual performances of the CAs they model. The results show, via a close quantitative agreement, that the embedded-particle framework captures the main information processing mechanisms of the emergent computation that arise in these evolved CAs.",
"neighbors": [
1167
],
"mask": "Test"
},
{
"node_id": 1332,
"label": 1,
"text": "Title: Statistical Dynamics of the Royal Road Genetic Algorithm \nAbstract: Metastability is a common phenomenon. Many evolutionary processes, both natural and artificial, alternate between periods of stasis and brief periods of rapid change in their behavior. In this paper an analytical model for the dynamics of a mutation-only genetic algorithm (GA) is introduced that identifies a new and general mechanism causing metastability in evolutionary dynamics. The GA's population dynamics is described in terms of flows in the space of fitness distributions. The trajectories through fitness distribution space are derived in closed form in the limit of infinite populations. We then show how finite populations induce metastability, even in regions where fitness does not exhibit a local optimum. In particular, the model predicts the occurrence of \"fitness epochs\"| periods of stasis in population fitness distributions|at finite population size and identifies the locations of these fitness epochs with the flow's hyperbolic fixed points. This enables exact predictions of the metastable fitness distributions during the fitness epochs, as well as giving insight into the nature of the periods of stasis and the innovations between them. All these results are obtained as closed-form expressions in terms of the GA's parameters. An analysis of the Jacobian matrices in the neighborhood of an epoch's metastable fitness distribution allows for the calculation of its stable and unstable manifold dimensions and so reveals the state space's topological structure. More general quantitative features of the dynamics|fitness fluctuation amplitudes, epoch stability, and speed of the innovations|are also determined from the Jacobian eigenvalues. The analysis shows how quantitative predictions for a range of dynamical behaviors, that are specific to the finite population dynamics, can be derived from the solution of the infinite population dynamics. The theoretical predictions are shown to agree very well with statistics from GA simulations. We also discuss the connections of our results with those from population genetics and molecular evolution theory. ",
"neighbors": [
1167
],
"mask": "Train"
},
{
"node_id": 1333,
"label": 1,
"text": "Title: Using Genetic Algorithms for Supervised Concept Learning \nAbstract: Genetic Algorithms (GAs) have traditionally been used for non-symbolic learning tasks. In this chapter we consider the application of a GA to a symbolic learning task, supervised concept learning from examples. A GA concept learner (GABL) is implemented that learns a concept from a set of positive and negative examples. GABL is run in a batch-incremental mode to facilitate comparison with an incremental concept learner, ID5R. Preliminary results support that, despite minimal system bias, GABL is an effective concept learner and is quite competitive with ID5R as the target concept increases in complexity. ",
"neighbors": [
163,
578,
793,
1136,
1207,
1224,
1225,
1369,
1467,
1514,
1708
],
"mask": "Test"
},
{
"node_id": 1334,
"label": 1,
"text": "Title: THE OPTIONS DESIGN EXPLORATION SYSTEM Reference Manual and User Guide Version B2.1 \nAbstract: Genetic Algorithms (GAs) have traditionally been used for non-symbolic learning tasks. In this chapter we consider the application of a GA to a symbolic learning task, supervised concept learning from examples. A GA concept learner (GABL) is implemented that learns a concept from a set of positive and negative examples. GABL is run in a batch-incremental mode to facilitate comparison with an incremental concept learner, ID5R. Preliminary results support that, despite minimal system bias, GABL is an effective concept learner and is quite competitive with ID5R as the target concept increases in complexity. ",
"neighbors": [
163,
793,
1130,
1696
],
"mask": "Train"
},
{
"node_id": 1335,
"label": 3,
"text": "Title: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection \nAbstract: We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report on a large-scale experiment|over half a million runs of C4.5 and a Naive-Bayes algorithm|to estimate the effects of different parameters on these algorithms on real-world datasets. For cross-validation, we vary the number of folds and whether the folds are stratified or not; for bootstrap, we vary the number of bootstrap samples. Our results indicate that for real-word datasets similar to ours, the best method to use for model selection is ten-fold stratified cross validation, even if computation power allows using more folds. ",
"neighbors": [
885,
944,
1024,
1032,
1223,
1267,
1270,
1337,
1339,
1478,
1512,
1607
],
"mask": "Validation"
},
{
"node_id": 1336,
"label": 3,
"text": "Title: Scaling Up the Accuracy of Naive-Bayes Classifiers: a Decision-Tree Hybrid \nAbstract: Naive-Bayes induction algorithms were previously shown to be surprisingly accurate on many classification tasks even when the conditional independence assumption on which they are based is violated. However, most studies were done on small databases. We show that in some larger databases, the accuracy of Naive-Bayes does not scale up as well as decision trees. We then propose a new algorithm, NBTree, which induces a hybrid of decision-tree classifiers and Naive-Bayes classifiers: the decision-tree nodes contain uni-variate splits as regular decision-trees, but the leaves contain Naive-Bayesian classifiers. The approach retains the interpretability of Naive-Bayes and decision trees, while resulting in classifiers that frequently outperform both constituents, especially in the larger databases tested. ",
"neighbors": [
1027,
1478
],
"mask": "Validation"
},
{
"node_id": 1337,
"label": 6,
"text": "Title: MLC A Machine Learning Library in C \nAbstract: We present MLC ++ , a library of C ++ classes and tools for supervised Machine Learning. While MLC ++ provides general learning algorithms that can be used by end users, the main objective is to provide researchers and experts with a wide variety of tools that can accelerate algorithm development, increase software reliability, provide comparison tools, and display information visually. More than just a collection of existing algorithms, MLC ++ is an attempt to extract commonalities of algorithms and decompose them for a unified view that is simple, coherent, and extensible. In this paper we discuss the problems MLC ++ aims to solve, the design of MLC ++ , and the current functionality. ",
"neighbors": [
944,
1020,
1335,
2300,
2343
],
"mask": "Test"
},
{
"node_id": 1338,
"label": 3,
"text": "Title: Computing Nonparametric Hierarchical Models \nAbstract: Bayesian models involving Dirichlet process mixtures are at the heart of the modern nonparametric Bayesian movement. Much of the rapid development of these models in the last decade has been a direct result of advances in simulation-based computational methods. Some of the very early work in this area, circa 1988-1991, focused on the use of such nonparametric ideas and models in applications of otherwise standard hierarchical models. This chapter provides some historical review and perspective on these developments, with a prime focus on the use and integration of such nonparametric ideas in hierarchical models. We illustrate the ease with which the strict parametric assumptions common to most standard Bayesian hierarchical models can be relaxed to incorporate uncertainties about functional forms using Dirichlet process components, partly enabled by the approach to computation using MCMC methods. The resulting methology is illustrated with two examples taken from an unpublished 1992 report on the topic.",
"neighbors": [
784,
855,
917,
1015,
1654
],
"mask": "Train"
},
{
"node_id": 1339,
"label": 6,
"text": "Title: An Analysis of Bayesian Classifiers (1988), involves the formulation of average-case models for specific algorithms\nAbstract: In this paper we present an average-case analysis of the Bayesian classifier, a simple induction algorithm that fares remarkably well on many learning tasks. Our analysis assumes a monotone conjunctive target concept, and independent, noise-free Boolean attributes. We calculate the probability that the algorithm will induce an arbitrary pair of concept descriptions and then use this to compute the probability of correct classification over the instance space. The analysis takes into account the number of training instances, the number of attributes, the distribution of these attributes, and the level of class noise. We also explore the behavioral implications of the analysis by presenting predicted learning curves for artificial domains, and give experimental results on these domains as a check on our reasoning. One goal of research in machine learning is to discover principles that relate algorithms and domain characteristics to behavior. To this end, many researchers have carried out systematic experimentation with natural and artificial domains in search of empirical regularities (e.g., Kibler & Langley, 1988). Others have focused on theoretical analyses, often within the paradigm of probably approximately correct learning (e.g., Haus-sler, 1990). However, most experimental studies are based only on informal analyses of the learning task, whereas most formal analyses address the worst case, and thus bear little relation to empirical results. ber of attributes, and the class and attribute frequencies, they obtain predictions about the behavior of induction algorithms and used experiments to check their analyses. 1 However, their research does not focus on algorithms typically used by the experimental and practical sides of machine learning, and it is important that average-case analyses be extended to such methods. Recently, there has been growing interest in probabilistic approaches to inductive learning. For example, Fisher (1987) has described Cobweb, an incremental algorithm for conceptual clustering that draws heavily on Bayesian ideas, and the literature reports a number of systems that build on this work (e.g., Allen & Lang-ley, 1990; Iba & Gennari, 1991; Thompson & Langley, 1991). Cheeseman et al. (1988) have outlined Auto-Class, a nonincremental system that uses Bayesian methods to cluster instances into groups, and other researchers have focused on the induction of Bayesian inference networks (e.g., Cooper & Kerskovits, 1991). These recent Bayesian learning algorithms are complex and not easily amenable to analysis, but they share a common ancestor that is simpler and more tractable. This supervised algorithm, which we refer to simply as a Bayesian classifier, comes originally from work in pattern recognition (Duda & Hart, 1973). The method stores a probabilistic summary for each class; this summary contains the conditional probability of each attribute value given the class, as well as the probability (or base rate) of the class. This data structure approximates the representational power of a perceptron; it describes a single decision boundary through the instance space. When the algorithm encounters a new instance, it updates the probabilities stored with the specified class. Neither the order of training instances nor the occurrence of classification errors have any effect on this process. 
When given a test instance, the classifier uses an evaluation function (which we describe in detail later) to rank the alter ",
"neighbors": [
434,
1111,
1335,
1570,
1678,
2443,
2677
],
"mask": "Train"
},
{
"node_id": 1340,
"label": 2,
"text": "Title: ADAPTIVE REGULARIZATION \nAbstract: Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work we provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient descent in the estimated generalization error with respect to the regularization parameters. The scheme is implemented in our Designer Net framework for network training and pruning, i.e., is based on the diagonal Hessian approximation. The scheme does not require essential computational overhead in addition to what is needed for training and pruning. The viability of the approach is demonstrated in an experiment concerning prediction of the chaotic Mackey-Glass series. We find that the optimized weight decays are relatively large for densely connected networks in the initial pruning phase, while they decrease as pruning proceeds. ",
"neighbors": [
157,
427,
1075
],
"mask": "Validation"
},
{
"node_id": 1341,
"label": 2,
"text": "Title: Growing Layers of Perceptrons: Introducing the Extentron Algorithm \nAbstract: vations of perceptrons: (1) when the perceptron learning algorithm cycles among hyperplanes, the hyperplanes may be compared to select one that gives a best split of the examples, and (2) it is always possible for the perceptron to build a hyper- plane that separates at least one example from all the rest. We describe the Extentron which grows multi-layer networks capable of distinguishing non- linearly-separable data using the simple perceptron rule for linear threshold units. The resulting algorithm is simple, very fast, scales well to large prob - lems, retains the convergence properties of the perceptron, and can be completely specified using only two parameters. Results are presented comparing the Extentron to other neural network paradigms and to symbolic learning systems. ",
"neighbors": [
812,
1044
],
"mask": "Train"
},
{
"node_id": 1342,
"label": 2,
"text": "Title: Centering Neural Network Gradient Factors \nAbstract: Technical Report IDSIA-19-97 Abstract. It has long been known that neural networks can learn faster when their input and hidden unit activities are centered about zero; recently we have extended this approach to also encompass the centering of error signals [2]. Here we generalize this notion to all factors involved in the network's gradient, leading us to propose centering the slope of hidden unit activation functions as well. Slope centering removes the linear component of backpropagated error; this improves credit assignment in networks with shortcut connections. Benchmark results show that this can speed up learning significantly without adversely affecting the trained network's generalization ability. ",
"neighbors": [
359,
808,
1300,
2454
],
"mask": "Validation"
},
{
"node_id": 1343,
"label": 6,
"text": "Title: Randomly Fallible Teachers: Learning Monotone DNF with an Incomplete Membership Oracle \nAbstract: We introduce a new fault-tolerant model of algorithmic learning using an equivalence oracle and an incomplete membership oracle, in which the answers to a random subset of the learner's membership queries may be missing. We demonstrate that, with high probability, it is still possible to learn monotone DNF formulas in polynomial time, provided that the fraction of missing answers is bounded by some constant less than one. Even when half the membership queries are expected to yield no information, our algorithm will exactly identify m-term, n-variable monotone DNF formulas with an expected O(mn 2 ) queries. The same task has been shown to require exponential time using equivalence queries alone. We extend the algorithm to handle some one-sided errors, and discuss several other possible error models. It is hoped that this work may lead to a better understanding of the power of membership queries and the effects of faulty teachers on query models of concept learning. ",
"neighbors": [
672,
1003,
1004,
1363,
1364,
1456
],
"mask": "Train"
},
{
"node_id": 1344,
"label": 0,
"text": "Title: Discovery of Physical Principles from Design Experiences \nAbstract: One method for making analogies is to access and instantiate abstract domain principles, and one method for acquiring knowledge of abstract principles is to discover them from experience. We view generalization over experiences in the absence of any prior knowledge of the target principle as the task of hypothesis formation, a subtask of discovery. Also, we view the use of the hypothesized principles for analogical design as the task of hypothesis testing, another subtask of discovery. In this paper, we focus on discovery of physical principles by generalization over design experiences in the domain of physical devices. Some important issues in generalization from experiences are what to generalize from an experience, how far to generalize, and what methods to use. We represent a reasoner's comprehension of specific designs in the form of structure-behavior-function (SBF) models. An SBF model provides a functional and causal explanation of the working of a device. We represent domain principles as device-independent behavior-function (BF) models. We show that (i) the function of a device determines what to generalize from its SBF model, (ii) the SBF model itself suggests how far to generalize, and (iii) the typology of functions indicates what method to use. ",
"neighbors": [
1046,
1047,
1121,
1138,
1345
],
"mask": "Validation"
},
{
"node_id": 1345,
"label": 0,
"text": "Title: Use of Mental Models for Constraining Index Learning in Experience-Based Design \nAbstract: The power of the case-based method comes from the ability to retrieve the \"right\" case when a new problem is specified. This implies that learning the \"right\" indices to a case before storing it for potential reuse is crucial for the success of the method. A hierarchical organization of the case memory raises two distinct but related issues in index learning: learning the indexing vocabulary, and learning the right level of generalization. In this paper we show how the use of structure-behavior-function (SBF) models constrains index learning in the context of experience-based design of physical devices. The SBF model of a design provides the functional and causal explanation of how the structure of the design delivers its function. We describe how the SBF model of a design, together with a specification of the task for which the design case might be reused, provides the vocabulary for indexing the design case in memory. We also discuss how the prior design experiences stored in case-memory help to determine the level of index generalization. The KRITIK2 system implements and evaluates the model-based method for learning indices to design cases.",
"neighbors": [
1046,
1047,
1344,
1640
],
"mask": "Test"
},
{
"node_id": 1346,
"label": 2,
"text": "Title: References Linear Controller Design, Limits of Performance, \"The parallel projection operators of a nonlinear feedback\nAbstract: 13] Yang, Y., H.J. Sussmann, and E.D. Sontag, \"Stabilization of linear systems with bounded controls,\" in Proc. Nonlinear Control Systems Design Symp., Bordeaux, June 1992 (M. Fliess, Ed.), IFAC Publications, pp. 15-20. Journal version to appear in IEEE Trans. Autom. Control . ",
"neighbors": [
1272,
1281,
1451
],
"mask": "Test"
},
{
"node_id": 1347,
"label": 3,
"text": "Title: Markov Chain Monte Carlo Model Determination for Hierarchical and Graphical Log-linear Models \nAbstract: The Bayesian approach to comparing models involves calculating the posterior probability of each plausible model. For high-dimensional contingency tables, the set of plausible models is very large. We focus attention on reversible jump Markov chain Monte Carlo (Green, 1995) and develop strategies for calculating posterior probabilities of hierarchical, graphical or decomposable log-linear models. Even for tables of moderate size, these sets of models may be very large. The choice of suitable prior distributions for model parameters is also discussed in detail, and two examples are presented. For the first example, a 2 fi 3 fi 4 table, the model probabilities calculated using our reversible jump approach are compared with model probabilities calculated exactly or by using an alternative approximation. The second example is a 2 6 contingency table for which exact methods are infeasible, due to the large number of possible models. ",
"neighbors": [
84,
1147,
1240,
1241
],
"mask": "Train"
},
{
"node_id": 1348,
"label": 0,
"text": "Title: Learning Indices for Schema Selection \nAbstract: In addition to learning new knowledge, a system must be able to learn when the knowledge is likely to be applicable. An index is a piece of information which, when identified in a given situation, triggers the relevant piece of knowledge (or schema) in the system's memory. We discuss the issue of how indices may be learned automatically in the context of a story understanding task, and present a program that can learn new indices for existing explanatory schemas. We discuss two methods using which the system can identify the relevant schema even if the input does not directly match an existing index, and learn a new index to allow it to retrieve this schema more efficiently in the future.",
"neighbors": [
612,
1047,
1535,
1537
],
"mask": "Test"
},
{
"node_id": 1349,
"label": 4,
"text": "Title: Robust performance and adaptation using receding horizon H 1 control of time varying systems. \nAbstract: In this paper we construct suboptimal H 1 controllers which satisfy a new robust performance condition, using the receding horizon technique. A method is described for the synthesis of H 1 controllers online, making use of the exact plant model only on a finite interval extending into the future. Inequalities based on the two Riccati differential equation solution to the finite horizon H 1 problem are derived, and the resulting freedom is exploited to construct H 1 controllers which have a closed loop induced norm less than a prespecified value for all plants within a set, which is described in terms of the future variation of the plant. Dual results, with a possible adaptive interpretation, are also constructed. ",
"neighbors": [
1217
],
"mask": "Train"
},
{
"node_id": 1350,
"label": 0,
"text": "Title: IGLUE An Instance-based Learning System over Lattice Theory \nAbstract: Concept learning is one of the most studied areas in machine learning. A lot of work in this domain deals with decision trees. In this paper, we are concerned with a different kind of technique based on Galois lattices or concept lattices. We present a new semi-lattice based system, IGLUE, that uses the entropy function with a top-down approach to select concepts during the lattice construction. Then IGLUE generates new relevant numerical features by transforming initial boolean features over these concepts. IGLUE uses the new features to redescribe examples. Finally, IGLUE applies the Mahanalobis distance as a similarity measure between examples. Keywords : Multistrategy Learning, Instance-Based Learning, Galois lattice, Feature transformation ",
"neighbors": [
1151
],
"mask": "Test"
},
{
"node_id": 1351,
"label": 1,
"text": "Title: Hybrid Learning Using Genetic Algorithms and Decision Trees for Pattern Classification \nAbstract: This paper introduces a hybrid learning methodology that integrates genetic algorithms (GAs) and decision tree learning (ID3) in order to evolve optimal subsets of discriminatory features for robust pattern classification. A GA is used to search the space of all possible subsets of a large set of candidate discrimination features. For a given feature subset, ID3 is invoked to produce a decision tree. The classification performance of the decision tree on unseen data is used as a measure of fitness for the given feature set, which, in turn, is used by the GA to evolve better feature sets. This GA-ID3 process iterates until a feature subset is found with satisfactory classification performance. Experimental results are presented which illustrate the feasibility of our approach on difficult problems involving recognizing visual concepts in satellite and facial image data. The results also show improved classification performance and reduced description complexity when compared against standard methods for feature selection.",
"neighbors": [
900,
1498
],
"mask": "Validation"
},
{
"node_id": 1352,
"label": 2,
"text": "Title: FONN: Combining First Order Logic with Connectionist Learning \nAbstract: This paper presents a neural network architecture that can manage structured data and refine knowledge bases expressed in a first order logic language. The presented framework is well suited to classification problems in which concept de scriptions depend upon numerical features of the data. In fact, the main goal of the neural architecture is that of refining the numerical part of the knowledge base, without changing its structure. In particular, we discuss a method to translate a set of classification rules into neural computation units. Here, we focus our attention on the translation method and on algorithms to refine network weights on struc tured data. The classification theory to be refined can be manually handcrafted or automatically acquired by a symbolic relational learning system able to deal with numerical features. As a matter of fact, the primary goal is to bring into a neural network architecture the capability of dealing with structured data of unrestricted size, by allowing to dynamically bind the classification rules to different items occur ring in the input data. An extensive experimentation on a challenging artificial case study shows that the network converges quite fastly and generalizes much better than propositional learners on an equivalent task definition. ",
"neighbors": [
611,
1672,
2674
],
"mask": "Test"
},
{
"node_id": 1353,
"label": 1,
"text": "Title: Culling Teaching -1 Culling and Teaching in Neuro-evolution \nAbstract: The evolving population of neural nets contains information not only in terms of genes, but also in the collection of behaviors of the population members. Such information can be thought of as a kind of culture of the population. Two ways of exploiting that culture are explored in this paper: (1) Culling overlarge litters: Generate a large number of offspring with different crossovers, quickly evaluate them by comparing their performance to the population, and throw away those that appear poor. (2) Teaching: Use backpropagation to train offspring toward the performance of the population. Both techniques result in faster, more effective neuro-evolution, and they can be effectively combined, as is demonstrated on the inverted pendulum problem. Additional methods of cultural exploitation are possible and will be studied in future work. These results suggest that cultural exploitation is a powerful idea that allows leveraging several aspects of the genetic algorithm.",
"neighbors": [
294,
934,
2302,
2317
],
"mask": "Test"
},
{
"node_id": 1354,
"label": 0,
"text": "Title: The Structure-Mapping Engine: Algorithm and Examples \nAbstract: This paper describes the Structure-Mapping Engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's Structure-mapping theory of analogy, and provides a \"tool kit\" for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the Structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O (N 2 ). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work. This paper appeared in Artificial Intelligence, 41, 1989, pp 1-63. For more information, please contact forbus@ils.nwu.edu ",
"neighbors": [
75,
313,
479,
806,
911,
992,
994,
1001,
1039,
1040,
1047,
1089,
1123,
1176,
1188,
1304,
1317,
1420,
1426,
1465,
1674,
1680,
1695
],
"mask": "Train"
},
{
"node_id": 1355,
"label": 0,
"text": "Title: Modeling Invention by Analogy in ACT-R \nAbstract: We investigate some aspects of cognition involved in invention, more precisely in the invention of the telephone by Alexander Graham Bell. We propose the use of the Structure-Behavior-Function (SBF) language for the representation of invention knowledge; we claim that because SBF has been shown to support a wide range of reasoning about physical devices, it constitutes a plausible account of how an inventor might represent knowledge of an invention. We further propose the use of the ACT-R architecture for the implementation of this model. ACT-R has been shown to very precisely model a wide range of human cognition. We draw upon the architecture for execution of productions and matching of declarative knowledge through spreading activation. Thus we present a model which combines the well-established cognitive validity of ACT-R with the powerful, specialized model-based reasoning methods facilitated by SBF. ",
"neighbors": [
1047,
1148,
1640,
1648
],
"mask": "Train"
},
{
"node_id": 1356,
"label": 2,
"text": "Title: Constraint Tangent Distance for On-line Character Recognition \nAbstract: In on-line character recognition we can observe two kinds of intra-class variations: small geometric deformations and completely different writing styles. We propose a new approach to deal with these problems by defining an extension of tangent distance [9], well known in off-line character recognition. The system has been implemented with a k-nearest neighbor classifier and a so called diabolo classifier [6] respectively. Both classifiers are invariant under transformations like rotation, scale or slope and can deal with variations in stroke order and writing direction. Results are presented for our digit database with more than 200 writers. ",
"neighbors": [
667,
1430
],
"mask": "Train"
},
{
"node_id": 1357,
"label": 3,
"text": "Title: Decimatable Boltzmann Machines vs. Gibbs Sampling \nAbstract: Exact Boltzmann learning can be done in certain restricted networks by the technique of decimation. We have enlarged the set of dec-imatable Boltzmann machines by introducing a new and more general decimation rule. We have compared solutions of a probability density estimation problem with decimatable Boltzmann machines to the results obtained by Gibbs sampling in unrestricted (non-decimatable) ",
"neighbors": [
1461,
1511
],
"mask": "Train"
},
{
"node_id": 1358,
"label": 6,
"text": "Title: On the Complexity of Function Learning \nAbstract: The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range f0; 1g. Much less is known about the theory of learning functions with a larger range such as IN or IR. In particular relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any \"learning\" model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from IR d into IR. Previously, the class of linear functions from IR d into IR was the only class of functions with multi-dimensional domain that was known to be learnable within the rigorous framework of a formal model for on-line learning. Finally we give a sufficient condition for an arbitrary class F of functions from IR into IR that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F . This allows us to exhibit a number of further nontrivial classes of functions from IR into IR for which there exist efficient learning algorithms. ",
"neighbors": [
453,
591,
1567,
1661
],
"mask": "Train"
},
{
"node_id": 1359,
"label": 2,
"text": "Title: Extracting Comprehensible Concept Representations from Trained Neural Networks \nAbstract: Although they are applicable to a wide array of problems, and have demonstrated good performance on a number of difficult, real-world tasks, neural networks are not usually applied to problems in which comprehensibility of the acquired concepts is important. The concept representations formed by neural networks are hard to understand because they typically involve distributed, nonlinear relationships encoded by a large number of real-valued parameters. To address this limitation, we have been developing algorithms for extracting \"symbolic\" concept representations from trained neural networks. We first discuss why it is important to be able to understand the concept representations formed by neural networks. We then briefly describe our approach and discuss a number of issues pertaining to comprehensibility that have arisen in our work. Finally, we discuss choices that we have made in our research to date, and open research issues that we have not yet addressed. ",
"neighbors": [
1057
],
"mask": "Train"
},
{
"node_id": 1360,
"label": 6,
"text": "Title: Learning From a Consistently Ignorant Teacher \nAbstract: One view of computational learning theory is that of a learner acquiring the knowledge of a teacher. We introduce a formal model of learning capturing the idea that teachers may have gaps in their knowledge. The goal of the learner is still to acquire the knowledge of the teacher, but now the learner must also identify the gaps. This is the notion of learning from a consistently ignorant teacher. We consider the impact of knowledge gaps on learning, for example, monotone DNF and d-dimensional boxes, and show that learning is still possible. Negatively, we show that knowledge gaps make learning conjunctions of Horn clauses as hard as learning DNF. We also present general results describing when known learning algorithms can be used to obtain learning algorithms using a consistently ignorant teacher. ",
"neighbors": [
1095
],
"mask": "Validation"
},
{
"node_id": 1361,
"label": 6,
"text": "Title: An Efficient Method To Estimate Bagging's Generalization Error \nAbstract: In bagging [Bre94a] one uses bootstrap replicates of the training set [Efr79, ET93] to try to improve a learning algorithm's performance. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of m- times, where m is the size of the training set and is the number of replicates. This paper presents several techniques for exploiting the bias-variance decomposition [GBD92, Wol96] to estimate the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm. The best of our estimators exploits stacking [Wol92]. In a set of experiments reported here, it was found to be more accurate than both the alternative cross-validation-based estimator of the bagged algorithm's error and the cross-validation-based estimator of the underlying algorithm's error. This improvement was particularly pronounced for small test sets. This suggests a novel justification for using bagging| im proved estimation of generalization error.",
"neighbors": [
1463
],
"mask": "Train"
},
{
"node_id": 1362,
"label": 1,
"text": "Title: Towards Automatic Discovery of Building Blocks in Genetic Programming \nAbstract: This paper presents an algorithm for the discovery of building blocks in genetic programming (GP) called adaptive representation through learning (ARL). The central idea of ARL is the adaptation of the problem representation, by extending the set of terminals and functions with a set of evolvable subroutines. The set of subroutines extracts common knowledge emerging during the evolutionary process and acquires the necessary structure for solving the problem. ARL supports subroutine creation and deletion. Subroutine creation or discovery is performed automatically based on the differential parent-offspring fitness and block activation. Subroutine deletion relies on a utility measure similar to schema fitness over a window of past generations. The technique described is tested on the problem of controlling an agent in a dynamic and non-deterministic environment. The automatic discovery of subroutines can help scale up the GP technique to complex problems. ",
"neighbors": [
163,
1178,
1184,
2175
],
"mask": "Train"
},
{
"node_id": 1363,
"label": 6,
"text": "Title: Exact Identification of Read-once Formulas Using Fixed Points of Amplification Functions \nAbstract: In this paper we describe a new technique for exactly identifying certain classes of read-once Boolean formulas. The method is based on sampling the input-output behavior of the target formula on a probability distribution that is determined by the fixed point of the formula's amplification function (defined as the probability that a 1 is output by the formula when each input bit is 1 independently with probability p). By performing various statistical tests on easily sampled variants of the fixed-point distribution, we are able to efficiently infer all structural information about any logarithmic-depth formula (with high probability). We apply our results to prove the existence of short universal identification sequences for large classes of formulas. We also describe extensions of our algorithms to handle high rates of noise, and to learn formulas of unbounded depth in Valiant's model with respect to specific distributions. Most of this research was carried out while all three authors were at MIT Laboratory for Computer Science with support provided by ARO Grant DAAL03-86-K-0171, DARPA Contract N00014-89-J-1988, NSF Grant CCR-88914428, and a grant from the Siemens Corporation. R. Schapire received additional support from AFOSR Grant 89-0506 while at Harvard University. S. Goldman is currently supported in part by a G.E. Foundation Junior Faculty Grant and NSF Grant CCR-9110108. ",
"neighbors": [
640,
672,
786,
1343,
1364,
2168,
2475,
2653
],
"mask": "Train"
},
{
"node_id": 1364,
"label": 6,
"text": "Title: Learning k-term DNF Formulas with an Incomplete Membership Oracle \nAbstract: We consider the problem of learning k-term DNF formulas using equivalence queries and incomplete membership queries as defined by Angluin and Slonim. We demonstrate that this model can be applied to non-monotone classes. Namely, we describe a polynomial-time algorithm that exactly identifies a k-term DNF formula with a k-term DNF hypothesis using incomplete membership queries and equivalence queries from the class of DNF formulas. ",
"neighbors": [
1003,
1004,
1343,
1363,
1469,
1705
],
"mask": "Validation"
},
{
"node_id": 1365,
"label": 2,
"text": "Title: Word Perfect Corp. A TRANSFORMATION FOR IMPLEMENTING NEURAL NETWORKS WITH LOCALIST PROPERTIES \nAbstract: Most Artificial Neural Networks (ANNs) have a fixed topology during learning, and typically suffer from a number of shortcomings as a result. Variations of ANNs that use dynamic topologies have shown ability to overcome many of these problems. This paper introduces Location-Independent Transformations (LITs) as a general strategy for implementing feedforward networks that use dynamic topologies. A LIT creates a set of location-independent nodes, where each node computes its part of the network output independent of other nodes, using local information. This type of transformation allows efficient support for adding and deleting nodes dynamically during learning. In particular, this paper presents LITs for the single-layer competitve learning network, and the counterpropagation network, which combines elements of supervised learning with competitive learning. These two networks are localist in the sense that ultimately one node is responsible for each output. LITs for other models are presented in other papers. ",
"neighbors": [
809,
812,
814,
1044
],
"mask": "Validation"
},
{
"node_id": 1366,
"label": 2,
"text": "Title: ``Learning Local Error Bars for Nonlinear Regression.'' Learning Local Error Bars for Nonlinear Regression \nAbstract: We present a new method for obtaining local error bars for nonlinear regression, i.e., estimates of the confidence in predicted values that depend on the input. We approach this problem by applying a maximum-likelihood framework to an assumed distribution of errors. We demonstrate our method first on computer-generated data with locally varying, normally distributed target noise. We then apply it to laser data from the Santa Fe Time Series Competition where the underlying system noise is known quantization error and the error bars give local estimates of model misspecification. In both cases, the method also provides a weighted-regression effect that improves generalization performance. ",
"neighbors": [
1373,
2239,
2373,
2374,
2413,
2414,
2513,
2562
],
"mask": "Train"
},
{
"node_id": 1367,
"label": 0,
"text": "Title: Learning to Refine Indexing by Introspective Reasoning \nAbstract: A significant problem for case-based reasoning (CBR) systems is deciding what features to use in judging case similarity for retrieval. We describe research that addresses the feature selection problem by using introspective reasoning to learn new features for indexing. Our method augments the CBR system with an introspective reasoning component which monitors system performance to detect poor retrievals, identifies features which would lead to retrieving cases requiring less adaptation, and refines the indices to include such features in order to avoid similar future failures. We explore the benefit of introspective reasoning by performing empirical tests on the implemented system. These tests examine the benefits of introspective index refinement and the effects of problem order on case and index learning, and show that introspective learning of new index features improves overall performance across the range of different problem orders.",
"neighbors": [
817
],
"mask": "Train"
},
{
"node_id": 1368,
"label": 0,
"text": "Title: Structure oriented case retrieval \nAbstract: ",
"neighbors": [
454,
991
],
"mask": "Train"
},
{
"node_id": 1369,
"label": 1,
"text": "Title: STRUCTURAL LEARNING OF FUZZY RULES FROM NOISED EXAMPLES \nAbstract: Inductive learning algorithms try to obtain the knowledge of a system from a set of examples. One of the most difficult problems in machine learning consists in getting the structure of this knowledge. We propose an algorithm able to manage with fuzzy information and able to learn the structure of the rules that represent the system. The algorithm gives a reasonable small set of fuzzy rules that represent the original set of examples. ",
"neighbors": [
1333
],
"mask": "Test"
},
{
"node_id": 1370,
"label": 5,
"text": "Title: From Theory Refinement to KB Maintenance: a Position Statement \nAbstract: Since we consider theory refinement (TR) as a possible key concept for a methodologically clear view of knowledge-base maintenance, we try to give a structured overview about the actual state-of-the-art in TR. This overview is arranged along the description of TR as a search problem. We explain the basic approach, show the variety of existing systems and try to give some hints about the direction future research should go. ",
"neighbors": [
136,
1102,
2692
],
"mask": "Train"
},
{
"node_id": 1371,
"label": 1,
"text": "Title: Self-Nonself Discrimination in a Computer \nAbstract: The problem of protecting computer systems can be viewed generally as the problem of learning to distinguish self from other. We describe a method for change detection which is based on the generation of T cells in the immune system. Mathematical analysis reveals computational costs of the system, and preliminary experiments illustrate how the method might be applied to the problem of computer viruses. ",
"neighbors": [
1114
],
"mask": "Train"
},
{
"node_id": 1372,
"label": 3,
"text": "Title: MCMC CONVERGENCE DIAGNOSTIC VIA THE CENTRAL LIMIT THEOREM \nAbstract: Markov Chain Monte Carlo (MCMC) methods, as introduced by Gelfand and Smith (1990), provide a simulation based strategy for statistical inference. The application fields related to these methods, as well as theoretical convergence properties, have been intensively studied in the recent literature. However, many improvements are still expected to provide workable and theoretically well-grounded solutions to the problem of monitoring the convergence of actual outputs from MCMC algorithms (i.e. the convergence assessment problem). In this paper, we introduce and discuss a methodology based on the Central Limit Theorem for Markov chains to assess convergence of MCMC algorithms. Instead of searching for approximate stationarity, we primarily intend to control the precision of estimates of the invariant probability measure, or of integrals of functions with respect to this measure, through confidence regions based on normal approximation. The first proposed control method tests the normality hypothesis for normalized averages of functions of the Markov chain over independent parallel chains. This normality control provides good guarantees that the whole state space has been explored, even in multimodal situations. It can lead to automated stopping rules. A second tool connected with the normality control is based on graphical monitoring of the stabilization of the variance after n iterations near the limiting variance appearing in the CLT. Both methods require no knowledge of the sampler driving the chain. In this paper, we mainly focus on finite state Markov chains, since this setting allows us to derive consistent estimates of both the limiting variance and the variance after n iterations. Heuristic procedures based on Berry-Esseen bounds are investigated. An extension to the continuous case is also proposed. Numerical simulations illustrating the performance of these methods are given for several examples: a finite chain with multimodal invariant probability, a finite state random walk for which the theoretical rate of convergence to stationarity is known, and a continuous state chain with multimodal invariant probability issued from a Gibbs sampler. ",
"neighbors": [
352,
896,
904
],
"mask": "Train"
},
{
"node_id": 1373,
"label": 2,
"text": "Title: Direct Multi-Step Time Series Prediction Using TD() \nAbstract: This paper explores the application of Temporal Difference (TD) learning (Sutton, 1988) to forecasting the behavior of dynamical systems with real-valued outputs (as opposed to game-like situations). The performance of TD learning in comparison to standard supervised learning depends on the amount of noise present in the data. In this paper, we use a deterministic chaotic time series from a low-noise laser. For the task of direct five-step ahead predictions, our experiments show that standard supervised learning is better than TD learning. The TD algorithm can be viewed as linking adjacent predictions. A similar effect can be obtained by sharing the internal representation in the network. We thus compare two architectures for both paradigms: the first architecture (separate hidden units) consists of individual networks for each of the five direct multi-step prediction tasks, the second (shared hidden units) has a single (larger) hidden layer that finds a representation from which all five predictions for the next five steps are generated. For this data set we do not find any significant difference between the two architectures. fl http://www.cs.colorado.edu/~andreas/Home.html. This paper is available as ftp://ftp.cs.colorado.edu/pub/Time-Series/MyPapers/kazlas.weigend nips7.ps.Z ",
"neighbors": [
565,
1366,
1718
],
"mask": "Train"
},
{
"node_id": 1374,
"label": 2,
"text": "Title: A simple algorithm that discovers efficient perceptual codes \nAbstract: We describe the \"wake-sleep\" algorithm that allows a multilayer, unsupervised, neural network to build a hierarchy of representations of sensory input. The network has bottom-up \"recognition\" connections that are used to convert sensory input into underlying representations. Unlike most artificial neural networks, it also has top-down \"generative\" connections that can be used to reconstruct the sensory input from the representations. In the \"wake\" phase of the learning algorithm, the network is driven by the bottom-up recognition connections and the top-down generative connections are trained to be better at reconstructing the sensory input from the representation chosen by the recognition process. In the \"sleep\" phase, the network is driven top-down by the generative connections to produce a fantasized representation and a fantasized sensory input. The recognition connections are then trained to be better at recovering the fantasized representation from the fantasized sensory input. In both phases, the synaptic learning rule is simple and local. The combined effect of the two phases is to create representations of the sensory input that are efficient in the following sense: On average, it takes more bits to describe each sensory input vector directly than to first describe the representation of the sensory input chosen by the recognition process and then describe the difference between the sensory input and its reconstruction from the chosen representation.",
"neighbors": [
869,
1548
],
"mask": "Train"
},
{
"node_id": 1375,
"label": 3,
"text": "Title: Priors for Infinite Networks \nAbstract: Technical Report CRG-TR-94-1 Department of Computer Science University of Toronto 10 King's College Road Toronto, Canada M5S 1A4 Abstract Bayesian inference begins with a prior distribution for model parameters that is meant to capture prior beliefs about the relationship being modeled. For multilayer perceptron networks, where the parameters are the connection weights, the prior lacks any direct meaning | what matters is the prior over functions computed by the network that is implied by this prior over weights. In this paper, I show that priors over weights can be defined in such a way that the corresponding priors over functions reach reasonable limits as the number of hidden units in the network goes to infinity. When using such priors, there is thus no need to limit the size of the network in order to avoid \"overfitting\". The infinite network limit also provides insight into the properties of different priors. A Gaussian prior for hidden-to-output weights results in a Gaussian process prior for functions, which can be smooth, Brownian, or fractional Brownian, depending on the hidden unit activation function and the prior for input-to-hidden weights. Quite different effects can be obtained using priors based on non-Gaussian stable distributions. In networks with more than one hidden layer, a combination of Gaussian and non-Gaussian priors appears most interesting. ",
"neighbors": [
157,
560,
1452
],
"mask": "Validation"
},
{
"node_id": 1376,
"label": 4,
"text": "Title: Near-Optimal Performance for Reinforcement Learning in Polynomial Time \nAbstract: We present new algorithms for reinforcement learning and prove that they have polynomial bounds on the resources required to achieve near-optimal return in general Markov decision processes. After observing that the number of actions required to approach the optimal return is lower bounded by the mixing time T of the optimal policy (in the undiscounted case) or by the horizon time T (in the discounted case), we then give algorithms requiring a number of actions and total computation time that are only polynomial in T and the number of states, for both the undiscounted and discounted cases. An interesting aspect of our algorithms is their explicit handling of the Exploration-Exploitation trade-off. These are the first results in the reinforcement learning literature giving algorithms that provably converge to near-optimal performance in polynomial time for general Markov decision processes. ",
"neighbors": [
306,
565,
738,
1546,
1727
],
"mask": "Train"
},
{
"node_id": 1377,
"label": 0,
"text": "Title: The case for cases: a call for purity in case-based reasoning inherently more difficult than\nAbstract: A basic premise of case-based reasoning (CBR) is that it involves reasoning from cases, which are representations of real episodes, rather than from rules, which are facts and if then structures with no stated connection to any real episodes. In fact, most CBR systems do not reason directly from cases | rather they reason from abstractions or simplifications of cases. In this paper, we argue for \"pure\" case-based reasoning, i.e., reasoning from representations that are both concrete and reasonably complete. We claim that working from representations that satisfy these criteria We illustrate our argument with examples from three previous systems, chef, swale, and hypo, as well as from cookie, a CBR system being developed by the first author.",
"neighbors": [
288,
313,
1642
],
"mask": "Train"
},
{
"node_id": 1378,
"label": 4,
"text": "Title: Generalization in Reinforcement Learning: Safely Approximating the Value Function \nAbstract: To appear in: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., Advances in Neural Information Processing Systems 7, MIT Press, Cambridge MA, 1995. A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization.",
"neighbors": [
21,
82,
173,
239,
559,
565,
575,
882,
970,
1440,
1540,
2485
],
"mask": "Train"
},
{
"node_id": 1379,
"label": 1,
"text": "Title: Modeling Simple Genetic Algorithms for Permutation Problems \nAbstract: An exact model of a simple genetic algorithm is developed for permutation based representations. Permutation based representations are used for scheduling problems and combinatorial problems such as the Traveling Salesman Problem. A remapping function is developed to remap the model to all permutations in the search space. The mixing matrices for various permutation based operators are also developed.",
"neighbors": [
163,
1611
],
"mask": "Train"
},
{
"node_id": 1380,
"label": 1,
"text": "Title: Evaluating Evolutionary Algorithms \nAbstract: Test functions are commonly used to evaluate the effectiveness of different search algorithms. However, the results of evaluation are as dependent on the test problems as they are on the algorithms that are the subject of comparison. Unfortunately, developing a test suite for evaluating competing search algorithms is difficult without clearly defined evaluation goals. In this paper we discuss some basic principles that can be used to develop test suites and we examine the role of test suites as they have been used to evaluate evolutionary search algorithms. Current test suites include functions that are easily solved by simple search methods such as greedy hill-climbers. Some test functions also have undesirable characteristics that are exaggerated as the dimensionality of the search space is increased. New methods are examined for constructing functions with different degrees of nonlinearity, where the interactions and the cost of evaluation scale with respect to the dimensionality of the search space.",
"neighbors": [
163,
793,
1113,
1611,
1717
],
"mask": "Train"
},
{
"node_id": 1381,
"label": 3,
"text": "Title: A Context-Sensitive Generalization of ICA \nAbstract: Source separation arises in a surprising number of signal processing applications, from speech recognition to EEG analysis. In the square linear blind source separation problem without time delays, one must find an unmixing matrix which can detangle the result of mixing n unknown independent sources through an unknown n fi n mixing matrix. The recently introduced ICA blind source separation algorithm (Baram and Roth 1994; Bell and Sejnowski 1995) is a powerful and surprisingly simple technique for solving this problem. ICA is all the more remarkable for performing so well despite making absolutely no use of the temporal structure of its input! This paper presents a new algorithm, contextual ICA, which derives from a maximum likelihood density estimation formulation of the problem. cICA can incorporate arbitrarily complex adaptive history-sensitive source models, and thereby make use of the temporal structure of its input. This allows it to separate in a number of situations where standard ICA cannot, including sources with low kurtosis, colored gaussian sources, and sources which have gaussian histograms. Since ICA is a special case of cICA, the MLE derivation provides as a corollary a rigorous derivation of classic ICA. ",
"neighbors": [
570,
576,
1524
],
"mask": "Test"
},
{
"node_id": 1382,
"label": 2,
"text": "Title: AN ADAPTIVE NEURAL NETWORK PARSER \nAbstract: We inv estigate the applicability of an adaptive neural network to problems with time-dependent input by demonstrating that a deterministic parser for natural language inputs of significant syntactic complexity can be developed using recurrent connectionist architectures. The traditional stacking mechanism, known to be necessary for proper treatment of context-free languages in symbolic systems, is absent from the design, having been subsumed by recurrency in the network.",
"neighbors": [
1285,
1313
],
"mask": "Validation"
},
{
"node_id": 1383,
"label": 2,
"text": "Title: Data-defined Problems and Multiversion Neural-net Systems \nAbstract: We inv estigate the applicability of an adaptive neural network to problems with time-dependent input by demonstrating that a deterministic parser for natural language inputs of significant syntactic complexity can be developed using recurrent connectionist architectures. The traditional stacking mechanism, known to be necessary for proper treatment of context-free languages in symbolic systems, is absent from the design, having been subsumed by recurrency in the network.",
"neighbors": [
152,
1384,
1398
],
"mask": "Validation"
},
{
"node_id": 1384,
"label": 2,
"text": "Title: Use of Methodological Diversity to Improve Neural Network Generalisation \nAbstract: Littlewood and Miller [1989] present a statistical framework for dealing with coincident failures in multiversion software systems. They develop a theoretical model that holds the promise of high system reliability through the use of multiple, diverse sets of alternative versions. In this paper we adapt their framework to investigate the feasibility of exploiting the diversity observable in multiple populations of neural networks developed using diverse methodologies. We evaluate the generalisation improvements achieved by a range of methodologically diverse network generation processes. We attempt to order the constituent methodological features with respect to their potential for use in the engineering of useful diversity. We also define and explore the use of relative measures of the diversity between version sets as a guide to the potential for exploiting inter-set diversity. ",
"neighbors": [
1383
],
"mask": "Test"
},
{
"node_id": 1385,
"label": 0,
"text": "Title: Learning Control Knowledge in Models of Expertise ECML'95 Workshop on Knowledge-Level Modelling and Machine Learning \nAbstract: During the development and the life-cycle of knowledge-based systems the requirements on the system and the knowledge in the system will change. One of the types of knowledge affected by changing requirements is control-knowledge, which prescribes the ordering of problem-solving steps. Machine-learning can aid developers of knowledge-based systems in adapting their systems to changing requirements. A number of machine-learning techniques for learning control-knowledge have been applied to problem-solvers (Prodigy-EBL, LEX). In knowledge engineering, the focus has shifted to the construction of knowledge-level models of problem-solving instead of directly constructing a knowledge-based system in a problem-solver. In this paper we describe work in progress on how to apply machine learning techniques to the KADS model of expertise.",
"neighbors": [
1653,
1706
],
"mask": "Train"
},
{
"node_id": 1386,
"label": 6,
"text": "Title: New Evidence Driven State Merging Algorithm \nAbstract: Results of the Abbadingo One DFA Learning Competition Abstract This paper first describes the structure and results of the Abbadingo One DFA Learning Competition. The competition was designed to encourage work on algorithms that scale wellboth to larger DFAs and to sparser training data. We then describe and discuss the winning algorithm of Rodney Price, which orders state merges according to the amount of evidence in their favor. A second winning algorithm, of Hugues and",
"neighbors": [
672,
1715,
2360
],
"mask": "Train"
},
{
"node_id": 1387,
"label": 2,
"text": "Title: Cortical activity flips among quasi stationary states \nAbstract: M. Abeles, H. Bergman and E. Vaadia, School of Medicine and Center for Neural Computation Hebrew University, POB 12272, Jerusalem 91120, Is-rael. E. Seidemann and I. Meilijson, School of Mathematical Sciences, Raymond and Beverly Sackler Faculty of Exact Sciences, and School of Medicine, Tel Aviv University, 69978 Tel Aviv, Israel. I. Gat and N. Tishby, Institute of Computer Science and Center for Neural Computation, Hebrew University, Jerusalem 91904, Israel. ",
"neighbors": [
1239
],
"mask": "Train"
},
{
"node_id": 1388,
"label": 6,
"text": "Title: A Fast, Bottom-Up Decision Tree Pruning Algorithm with Near-Optimal Generalization \nAbstract: In this work, we present a new bottom-up algorithm for decision tree pruning that is very efficient (requiring only a single pass through the given tree), and prove a strong performance guarantee for the generalization error of the resulting pruned tree. We work in the typical setting in which the given tree T may have been derived from the given training sample S, and thus may badly overfit S. In this setting, we give bounds on the amount of additional generalization error that our pruning suffers compared to the optimal pruning of T . More generally, our results show that if there is a pruning of T with small error, and whose size is small compared to jSj, then our algorithm will find a pruning whose error is not much larger. This style of result has been called an index of resolvability result by Barron and Cover in the context of density estimation. A novel feature of our algorithm is its locality | the decision to prune a subtree is based entirely on properties of that subtree and the sample reaching it. To analyze our algorithm, we develop tools of local uniform convergence, a generalization of the standard notion that may prove useful in other settings. ",
"neighbors": [
848,
1025,
1027,
1586
],
"mask": "Train"
},
{
"node_id": 1389,
"label": 2,
"text": "Title: Support Vector Machines: Training and Applications \nAbstract: The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Laboratories [3, 6, 8, 24]. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. The main idea behind the technique is to separate the classes with a surface that maximizes the margin between them. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle [23]. The derivation of Support Vector Machines, its relationship with SRM, and its geometrical insight, are discussed in this paper. Since Structural Risk Minimization is an inductive principle that aims at minimizing a bound on the generalization error of a model, rather than minimizing the Mean Square Error over the data set (as Empirical Risk Minimization methods do), training a SVM to obtain the maximum margin classifier requires a different objective function. This objective function is then optimized by solving a large-scale quadratic programming problem with linear and box constraints. The problem is considered challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory, and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVM's over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of, and also establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVM's, we present preliminary results in Frontal Human Face Detection in images. This application opens many interesting questions and future research opportunities, both in the context of faster and better optimization algorithms, and in the use of SVM's in other pattern classification, recognition, and detection applications. This report describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research is sponsored by MURI grant N00014-95-1-0600; by a grant from ONR/ARPA under contract N00014-92-J-1879 and by the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program). Edgar Osuna was supported by Fundacion Gran Mariscal de Ayacucho and Daimler Benz. Additional support is provided by Daimler-Benz, Eastman Kodak Company, Siemens Corporate Research, Inc. and AT&T. ",
"neighbors": [
821,
1050,
1079,
2707
],
"mask": "Test"
},
{
"node_id": 1390,
"label": 6,
"text": "Title: Learning Finite Automata Using Local Distinguishing Experiments \nAbstract: One of the open problems listed in [ Rivest and Schapire, 1989 ] is whether and how that the copies of L fl in their algorithm can be combined into one for better performance. This paper describes an algorithm called D fl that does that combination. The idea is to represent the states of the learned model using observable symbols as well as hidden symbols that are constructed during learning. These hidden symbols are created to reflect the distinct behaviors of the model states. The distinct behaviors are represented as local distinguishing experiments (LDEs) (not to be confused with global distinguishing sequences), and these LDEs are created when the learner's prediction mismatches the actual observation from the unknown machine. To synchronize the model with the environment, these LDEs can also be concatenated to form a homing sequence. It can be shown that D fl can learn, with probability 1 , a model that is an *-approximation of the unknown machine, in a number of actions polynomial in the size of the environment and ",
"neighbors": [
1491
],
"mask": "Train"
},
{
"node_id": 1391,
"label": 1,
"text": "Title: Models in Evolutionary Ecology and the Validation Problem Caswell for their guidance and support. evolutionary\nAbstract: All models of natural systems represent an abstraction and simplification of that system. Thus all models suffer a validation problem. Should we believe that the results of the model have any bearing on reality? This is a particularly acute problem for Alife models of evolution and ecosystems. The time scale of evolution and the complexity of ecosystems make controlled experiments difficult. If Alife is ever to contribute significantly to biology, we must find methods by which we can build confidence in our models. One alternative to experimental tests of a model is to validate it against previously verified theory. I have applied a series of ecological and evolutionary validation tests to a model of species diversification. Examination of the predator-prey dynamics, trophic cascades, competitive exclusion, adaptation, and the species-area curve in the model has shown that a course grained spatial structure was inadequate to capture the realistic dynamics of an ecosystem. Only when spatial structure was extended to the local patch dynamics did the model begin to behave realistically under a wide range of parameters. Validation of the ecological dynamics of the model provides indirect support for the evolutionary behavior of the species within the ecosystem. Every model is an abstraction and a simplification. The goal of a model is to capture the essence of a system in the real world such that the behavior of the model matches the behavior of the real system. Thus for any model we may ask if it is a valid representation of the real system. Answering this question is the problem of validation. Traditionally we can try to disprove the validity of the model by collecting data from the real system and comparing it to the predictions of the model. In artificial life we rarely have that luxury. Artificial life models tend to be highly abstract and general because the field is striving to discover general properties of life. This makes experimental validation extremely difficult. The time scale of evolution tends to restrict experiments to observation of the fossil record (Benton 1990, for example) or manipulation of organisms with extremely short life-cycles in simplified environments (Krukonis 1996, for example). Similarly, the complexity and size of ecosystems makes ecological experiments cumbersome and difficult to control. An alternative form of validation can be pursued indirectly through reference to ecological and evolutionary theory. Instead of asking if the model matches the experimental data, we can ask if the model matches our understanding of the dynamics of ecology and evolution. Then, to the extent that the theories of ecology and evolution have been validated by experimental observations, we can disprove the validity of a model when it fails to match those theories. What follows is an example of this technique applied to a model designed to examine the factors that impact the origin and maintenance of species diversity. While the purpose of this model is to explore new theoretical ground in biology, the ecological and evolutionary dynamics in the model have been validated against theories of predation, competition, adaptation and island biogeography. Hraber and Milne (1997) looked at genotype diversity under the presence or absence of selection and varying mutation rates in the ECHO model (Holland 1992; 1993). 
Mirroring Bedau et al.'s (1992) results, they found that genotypic diversity was greatest under ",
"neighbors": [
1319
],
"mask": "Validation"
},
{
"node_id": 1392,
"label": 1,
"text": "Title: Adapting Crossover in Evolutionary Algorithms \nAbstract: One of the issues in evolutionary algorithms (EAs) is the relative importance of two search operators: mutation and crossover. Genetic algorithms (GAs) and genetic programming (GP) stress the role of crossover, while evolutionary programming (EP) and evolution strategies (ESs) stress the role of mutation. The existence of many different forms of crossover further complicates the issue. Despite theoretical analysis, it appears difficult to decide a priori which form of crossover to use, or even if crossover should be used at all. One possible solution to this difficulty is to have the EA be self-adaptive, i.e., to have the EA dynamically modify which forms of crossover to use and how often to use them, as it solves a problem. This paper describes an adaptive mechanism for controlling the use of crossover in an EA and explores the behavior of this mechanism in a number of different situations. An improvement to the adaptive mechanism is then presented. Surprisingly this improvement can also be used to enhance performance in a non-adaptive EA. ",
"neighbors": [
1299
],
"mask": "Test"
},
{
"node_id": 1393,
"label": 3,
"text": "Title: Probabilistic Independence Networks for Hidden Markov Probability Models \nAbstract: Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper contains a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach.",
"neighbors": [
905,
976,
1097,
1128,
1397,
1414,
1437,
1502
],
"mask": "Train"
},
{
"node_id": 1394,
"label": 2,
"text": "Title: Vapnik-Chervonenkis entropy of the spherical perceptron \nAbstract: Perceptron learning of randomly labeled patterns is analyzed using a Gibbs distribution on the set of realizable labelings of the patterns. The entropy of this distribution is an extension of the Vapnik-Chervonenkis (VC) entropy, reducing to it exactly in the limit of infinite temperature. The close relationship between the VC and Gardner entropies can be seen within the replica formalism. There has been recent progress towards understanding the relationship between the statistical physics and Vapnik-Chervonenkis (VC) approaches to learning theory[1, 2, 3, 4]. The two approaches can be unified in a statistical mechanics based on the VC entropy. This paper treats the case of learning randomly labeled patterns, or the capacity problem, and extends some of the results of previous work[5, 6] to finite temperature. As will be explained in a companion paper, this extension is important for treating the generalization problem, which occurs in the context of learning patterns labeled by a target rule. Our general framework is illustrated for the simple perceptron sgn(w x), which maps an N -dimensional real-valued input x to a 1-valued output. Given a sample X = (x 1 ; : : : ; x m ) of inputs, the weight vector w determines a labeling L = (l 1 ; : : : ; l m ) of the sample via l i = sgn(w x i ). The weight vector w defines a normal hyperplane that separates the positive from the negative examples. The training error of a labeling L with respect to a reference labeling L 0 is defined by 1 m X 1 l i l 0 and is just the fraction of different labels in the two labelings. We consider the case in which the reference labeling is chosen at random, and address the issue of ",
"neighbors": [
967
],
"mask": "Validation"
},
{
"node_id": 1395,
"label": 1,
"text": "Title: Learning Where To Go without Knowing Where That Is: The Acquisition of a Non-reactive Mobot\nAbstract: In the path-imitation task, one agent traces out a path through a second agent's sensory field. The second agent then has to reproduce that path exactly, i.e. move through the sequence of locations visited by the first agent. This is a non-trivial behaviour whose acquisition might be expected to involve special-purpose (i.e., strongly biased) learning machinery. However, the present paper shows this is not the case. The behaviour can be acquired using a fairly primitive learning regime provided that the agent's environment can be made to pass through a specific sequence of dynamic states.",
"neighbors": [
1541
],
"mask": "Test"
},
{
"node_id": 1396,
"label": 1,
"text": "Title: Evolving a Generalised Behaviour: Artificial Ant Problem Revisited \nAbstract: This research aims to demonstrate that a solution for artificial ant problem [4] is very likely to be non-general and relying on the specific characteristics of the Santa Fe trail. It then presents a consistent method which promotes producing general solutions. Using the concepts of training and testing from machine learning research, the method can be useful in producing general behaviours for simulation environments.",
"neighbors": [
970
],
"mask": "Train"
},
{
"node_id": 1397,
"label": 3,
"text": "Title: Speech Recognition with Dynamic Bayesian Networks \nAbstract: Dynamic Bayesian networks (DBNs) are a useful tool for representing complex stochastic processes. Recent developments in inference and learning in DBNs allow their use in real-world applications. In this paper, we apply DBNs to the problem of speech recognition. The factored state representation enabled by DBNs allows us to explicitly represent long-term articulatory and acoustic context in addition to the phonetic-state information maintained by hidden Markov models (HMMs). Furthermore, it enables us to model the short-term correlations among multiple observation streams within single time-frames. Given a DBN structure capable of representing these long- and short-term correlations, we applied the EM algorithm to learn models with up to 500,000 parameters. The use of structured DBN models decreased the error rate by 12 to 29% on a large-vocabulary isolated-word recognition task, compared to a discrete HMM; it also improved significantly on other published results for the same task. This is the first successful application of DBNs to a large-scale speech recognition problem. Investigation of the learned models indicates that the hidden state variables are strongly correlated with acoustic properties of the speech signal. ",
"neighbors": [
905,
1287,
1393
],
"mask": "Train"
},
{
"node_id": 1398,
"label": 2,
"text": "Title: Self-Organizing Sets of Experts \nAbstract: We describe and evaluate multi-network connectionist systems composed of \"expert\" networks. By preprocessing training data with a competitive learning network, the system automatically organizes the process of decomposition into expert subtasks. Using several different types of challenging problem, we assess this approach | the degree to which the automatically generated experts really are specialists on a predictable subset of the overall task, and a comparison of such decompositions with equivalent single-networks. In addition, we assess the utility of this approach alongside, and in competition to, non-expert multiversion systems. Previously developed measures of `diversity' for such systems are also applied to provide a quantitative assessment of the degree of specialization obtained in an expert-net ensemble. We show that on both types of problem, abstract well-defined and data-defined, the automatic decomposition does produce an effective set of specialist networks which together can support a high level of performance. Curiously, the study does not provide any support for a differential of effectiveness within the two classes of problem: continuous, homogeneous functions and discrete, discontinuous functions.",
"neighbors": [
1383
],
"mask": "Test"
},
{
"node_id": 1399,
"label": 2,
"text": "Title: Parametric regression 1.1 Learning problem model f bw b w in turn is an estimator\nAbstract: Let us present briefly the learning problem we will address in this chapter and the following. The ultimate goal is the modelling of a mapping f : x 7! y from multidimensional input x to output y. The output can be multi-dimensional, but we will mostly address situations where it is a one dimensional real value. Furthermore, we should take into account the fact that we scarcely ever observe the actual true mapping y = f (x). This is due to perturbations such as e.g. observational noise. We will rather have a joint probability p (x; y). We expect this probability to be peaked for values of x and y corresponding to the mapping. We focus on automatic learning by example. A set D = of data sampled from the joint distribution p (x; y) = p (yjx) p (x) is collected. With the help of this set, we try to identify a model of the data, parameterised by a set of 1.2 Learning and optimisation The fit of the model to the system in a given point x is measured using a criterion representing the distance from the model prediction b y to the system, e (y; f w (x)). This is the local risk . The performance of the model is measured by the expected This quantity represents the ability to yield good performance for all the possible situations (i.e. (x; y) pairs) and is thus called generalisation error . The optimal set 1 parameters w: f w : x 7! b y.",
"neighbors": [
427,
1463
],
"mask": "Train"
},
{
"node_id": 1400,
"label": 6,
"text": "Title: Towards Robust Model Selection using Estimation and Approximation Error Bounds \nAbstract: Let us present briefly the learning problem we will address in this chapter and the following. The ultimate goal is the modelling of a mapping f : x 7! y from multidimensional input x to output y. The output can be multi-dimensional, but we will mostly address situations where it is a one dimensional real value. Furthermore, we should take into account the fact that we scarcely ever observe the actual true mapping y = f (x). This is due to perturbations such as e.g. observational noise. We will rather have a joint probability p (x; y). We expect this probability to be peaked for values of x and y corresponding to the mapping. We focus on automatic learning by example. A set D = of data sampled from the joint distribution p (x; y) = p (yjx) p (x) is collected. With the help of this set, we try to identify a model of the data, parameterised by a set of 1.2 Learning and optimisation The fit of the model to the system in a given point x is measured using a criterion representing the distance from the model prediction b y to the system, e (y; f w (x)). This is the local risk . The performance of the model is measured by the expected This quantity represents the ability to yield good performance for all the possible situations (i.e. (x; y) pairs) and is thus called generalisation error . The optimal set 1 parameters w: f w : x 7! b y.",
"neighbors": [
848,
967
],
"mask": "Test"
},
{
"node_id": 1401,
"label": 0,
"text": "Title: Using Case-Based Reasoning as a Reinforcement Learning Framework for Optimization with Changing Criteria \nAbstract: Practical optimization problems such as job-shop scheduling often involve optimization criteria that change over time. Repair-based frameworks have been identified as flexible computational paradigms for difficult combinatorial optimization problems. Since the control problem of repair-based optimization is severe, Reinforcement Learning (RL) techniques can be potentially helpful. However, some of the fundamental assumptions made by traditional RL algorithms are not valid for repair-based optimization. Case-Based Reasoning (CBR) compensates for some of the limitations of traditional RL approaches. In this paper, we present a Case-Based Reasoning RL approach, implemented in the C A B I N S system, for repair-based optimization. We chose job-shop scheduling as the testbed for our approach. Our experimental results show that C A B I N S is able to effectively solve problems with changing optimization criteria which are not known to the system and only exist implicitly in a extensional manner in the case base. ",
"neighbors": [
562,
565,
951,
1554,
2605
],
"mask": "Validation"
},
{
"node_id": 1402,
"label": 2,
"text": "Title: A model f w depending on a set of parameters w is used to estimate\nAbstract: We have introduced earlier the use of regularisation in the learning procedure. It should now be understood that regularisation is most often a necessity to increase the quality of the results. Even when the unregularised solution is acceptable, it is likely that some regularisation will produce an improvement in performance. There does not exist any method giving directly the best value for the regularisation parameter ~, even in the linear case. The topic of this chapter is thus to propose some methods to estimate the best value. The best ~ being the one that leads to the smallest generalisation error, the methods presented and compared here propose estimators of the generalisation error. This estimation can then be used to approximate the best regularisation level. In sections 3.2 to 3.4 we present validation-based techniques. They estimate the generalisation error on the basis of some extra data. In sections 3.6 to 3.9, we deal with algebraic estimates of this error, that do not use any extra data, but rely on a number of assumptions. The contribution of this chapter is to present all these techniques and analyse them on the same ground. We also present some short derivations clarifying the links between different estimators of generalisation error, as well as a comparison between them. During the course of this chapter, the error will be the quadratic difference. For the validation-based methods, it is possible to consider any kind of error without modification of the method. On the other hand, the algebraic estimates are specific to the quadratic cost. Adapting them to another cost function would require to derive new expressions for the estimators. ",
"neighbors": [
847
],
"mask": "Validation"
},
{
"node_id": 1403,
"label": 3,
"text": "Title: DART/HYESS Users Guide recursive covering approach to local learning \nAbstract: ",
"neighbors": [
1112
],
"mask": "Train"
},
{
"node_id": 1404,
"label": 1,
"text": "Title: A Hybrid GP/GA Approach for Co-evolving Controllers and Robot Bodies to Achieve Fitness-Specified Tasks \nAbstract: Evolutionary approaches have been advocated to automate robot design. Some research work has shown the success of evolving controllers for the robots by genetic approaches. As we can observe, however, not only the controller but also the robot body itself can affect the behavior of the robot in a robot system. In this paper, we develop a hybrid GP/GA approach to evolve both controllers and robot bodies to achieve behavior-specified tasks. In order to assess the performance of the developed approach, it is used to evolve a simulated agent, with its own controller and body, to do obstacle avoidance in the simulated environment. Experimental results show the promise of this work. In addition, the importance of co-evolving controllers and robot bodies is analyzed and discussed in this paper. ",
"neighbors": [
163,
219,
755,
1143
],
"mask": "Validation"
},
{
"node_id": 1405,
"label": 2,
"text": "Title: TH presentee par STATISTICAL LEARNING AND REGULARISATION FOR REGRESSION Application to system identification and time\nAbstract: Evolutionary approaches have been advocated to automate robot design. Some research work has shown the success of evolving controllers for the robots by genetic approaches. As we can observe, however, not only the controller but also the robot body itself can affect the behavior of the robot in a robot system. In this paper, we develop a hybrid GP/GA approach to evolve both controllers and robot bodies to achieve behavior-specified tasks. In order to assess the performance of the developed approach, it is used to evolve a simulated agent, with its own controller and body, to do obstacle avoidance in the simulated environment. Experimental results show the promise of this work. In addition, the importance of co-evolving controllers and robot bodies is analyzed and discussed in this paper. ",
"neighbors": [
427,
1463
],
"mask": "Train"
},
{
"node_id": 1406,
"label": 2,
"text": "Title: 82 Lag-space estimation in time-series modelling keep track of cases where the estimation of P\nAbstract: When m = 0 (no delays), we set A 0 (ffi) = f(j; k) ; j 6= kg, such that P m (*jffi) depends only on *. The estimated probabilities above become quite noisy when the number of elements in set A m and B m are small. For this reason, we estimate the standard deviation of P m (*jffi). Notice that this estimate is the empirical average of a binomial variable (either a given couple satisfied the conditions on ffi and *, or it does not). The standard deviation is then estimated easily by: Generally speaking, P m (*jffi) increases with * (laxer output test), and when ffi approaches 0 (stricter input condition). Let us now define by P m (*) the maximum over ffi of P m (*jffi): P m (*) = max ffi>0 P m (*jffi). The dependability index is defined as: P 0 (*) represents how much data passes the continuity test when no input information is available. This dependability index measures how much of the remaining continuity information is associated with involving input i m . This index is then averaged over * with respect to the probability (1 P 0 (*)): m (*) (1 P 0 (*)) d* (4.8) It is clear that m (*), and therefore its average, should be positive quantities. Furthermore, if the system is deterministic, the dependability is zero after a certain number of inputs, so the sum of averages saturates. If the system is also noise-free, they sum up to 1. For any m greater than the embedding dimension: refers to results obtained using this method. 4.6 Statistical variable selection Statistical variable selection (or feature selection) encompasses a number of techniques aimed at choosing a relevant subset of input variables in a regression or a classification problem. As in the rest of this document, we will limit ourselves to considerations related to the regression problem, even though most methods discussed below apply to classification as well. Variable selection can be seen as a part of the data analysis problem: the selection (or discard) of a variable tells us about the relevance of the associated measurement to the modelled system. In a general setting, this is a purely combinatorial problem: given V possible variables, there is 2 V possible subsets (including the empty set and the full set) of these variables. Given a performance measure, such as prediction error, the only optimal scheme is to test all these subset and choose the one that gives the best performance. It is easy to see that such an extensive scheme is only viable when the number of variables is rather low. Identifying 2 V models when we have more than a few variables requires too much computation. A number of techniques have been devised to overcome this combinatorial limit. Some of them use an iterative, locally optimal technique to construct an estimate of the relevant subset in a number of steps. We will refer to them as stepwise selection methods, not to be con fused with stepwise regression, a subset of these methods that we will address below. In forward selection, we start with an empty set of variables. At each step, we select a candidate variable using a selection criteria, check whether this variable should be added to the set, and iterate until a given stop condition is reached. On the contrary, backward elimination methods start with the full set of all input variables. At each step, the least significant variable is selected according to a selection criteria. 
If this variable is irrelevant, it is removed and the process is iterated until a stop condition is reached. It is easy to devise examples where the inclusion of a variable causes a previously included variable to become irrelevant. It thus seems appropriate to consider running a backward elimination each time a new variable is added by forward selection. This combination of both ap proaches is known as stepwise regression in the linear regression con",
"neighbors": [
1463
],
"mask": "Train"
},
{
"node_id": 1407,
"label": 0,
"text": "Title: ABSTRACTION CONSIDERED HARMFUL: LAZY LEARNING OF LANGUAGE PROCESSING \nAbstract: When m = 0 (no delays), we set A 0 (ffi) = f(j; k) ; j 6= kg, such that P m (*jffi) depends only on *. The estimated probabilities above become quite noisy when the number of elements in set A m and B m are small. For this reason, we estimate the standard deviation of P m (*jffi). Notice that this estimate is the empirical average of a binomial variable (either a given couple satisfied the conditions on ffi and *, or it does not). The standard deviation is then estimated easily by: Generally speaking, P m (*jffi) increases with * (laxer output test), and when ffi approaches 0 (stricter input condition). Let us now define by P m (*) the maximum over ffi of P m (*jffi): P m (*) = max ffi>0 P m (*jffi). The dependability index is defined as: P 0 (*) represents how much data passes the continuity test when no input information is available. This dependability index measures how much of the remaining continuity information is associated with involving input i m . This index is then averaged over * with respect to the probability (1 P 0 (*)): m (*) (1 P 0 (*)) d* (4.8) It is clear that m (*), and therefore its average, should be positive quantities. Furthermore, if the system is deterministic, the dependability is zero after a certain number of inputs, so the sum of averages saturates. If the system is also noise-free, they sum up to 1. For any m greater than the embedding dimension: refers to results obtained using this method. 4.6 Statistical variable selection Statistical variable selection (or feature selection) encompasses a number of techniques aimed at choosing a relevant subset of input variables in a regression or a classification problem. As in the rest of this document, we will limit ourselves to considerations related to the regression problem, even though most methods discussed below apply to classification as well. Variable selection can be seen as a part of the data analysis problem: the selection (or discard) of a variable tells us about the relevance of the associated measurement to the modelled system. In a general setting, this is a purely combinatorial problem: given V possible variables, there is 2 V possible subsets (including the empty set and the full set) of these variables. Given a performance measure, such as prediction error, the only optimal scheme is to test all these subset and choose the one that gives the best performance. It is easy to see that such an extensive scheme is only viable when the number of variables is rather low. Identifying 2 V models when we have more than a few variables requires too much computation. A number of techniques have been devised to overcome this combinatorial limit. Some of them use an iterative, locally optimal technique to construct an estimate of the relevant subset in a number of steps. We will refer to them as stepwise selection methods, not to be con fused with stepwise regression, a subset of these methods that we will address below. In forward selection, we start with an empty set of variables. At each step, we select a candidate variable using a selection criteria, check whether this variable should be added to the set, and iterate until a given stop condition is reached. On the contrary, backward elimination methods start with the full set of all input variables. At each step, the least significant variable is selected according to a selection criteria. 
If this variable is irrelevant, it is removed and the process is iterated until a stop condition is reached. It is easy to devise examples where the inclusion of a variable causes a previously included variable to become irrelevant. It thus seems appropriate to consider running a backward elimination each time a new variable is added by forward selection. This combination of both ap proaches is known as stepwise regression in the linear regression con",
"neighbors": [
783,
862,
1155,
1626,
1812
],
"mask": "Validation"
},
{
"node_id": 1408,
"label": 1,
"text": "Title: Use of Architecture-Altering Operations to Dynamically Adapt a Three-Way Analog Source Identification Circuit to Accommodate\nAbstract: We used genetic programming to evolve b o t h the topology and the sizing (numerical values) for each component of an analog electrical circuit that can correctly classify an incoming analog electrical signal into three categories. Then, the r e p e r t o i r e o f s o u r c e s w a s dynamically changed by adding a new source during the run. The p a p e r d e s c r i b e s h o w t h e ",
"neighbors": [
1249,
1921,
1931
],
"mask": "Validation"
},
{
"node_id": 1409,
"label": 1,
"text": "Title: Evolution of Mapmaking: Learning, planning, and memory using Genetic Programming \nAbstract: An essential component of an intelligent agent is the ability to observe, encode, and use information about its environment. Traditional approaches to Genetic Programming have focused on evolving functional or reactive programs with only a minimal use of state. This paper presents an approach for investigating the evolution of learning, planning, and memory using Genetic Programming. The approach uses a multi-phasic fitness environment that enforces the use of memory and allows fairly straightforward comprehension of the evolved representations . An illustrative problem of 'gold' collection is used to demonstrate the usefulness of the approach. The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans. ",
"neighbors": [
129,
789,
958,
1797,
2220,
2600,
2703
],
"mask": "Test"
},
{
"node_id": 1410,
"label": 1,
"text": "Title: Genetic Algorithm Programming Environments \nAbstract: Interest in Genetic algorithms is expanding rapidly. This paper reviews software environments for programming Genetic Algorithms ( GA s). As background, we initially preview genetic algorithms' models and their programming. Next we classify GA software environments into three main categories: Application-oriented, Algorithm-oriented and ToolKits. For each category of GA programming environment we review their common features and present a case study of a leading environment. ",
"neighbors": [
163,
1153
],
"mask": "Validation"
},
{
"node_id": 1411,
"label": 2,
"text": "Title: Connection Pruning with Static and Adaptive Pruning Schedules \nAbstract: Neural network pruning methods on the level of individual network parameters (e.g. connection weights) can improve generalization, as is shown in this empirical study. However, an open problem in the pruning methods known today (e.g. OBD, OBS, autoprune, epsiprune) is the selection of the number of parameters to be removed in each pruning step (pruning strength). This work presents a pruning method lprune that automatically adapts the pruning strength to the evolution of weights and loss of generalization during training. The method requires no algorithm parameter adjustment by the user. Results of statistical significance tests comparing autoprune, lprune, and static networks with early stopping are given, based on extensive experimentation with 14 different problems. The results indicate that training with pruning is often significantly better and rarely significantly worse than training with early stopping without pruning. Furthermore, lprune is often superior to autoprune (which is superior to OBD) on diagnosis tasks unless severe pruning early in the training process is required.",
"neighbors": [
881,
1203,
2203
],
"mask": "Train"
},
{
"node_id": 1412,
"label": 0,
"text": "Title: EXPLORING A FRAMEWORK FOR INSTANCE BASED LEARNING AND NAIVE BAYESIAN CLASSIFIERS \nAbstract: The relative performance of different methods for classifier learning varies across domains. Some recent Instance Based Learning (IBL) methods, such as IB1-MVDM* 10 , use similarity measures based on conditional class probabilities. These probabilities are a key component of Naive Bayes methods. Given this commonality of approach, it is of interest to consider how the differences between the two methods are linked to their relative performance in different domains. Here we interpret Naive Bayes in an IBL like framework, identifying differences between Naive Bayes and IB1-MVDM* in this framework. Experiments on variants of IB1-MVDM* that lie between it and Naive Bayes in the framework are conducted on sixteen domains. The results strongly suggest that the relative performance of Naive Bayes and IB1-MVDM* is linked to the extent to which each class can be satisfactorily represented by a single instance in the IBL framework. However, this is not the only factor that appears significant. ",
"neighbors": [
1111,
1328,
1431
],
"mask": "Train"
},
{
"node_id": 1413,
"label": 0,
"text": "Title: References Automatic student modeling and bug library construction using theory refinement. Ph.D. ml/ Symbolic revision\nAbstract: ASSERT demonstrates how theory refinement techniques developed in machine learning can be used to ef fec-tively build student models for intelligent tutoring systems. This application is unique since it inverts the normal goal of theory refinement from correcting errors in a knowledge base to introducing them. A comprehensive experiment involving a lar ge number of students interacting with an automated tutor for teaching concepts in C ++ programming was used to evaluate the approach. This experiment demonstrated the ability of theory refinement to generate more accurate student models than raw induction, as well as the ability of the resulting models to support individualized feedback that actually improves students subsequent performance. Carr, B. and Goldstein, I. (1977). Overlays: a theory of modeling for computer aided instruction. T echnical Report A. I. Memo 406, Cambridge, MA: MIT. Sandberg, J. and Barnard, Y . (1993). Education and technology: What do we know? And where is AI? Artificial Intelligence Communications, 6(1):47-58. ",
"neighbors": [
136,
1102
],
"mask": "Test"
},
{
"node_id": 1414,
"label": 3,
"text": "Title: Tractable Inference for Complex Stochastic Processes \nAbstract: The monitoring and control of any dynamic system depends crucially on the ability to reason about its current status and its future trajectory. In the case of a stochastic system, these tasks typically involve the use of a belief statea probability distribution over the state of the process at a given point in time. Unfortunately, the state spaces of complex processes are very large, making an explicit representation of a belief state intractable. Even in dynamic Bayesian networks (DBNs), where the process itself can be represented compactly, the representation of the belief state is intractable. We investigate the idea of maintaining a compact approximation to the true belief state, and analyze the conditions under which the errors due to the approximations taken over the lifetime of the process do not accumulate to make our answers completely irrelevant. We show that the error in a belief state contracts exponentially as the process evolves. Thus, even with multiple approximations, the error in our process remains bounded indefinitely. We show how the additional structure of a DBN can be used to design our approximation scheme, improving its performance significantly. We demonstrate the applicability of our ideas in the context of a monitoring task, showing that orders of magnitude faster inference can be achieved with only a small degradation in accuracy.",
"neighbors": [
788,
945,
1268,
1287,
1288,
1393
],
"mask": "Train"
},
{
"node_id": 1415,
"label": 3,
"text": "Title: A New Approach for Induction: From a Non-Axiomatic Logical Point of View \nAbstract: Non-Axiomatic Reasoning System (NARS) is designed to be a general-purpose intelligent reasoning system, which is adaptive and works under insufficient knowledge and resources. This paper focuses on the components of NARS that contribute to the system's induction capacity, and shows how the traditional problems in induction are addressed by the system. The NARS approach of induction uses an term-oriented formal language with an experience-grounded semantics that consistently interprets various types of uncertainty. An induction rule generates conclusions from common instance of terms, and a revision rule combines evidence from different sources. In NARS, induction and other types of inference, such as deduction and abduction, are based on the same semantic foundation, and they cooperate in inference activities of the system. The system's control mechanism makes knowledge-driven, context-dependent inference possible.",
"neighbors": [
1504,
1506,
1525
],
"mask": "Validation"
},
{
"node_id": 1416,
"label": 0,
"text": "Title: Synergy and Commonality in Case-Based and Constraint-Based Reasoning \nAbstract: Although Case-Based Reasoning (CBR) is a natural formulation for many problems, our previous work on CBR as applied to design made it apparent that there were elements of the CBR paradigm that prevented it from being more widely applied. At the same time, we were evaluating Constraint Satisfaction techniques for design, and found a commonality in motivation between repair-based constraint satisfaction problems (CSP) and case adaptation. This led us to combine the two methodologies in order to gain the advantages of CSP for case-based reasoning, allowing CBR to be more widely and flexibly applied. In combining the two methodologies, we found some unexpected synergy and commonality between the approaches. This paper describes the synergy and commonality that emerged as we combined case-based and constraint-based reasoning, and gives a brief overview of our continuing and future work on exploiting the emergent synergy when combining these reasoning modes. ",
"neighbors": [
922,
923
],
"mask": "Test"
},
{
"node_id": 1417,
"label": 2,
"text": "Title: Ridge Regression Learning Algorithm in Dual Variables \nAbstract: In this paper we study a dual version of the Ridge Regression procedure. It allows us to perform non-linear regression by constructing a linear regression function in a high dimensional feature space. The feature space representation can result in a large increase in the number of parameters used by the algorithm. In order to combat this \"curse of dimensionality\", the algorithm allows the use of kernel functions, as used in Support Vector methods. We also discuss a powerful family of kernel functions which is constructed using the ANOVA decomposition method from the kernel corresponding to splines with an infinite number of nodes. This paper introduces a regression estimation algorithm which is a combination of these two elements: the dual version of Ridge Regression is applied to the ANOVA enhancement of the infinite-node splines. Experimental results are then presented (based on the Boston Housing data set) which indicate the performance of this algorithm relative to other algorithms.",
"neighbors": [
1421
],
"mask": "Train"
},
{
"node_id": 1418,
"label": 2,
"text": "Title: BCM Network develops Orientation Selectivity and Ocular Dominance in Natural Scene Environment. \nAbstract: A two-eye visual environment is used in training a network of BCM neurons. We study the effect of misalignment between the synaptic density functions from the two eyes, on the formation of orientation selectivity and ocular dominance in a lateral inhibition network. The visual environment we use is composed of natural images. We show that for the BCM rule a natural image environment with binocular cortical misalignment is sufficient for producing networks with orientation selective cells and ocular dominance columns. This work is an extension of our previous single cell misalignment model (Shouval et al., 1996).",
"neighbors": [
726,
989,
2499
],
"mask": "Train"
},
{
"node_id": 1419,
"label": 3,
"text": "Title: BAYESIAN ESTIMATION OF THE VON MISES CONCENTRATION PARAMETER \nAbstract: The von Mises distribution is a maximum entropy distribution. It corresponds to the distribution of an angle of a compass needle in a uniform magnetic field of direction, , with concentration parameter, . The concentration parameter, , is the ratio of the field strength to the temperature of thermal fluctuations. Previously, we obtained a Bayesian estimator for the von Mises distribution parameters using the information-theoretic Minimum Message Length (MML) principle. Here, we examine a variety of Bayesian estimation techniques by examining the posterior distribution in both polar and Cartesian co-ordinates. We compare the MML estimator with these fellow Bayesian techniques, and a range of Classical estimators. We find that the Bayesian estimators outperform the Classical estimators. ",
"neighbors": [
525,
1550
],
"mask": "Validation"
},
{
"node_id": 1420,
"label": 0,
"text": "Title: Design, Analogy, and Creativity \nAbstract: : ",
"neighbors": [
1047,
1138,
1354
],
"mask": "Validation"
},
{
"node_id": 1421,
"label": 2,
"text": "Title: Support Vector Machines, Reproducing Kernel Hilbert Spaces and the Randomized GACV 1 \nAbstract: 1 Prepared for the NIPS 97 Workshop on Support Vector Machines. Research sponsored in part by NSF under Grant DMS-9704758 and in part by NEI under Grant R01 EY09946. This is a second revised and corrected version of a report of the same number and title dated November 29, 1997 ",
"neighbors": [
821,
1417
],
"mask": "Train"
},
{
"node_id": 1422,
"label": 2,
"text": "Title: Generating Accurate and Diverse Members of a Neural-Network Ensemble \nAbstract: Neural-network ensembles have been shown to be very accurate classification techniques. Previous work has shown that an effective ensemble should consist of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well. Most existing techniques, however, only indirectly address the problem of creating such a set of networks. In this paper we present a technique called Addemup that uses genetic algorithms to directly search for an accurate and diverse set of trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are as accurate as possible while disagreeing with each other as much as possible. Experiments on three DNA problems show that Addemup is able to generate a set of trained networks that is more accurate than several existing approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble.",
"neighbors": [
550,
826,
828,
1223,
1237,
1273,
1657
],
"mask": "Train"
},
{
"node_id": 1423,
"label": 2,
"text": "Title: Comparison of Regression Methods, Symbolic Induction Methods and Neural Networks in Morbidity Diagnosis and Mortality\nAbstract: Classifier induction algorithms differ on what inductive hypotheses they can represent, and on how they search their space of hypotheses. No classifier is better than another for all problems: they have selective superiority. This paper empirically compares six classifier induction algorithms on the diagnosis of equine colic and the prediction of its mortality. The classification is based on simultaneously analyzing sixteen features measured from a patient. The relative merits of the algorithms (linear regression, decision trees, nearest neighbor classifiers, the Model Class Selection system, logistic regression (with and without feature selection), and neural nets) are qualitatively discussed, and the generalization accuracies quantitatively analyzed. ",
"neighbors": [
1328,
2583
],
"mask": "Train"
},
{
"node_id": 1424,
"label": 1,
"text": "Title: Multi-parent's niche: n-ary crossovers on NK-landscapes \nAbstract: Using the multi-parent diagonal and scanning crossover in GAs reproduction operators obtain an adjustable arity. Hereby sexuality becomes a graded feature instead of a Boolean one. Our main objective is to relate the performance of GAs to the extent of sexuality used for reproduction on less arbitrary functions then those reported in the current literature. We investigate GA behaviour on Kauffman's NK-landscapes that allow for systematic characterization and user control of ruggedness of the fitness landscape. We test GAs with a varying extent of sexuality, ranging from asexual to 'very sexual'. Our tests were performed on two types of NK-landscapes: landscapes with random and landscapes with nearest neighbour epistasis. For both landscape types we selected landscapes from a range of ruggednesses. The results confirm the superiority of (very) sexual recombination on mildly epistatic problems.",
"neighbors": [
1218,
1530,
1799
],
"mask": "Train"
},
{
"node_id": 1425,
"label": 3,
"text": "Title: Unsupervised Learning Using MML \nAbstract: This paper discusses the unsupervised learning problem. An important part of the unsupervised learning problem is determining the number of constituent groups (components or classes) which best describes some data. We apply the Minimum Message Length (MML) criterion to the unsupervised learning problem, modifying an earlier such MML application. We give an empirical comparison of criteria prominent in the literature for estimating the number of components in a data set. We conclude that the Minimum Message Length criterion performs better than the alternatives on the data considered here for unsupervised learning tasks.",
"neighbors": [
525,
684,
1550
],
"mask": "Test"
},
{
"node_id": 1426,
"label": 0,
"text": "Title: REPRESENTING PHYSICAL AND DESIGN KNOWLEDGE IN INNOVATIVE DESIGN \nAbstract: This paper discusses the unsupervised learning problem. An important part of the unsupervised learning problem is determining the number of constituent groups (components or classes) which best describes some data. We apply the Minimum Message Length (MML) criterion to the unsupervised learning problem, modifying an earlier such MML application. We give an empirical comparison of criteria prominent in the literature for estimating the number of components in a data set. We conclude that the Minimum Message Length criterion performs better than the alternatives on the data considered here for unsupervised learning tasks.",
"neighbors": [
1354
],
"mask": "Train"
},
{
"node_id": 1427,
"label": 3,
"text": "Title: SINGLE FACTOR ANALYSIS BY MML ESTIMATION \nAbstract: The Minimum Message Length (MML) technique is applied to the problem of estimating the parameters of a multivariate Gaussian model in which the correlation structure is modelled by a single common factor. Implicit estimator equations are derived and compared with those obtained from a Maximum Likelihood (ML) analysis. Unlike ML, the MML estimators remain consistent when used to estimate both the factor loadings and factor scores. Tests on simulated data show the MML estimates to be on av erage more accurate than the ML estimates when the former exist. If the data show little evidence for a factor, the MML estimate collapses. It is shown that the condition for the existence of an MML estimate is essentially that the log likelihood ratio in favour of the factor model exceed the value expected under the null (no-factor) hypotheses. ",
"neighbors": [
525,
1550
],
"mask": "Train"
},
{
"node_id": 1428,
"label": 5,
"text": "Title: Inverse entailment and Progol \nAbstract: This paper firstly provides a re-appraisal of the development of techniques for inverting deduction, secondly introduces Mode-Directed Inverse Entailment (MDIE) as a generalisation and enhancement of previous approaches and thirdly describes an implementation of MDIE in the Progol system. Progol is implemented in C and available by anonymous ftp. The re-assessment of previous techniques in terms of inverse entailment leads to new results for learning from positive data and inverting implication between pairs of clauses. ",
"neighbors": [
1312,
1601,
1622,
2079,
2229,
2329,
2424,
2493,
2539,
2617
],
"mask": "Train"
},
{
"node_id": 1429,
"label": 5,
"text": "Title: Learning First-Order Definitions of Functions \nAbstract: First-order learning involves finding a clause-form definition of a relation from examples of the relation and relevant background information. In this paper, a particular first-order learning system is modified to customize it for finding definitions of functional relations. This restriction leads to faster learning times and, in some cases, to definitions that have higher predictive accuracy. Other first-order learning systems might benefit from similar specialization.",
"neighbors": [
675,
1208,
1601
],
"mask": "Test"
},
{
"node_id": 1430,
"label": 2,
"text": "Title: Adaptive Boosting of Neural Networks for Character Recognition \nAbstract: Technical Report #1072, D epartement d'Informatique et Recherche Op erationnelle, Universit e de Montr eal Abstract Boosting is a general method for improving the performance of any learning algorithm that consistently generates classifiers which need to perform only slightly better than random guessing. A recently proposed and very promising boosting algorithm is AdaBoost [5]. It has been applied with great success to several benchmark machine learning problems using rather simple learning algorithms [4], in particular decision trees [1, 2, 6]. In this paper we use AdaBoost to improve the performances of neural networks applied to character recognition tasks. We compare training methods based on sampling the training set and weighting the cost function. Our system achieves about 1.4% error on a data base of online handwritten digits from more than 200 writers. Adaptive boosting of a multi-layer network achieved 2% error on the UCI Letters offline characters data set.",
"neighbors": [
569,
1356,
1484
],
"mask": "Validation"
},
{
"node_id": 1431,
"label": 2,
"text": "Title: Learning to Represent Codons: A Challenge Problem for Constructive Induction \nAbstract: The ability of an inductive learning system to find a good solution to a given problem is dependent upon the representation used for the features of the problem. Systems that perform constructive induction are able to change their representation by constructing new features. We describe an important, real-world problem finding genes in DNA that we believe offers an interesting challenge to constructive-induction researchers. We report experiments that demonstrate that: (1) two different input representations for this task result in significantly different generalization performance for both neural networks and decision trees; and (2) both neural and symbolic methods for constructive induction fail to bridge the gap between these two representations. We believe that this real-world domain provides an interesting challenge problem for constructive induction because the relationship between the two representations is well known, and because the representational shift involved in construct ing the better representation is not imposing.",
"neighbors": [
698,
861,
1181,
1412
],
"mask": "Train"
},
{
"node_id": 1432,
"label": 1,
"text": "Title: EVOLUTIONARY ALGORITHMS IN ROBOTICS \nAbstract: ",
"neighbors": [
910,
964,
965,
1573
],
"mask": "Train"
},
{
"node_id": 1433,
"label": 6,
"text": "Title: Learning Boxes in High Dimension \nAbstract: We present exact learning algorithms that learn several classes of (discrete) boxes in f0; : : :; ` 1g n . In particular we learn: (1) The class of unions of O(log n) boxes in time poly(n; log `) (solving an open problem of [16, 12]; in [3] this class is shown to be learnable in time poly(n; `)). (2) The class of unions of disjoint boxes in time poly(n; t; log `), where t is the number of boxes. (Previously this was known only in the case where all boxes are disjoint in one of the dimensions; in [3] this class is shown to be learnable in time poly(n; t; `)). In particular our algorithm learns the class of decision trees over n variables, that take values in f0; : : :; ` 1g, with comparison nodes in time poly(n; t; log `), where t is the number of leaves (this was an open problem in [9] which was shown in [4] to be learnable in time poly(n; t; `)). (3) The class of unions of O(1)-degenerate boxes (that is, boxes that depend only on O(1) variables) in time poly(n; t; log `) (generalizing the learnability of O(1)-DNF and of boxes in O(1) dimensions). The algorithm for this class uses only equivalence queries and it can also be used to learn the class of unions of O(1) boxes (from equivalence queries only). fl A preliminary version of this paper appeared in the proceedings of the EuroCOLT '97 conference, published in volume 1208 of Lecture Notes in Artificial Intelligence, pages 3-15. Springer-Verlag, 1997. ",
"neighbors": [
798,
1026,
1095
],
"mask": "Validation"
},
{
"node_id": 1434,
"label": 5,
"text": "Title: Advantages of Decision Lists and Implicit Negatives in Inductive Logic Programming \nAbstract: This paper demonstrates the capabilities of Foidl, an inductive logic programming (ILP) system whose distinguishing characteristics are the ability to produce first-order decision lists, the use of an output completeness assumption as a substitute for negative examples, and the use of intensional background knowledge. The development of Foidl was originally motivated by the problem of learning to generate the past tense of English verbs; however, this paper demonstrates its superior performance on two different sets of benchmark ILP problems. Tests on the finite element mesh design problem show that Foidl's decision lists enable it to produce generally more accurate results than a range of methods previously applied to this problem. Tests with a selection of list-processing problems from Bratko's introductory Prolog text demonstrate that the combination of implicit negatives and intensionality allow Foidl to learn correct programs from far fewer examples than Foil.",
"neighbors": [
1208
],
"mask": "Test"
},
{
"node_id": 1435,
"label": 2,
"text": "Title: Advantages of Decision Lists and Implicit Negatives in Inductive Logic Programming \nAbstract: Further Results on Controllability Abstract This paper studies controllability properties of recurrent neural networks. The new contributions are: (1) an extension of a previous result to a slightly different model, (2) a formulation and proof of a necessary and sufficient condition, and (3) an analysis of a low-dimensional case for which the of Recurrent Neural Networks fl",
"neighbors": [
1028,
1042,
1043
],
"mask": "Test"
},
{
"node_id": 1436,
"label": 3,
"text": "Title: Advantages of Decision Lists and Implicit Negatives in Inductive Logic Programming \nAbstract: Figure 9: Results for various optimizations. Figure 10: Results with and without markov boundary scoring. ",
"neighbors": [
895
],
"mask": "Train"
},
{
"node_id": 1437,
"label": 3,
"text": "Title: Coupled hidden Markov models for complex action recognition \nAbstract: c flMIT Media Lab Perceptual Computing / Learning and Common Sense Technical Report 407 20nov96 Abstract We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm, and a clear Bayesian semantics. However, the Markovian framework makes strong restrictive assumptions about the system generating the signalthat it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions. ",
"neighbors": [
787,
891,
1287,
1393,
1593
],
"mask": "Validation"
},
{
"node_id": 1438,
"label": 4,
"text": "Title: Learning from undiscounted delayed rewards \nAbstract: The general framework of reinforcement learning has been proposed by several researchers for both the solution of optimization problems and the realization of adaptive control schemes. To allow for an efficient application of reinforcement learning in either of these areas, it is necessary to solve both the structural and the temporal credit assignment problem. In this paper, we concentrate on the latter which is usually tackled through the use of learning algorithms that employ discounted rewards. We argue that for realistic problems this kind of solution is not satisfactory, since it does not address the effect of noise originating from different experiences and does not allow for an easy explanation of the parameters involved in the learning process. As a possible solution, we propose to keep the delayed reward undiscounted, but to discount the actual adaptation rate. Empirical results show that dependent on the kind of discount used amore stable convergence and even an increase in performance can be obtained. ",
"neighbors": [
294,
565,
807
],
"mask": "Test"
},
{
"node_id": 1439,
"label": 1,
"text": "Title: Minimum-Perimeter Domain Assignment \nAbstract: For certain classes of problems defined over two-dimensional domains with grid structure, optimization problems involving the assignment of grid cells to processors present a nonlinear network model for the problem of partitioning tasks among processors so as to minimize interprocessor communication. Minimizing interprocessor communication in this context is shown to be equivalent to tiling the domain so as to minimize total tile perimeter, where each tile corresponds to the collection of tasks assigned to some processor. A tight lower bound on the perimeter of a tile as a function of its area is developed. We then show how to generate minimum-perimeter tiles. By using assignments corresponding to near-rectangular minimum-perimeter tiles, closed form solutions are developed for certain classes of domains. We conclude with computational results with parallel high-level genetic algorithms that have produced good (and sometimes provably optimal) solutions for very large perimeter minimization problems.",
"neighbors": [
53,
803
],
"mask": "Validation"
},
{
"node_id": 1440,
"label": 4,
"text": "Title: Value Function Approximations and Job-Shop Scheduling \nAbstract: We report a successful application of TD() with value function approximation to the task of job-shop scheduling. Our scheduling problems are based on the problem of scheduling payload processing steps for the NASA space shuttle program. The value function is approximated by a 2-layer feedforward network of sigmoid units. A one-step lookahead greedy algorithm using the learned evaluation function outperforms the best existing algorithm for this task, which is an iterative repair method incorporating simulated annealing. To understand the reasons for this performance improvement, this paper introduces several measurements of the learning process and discusses several hypotheses suggested by these measurements. We conclude that the use of value function approximation is not a source of difficulty for our method, and in fact, it may explain the success of the method independent of the use of value iteration. Additional experiments are required to discriminate among our hypotheses. ",
"neighbors": [
82,
565,
1378
],
"mask": "Validation"
},
{
"node_id": 1441,
"label": 1,
"text": "Title: Nonlinearity, Hyperplane Ranking and the Simple Genetic Algorithm \nAbstract: Several metrics are used in empirical studies to explore the mechanisms of convergence of genetic algorithms. The metric is designed to measure the consistency of an arbitrary ranking of hyperplanes in a partition with respect to a target string. Walsh coefficients can be calculated for small functions in order to characterize sources of linear and nonlinear interactions. A simple deception measure is also developed to look closely at the effects of increasing nonlinearity of functions. Correlations between the metric and deception measure are discussed and relationships between and convergence behavior of a simple genetic algorithm are studied over large sets of functions with varying degrees of nonlinearity.",
"neighbors": [
941,
1638,
1717
],
"mask": "Test"
},
{
"node_id": 1442,
"label": 5,
"text": "Title: Learning Goal-Decomposition Rules using Exercises \nAbstract: Exercises are problems ordered in increasing order of difficulty. Teaching problem-solving through exercises is a widely used pedagogic technique. A computational reason for this is that the knowledge gained by solving simple problems is useful in efficiently solving more difficult problems. We adopt this approach of learning from exercises to acquire search-control knowledge in the form of goal-decomposition rules (d-rules). D-rules are first order, and are learned using a new \"generalize-and-test\" algorithm which is based on inductive logic programming techniques. We demonstrate the feasibility of the approach by applying it in two planning do mains.",
"neighbors": [
344,
414,
675,
1135,
1444,
1445
],
"mask": "Validation"
},
{
"node_id": 1443,
"label": 4,
"text": "Title: Residual Q-Learning Applied to Visual Attention \nAbstract: Foveal vision features imagers with graded acuity coupled with context sensitive sensor gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision operates more efficiently than uniform acuity vision because resolution is treated as a dynamically allocatable resource, but requires a more refined visual attention mechanism. We demonstrate that reinforcement learning (RL) significantly improves the performance of foveal visual attention, and of the overall vision system, for the task of model based target recognition. A simulated foveal vision system is shown to classify targets with fewer fixations by learning strategies for the acquisition of visual information relevant to the task, and learning how to generalize these strategies in ambiguous and unexpected scenario conditions.",
"neighbors": [
1540
],
"mask": "Test"
},
{
"node_id": 1444,
"label": 6,
"text": "Title: Learning Horn Definitions with Equivalence and Membership Queries \nAbstract: A Horn definition is a set of Horn clauses with the same head literal. In this paper, we consider learning non-recursive, function-free first-order Horn definitions. We show that this class is exactly learnable from equivalence and membership queries. It follows then that this class is PAC learnable using examples and membership queries. Our results have been shown to be applicable to learning efficient goal-decomposition rules in planning domains.",
"neighbors": [
1135,
1442
],
"mask": "Train"
},
{
"node_id": 1445,
"label": 5,
"text": "Title: Theory-guided Empirical Speedup Learning of Goal Decomposition Rules \nAbstract: Speedup learning is the study of improving the problem-solving performance with experience and from outside guidance. We describe here a system that successfully combines the best features of Explanation-based learning and empirical learning to learn goal decomposition rules from examples of successful problem solving and membership queries. We demonstrate that our system can efficiently learn effective decomposition rules in three different domains. Our results suggest that theory-guided empirical learning can overcome the problems of purely explanation-based learning and purely empirical learning, and be an effective speedup learning method.",
"neighbors": [
344,
414,
675,
1442
],
"mask": "Test"
},
{
"node_id": 1446,
"label": 2,
"text": "Title: STABILIZATION WITH SATURATED ACTUATORS, A WORKED EXAMPLE:F-8 LONGITUDINAL FLIGHT CONTROL \nAbstract: The authors and coworkers recently proved general theorems on the global stabilization of linear systems subject to control saturation. This paper develops in detail an explicit design for the linearized equations of longitudinal flight control for an F-8 aircraft, and tests the obtained controller on the original nonlinear model. This paper represents the first detailed derivation of a controller using the techniques in question, and the results are very encouraging. ",
"neighbors": [
948,
1282
],
"mask": "Train"
},
{
"node_id": 1447,
"label": 4,
"text": "Title: Model of the Environment to Avoid Local Learning \nAbstract: Pier Luca Lanzi Technical Report N. 97.46 December 20 th , 1997 ",
"neighbors": [
566,
1515,
1581
],
"mask": "Test"
},
{
"node_id": 1448,
"label": 0,
"text": "Title: Automatic Storage and Indexing of Plan Derivations based on Replay Failures \nAbstract: When a case-based planner is retrieving a previous case in preparation for solving a new similar problem, it is often not aware of all of the implicit features of the new problem situation which determine if a particular case may be successfully applied. This means that some cases may fail to improve the planner's performance. By detecting and explaining these case failures as they occur, retrieval may be improved incrementally. In this paper we provide a definition of case failure for the case-based planner, dersnlp (derivation replay in snlp), which solves new problems by replaying its previous plan derivations. We provide explanation-based learning (EBL) techniques for detecting and constructing the reasons for the case failure. We also describe how the case library is organized so as to incorporate this failure information as it is produced. Finally we present an empirical study which demonstrates the effectiveness of this approach in improving the performance of dersnlp.",
"neighbors": [
1621
],
"mask": "Train"
},
{
"node_id": 1449,
"label": 6,
"text": "Title: Adaptive Mixtures of Probabilistic Transducers \nAbstract: We describe and analyze a mixture model for supervised learning of probabilistic transducers. We devise an on-line learning algorithm that efficiently infers the structure and estimates the parameters of each probabilistic transducer in the mixture. Theoretical analysis and comparative simulations indicate that the learning algorithm tracks the best transducer from an arbitrarily large (possibly infinite) pool of models. We also present an application of the model for inducing a noun phrase recognizer.",
"neighbors": [
453,
1025
],
"mask": "Train"
},
{
"node_id": 1450,
"label": 2,
"text": "Title: Plasticity-Mediated Competitive Learning \nAbstract: Differentiation between the nodes of a competitive learning network is conventionally achieved through competition on the basis of neural activity. Simple inhibitory mechanisms are limited to sparse representations, while decorrelation and factorization schemes that support distributed representations are computation-ally unattractive. By letting neural plasticity mediate the competitive interaction instead, we obtain diffuse, nonadaptive alternatives for fully distributed representations. We use this technique to simplify and improve our binary information gain optimization algorithm for feature extraction (Schraudolph and Sejnowski, 1993); the same approach could be used to improve other learning algorithms.",
"neighbors": [
731,
808,
1710
],
"mask": "Train"
},
{
"node_id": 1451,
"label": 2,
"text": "Title: On the Computation of the Induced L 2 Norm of Single Input Linear Systems with Saturation \nAbstract: Differentiation between the nodes of a competitive learning network is conventionally achieved through competition on the basis of neural activity. Simple inhibitory mechanisms are limited to sparse representations, while decorrelation and factorization schemes that support distributed representations are computation-ally unattractive. By letting neural plasticity mediate the competitive interaction instead, we obtain diffuse, nonadaptive alternatives for fully distributed representations. We use this technique to simplify and improve our binary information gain optimization algorithm for feature extraction (Schraudolph and Sejnowski, 1993); the same approach could be used to improve other learning algorithms.",
"neighbors": [
1272,
1281,
1346,
1604
],
"mask": "Train"
},
{
"node_id": 1452,
"label": 2,
"text": "Title: Bayesian Training of Backpropagation Networks by the Hybrid Monte Carlo Method \nAbstract: It is shown that Bayesian training of backpropagation neural networks can feasibly be performed by the \"Hybrid Monte Carlo\" method. This approach allows the true predictive distribution for a test case given a set of training cases to be approximated arbitrarily closely, in contrast to previous approaches which approximate the posterior weight distribution by a Gaussian. In this work, the Hybrid Monte Carlo method is implemented in conjunction with simulated annealing, in order to speed relaxation to a good region of parameter space. The method has been applied to a test problem, demonstrating that it can produce good predictions, as well as an indication of the uncertainty of these predictions. Appropriate weight scaling factors are found automatically. By applying known techniques for calculation of \"free energy\" differences, it should also be possible to compare the merits of different network architectures. The work described here should also be applicable to a wide variety of statistical models other than neural networks. ",
"neighbors": [
157,
560,
1289,
1375
],
"mask": "Train"
},
{
"node_id": 1453,
"label": 0,
"text": "Title: Automatic Indexing, Retrieval and Reuse of Topologies in Architectual Layouts \nAbstract: Former layouts contain much of the know-how of architects. A generic and automatic way to formalize this know-how in order to use it by a computer would save a lot of effort and money. However, there seems to be no such way. The only access to the know-how are the layouts themselves. Developing a generic software tool to reuse former layouts you cannot consider every part of the architectual domain or things like personal style. Tools used today only consider small parts of the architectual domain. Any personal style is ignored. Isn't it possible to build a basic tool which is adjusted by the content of the former layouts, but may be extended incremently by modeling as much of the domain as desirable? This paper will describe a reuse tool to perform this task focusing on topological and geometrical binary relations.",
"neighbors": [
539,
1152,
1210
],
"mask": "Train"
},
{
"node_id": 1454,
"label": 2,
"text": "Title: A Neural Network Model for Prognostic Prediction \nAbstract: An important and difficult prediction task in many domains, particularly medical decision making, is that of prognosis. Prognosis presents a unique set of problems to a learning system when some of the outputs are unknown. This paper presents a new approach to prognostic prediction, using ideas from nonparametric statistics to fully utilize all of the available information in a neural architecture. The technique is applied to breast cancer prognosis, resulting in flexible, accurate models that may play a role in prevent ing unnecessary surgeries.",
"neighbors": [
524,
1169
],
"mask": "Train"
},
{
"node_id": 1455,
"label": 1,
"text": "Title: Self-Adaptation in Genetic Algorithms of external parameters of a GA is seen as a first\nAbstract: In this paper a new approach is presented, which transfers a basic idea from Evolution Strategies (ESs) to GAs. Mutation rates are changed into endogeneous items which are adapting during the search process. First experimental results are presented, which indicate that environment-dependent self-adaptation of appropriate settings for the mutation rate is possible even for GAs. ",
"neighbors": [
793,
1069,
1153,
1685,
1694
],
"mask": "Train"
},
{
"node_id": 1456,
"label": 6,
"text": "Title: An Interactive Model of Teaching \nAbstract: Previous teaching models in the learning theory community have been batch models. That is, in these models the teacher has generated a single set of helpful examples to present to the learner. In this paper we present an interactive model in which the learner has the ability to ask queries as in the query learning model of Angluin [1]. We show that this model is at least as powerful as previous teaching models. We also show that anything learnable with queries, even by a randomized learner, is teachable in our model. In all previous teaching models, all classes shown to be teachable are known to be efficiently learnable. An important concept class that is not known to be learnable is DNF formulas. We demonstrate the power of our approach by providing a deterministic teacher and learner for the class of DNF formulas. The learner makes only equivalence queries and all hypotheses are also DNF formulas. ",
"neighbors": [
308,
1095,
1343,
1469
],
"mask": "Train"
},
{
"node_id": 1457,
"label": 2,
"text": "Title: Actively Searching for an Effective Neural-Network Ensemble \nAbstract: A neural-network ensemble is a very successful technique where the outputs of a set of separately trained neural network are combined to form one unified prediction. An effective ensemble should consist of a set of networks that are not only highly correct, but ones that make their errors on different parts of the input space as well; however, most existing techniques only indirectly address the problem of creating such a set. We present an algorithm called Addemup that uses genetic algorithms to explicitly search for a highly diverse set of accurate trained networks. Addemup works by first creating an initial population, then uses genetic operators to continually create new networks, keeping the set of networks that are highly accurate while disagreeing with each other as much as possible. Experiments on four real-world domains show that Addemup is able to generate a set of trained networks that is more accurate than several existing ensemble approaches. Experiments also show that Addemup is able to effectively incorporate prior knowledge, if available, to improve the quality of its ensemble. ",
"neighbors": [
163,
569,
826,
828,
1237,
1462
],
"mask": "Train"
},
{
"node_id": 1458,
"label": 3,
"text": "Title: Sequential Thresholds: Context Sensitive Default Extensions \nAbstract: Default logic encounters some conceptual difficulties in representing common sense reasoning tasks. We argue that we should not try to formulate modular default rules that are presumed to work in all or most circumstances. We need to take into account the importance of the context which is continuously evolving during the reasoning process. Sequential thresholding is a quantitative counterpart of default logic which makes explicit the role context plays in the construction of a non-monotonic extension. We present a semantic characterization of generic non-monotonic reasoning, as well as the instan-tiations pertaining to default logic and sequential thresholding. This provides a link between the two mechanisms as well as a way to integrate the two that can be beneficial to both.",
"neighbors": [
838,
1714
],
"mask": "Train"
},
{
"node_id": 1459,
"label": 4,
"text": "Title: Generalized Markov Decision Processes: Dynamic-programming and Reinforcement-learning Algorithms \nAbstract: The problem of maximizing the expected total discounted reward in a completely observable Markovian environment, i.e., a Markov decision process (mdp), models a particular class of sequential decision problems. Algorithms have been developed for making optimal decisions in mdps given either an mdp specification or the opportunity to interact with the mdp over time. Recently, other sequential decision-making problems have been studied prompting the development of new algorithms and analyses. We describe a new generalized model that subsumes mdps as well as many of the recent variations. We prove some basic results concerning this model and develop generalizations of value iteration, policy iteration, model-based reinforcement-learning, and Q-learning that can be used to make optimal decisions in the generalized model under various assumptions. Applications of the theory to particular models are described, including risk-averse mdps, exploration-sensitive mdps, sarsa, Q-learning with spreading, two-player games, and approximate max picking via sampling. Central to the results are the contraction property of the value operator and a stochastic-approximation theorem that reduces asynchronous convergence to synchronous convergence. ",
"neighbors": [
45,
57,
167,
210,
473,
566,
633,
671,
749,
775,
804,
1137,
1540,
1687,
2078,
2221,
2404
],
"mask": "Train"
},
{
"node_id": 1460,
"label": 6,
"text": "Title: Learning from Queries and Examples with Tree-structured Bias \nAbstract: Incorporating declarative bias or prior knowledge into learning is an active research topic in machine learning. Tree-structured bias specifies the prior knowledge as a tree of \"relevance\" relationships between attributes. This paper presents a learning algorithm that implements tree-structured bias, i.e., learns any target function probably approximately correctly from random examples and membership queries if it obeys a given tree-structured bias. The theoretical predictions of the paper are em pirically validated.",
"neighbors": [
638,
672,
924
],
"mask": "Train"
},
{
"node_id": 1461,
"label": 2,
"text": "Title: Learning in Boltzmann Trees \nAbstract: We introduce a large family of Boltzmann machines that can be trained using standard gradient descent. The networks can have one or more layers of hidden units, with tree-like connectivity. We show how to implement the supervised learning algorithm for these Boltzmann machines exactly, without resort to simulated or mean-field annealing. The stochastic averages that yield the gradients in weight space are computed by the technique of decimation. We present results on the problems of N -bit parity and the detection of hidden symmetries.",
"neighbors": [
304,
954,
1288,
1357,
1593
],
"mask": "Test"
},
{
"node_id": 1462,
"label": 2,
"text": "Title: Learning from Bad Data \nAbstract: The data describing resolutions to telephone network local loop \"troubles,\" from which we wish to learn rules for dispatching technicians, are notoriously unreliable. Anecdotes abound detailing reasons why a resolution entered by a technician would not be valid, ranging from sympathy to fear to ignorance to negligence to management pressure. In this paper, we describe four different approaches to dealing with the problem of \"bad\" data in order first to determine whether machine learning has promise in this domain, and then to determine how well machine learning might perform. We then offer evidence that machine learning can help to build a dispatching method that will perform better than the system currently in place.",
"neighbors": [
1457,
1657
],
"mask": "Train"
},
{
"node_id": 1463,
"label": 3,
"text": "Title: Bias, variance and prediction error for classification rules \nAbstract: We study the notions of bias and variance for classification rules. Following Efron (1978) we develop a decomposition of prediction error into its natural components. Then we derive bootstrap estimates of these components and illustrate how they can be used to describe the error behaviour of a classifier in practice. In the process we also obtain a bootstrap estimate of the error of a \"bagged\" classifier. ",
"neighbors": [
931,
999,
1053,
1361,
1399,
1405,
1406,
1512
],
"mask": "Train"
},
{
"node_id": 1464,
"label": 2,
"text": "Title: LINEAR SYSTEMS WITH SIGN-OBSERVATIONS \nAbstract: This paper deals with systems that are obtained from linear time-invariant continuous-or discrete-time devices followed by a function that just provides the sign of each output. Such systems appear naturally in the study of quantized observations as well as in signal processing and neural network theory. Results are given on observability, minimal realizations, and other system-theoretic concepts. Certain major differences exist with the linear case, and other results generalize in a surprisingly straightforward manner. ",
"neighbors": [
200,
1021,
1100,
1254,
1470
],
"mask": "Train"
},
{
"node_id": 1465,
"label": 0,
"text": "Title: REPRO: Supporting Flowsheet Design by Case-Base Retrieval \nAbstract: Case-Based Reasoning (CBR) paradigm is very close to the designer behavior during the conceptual design, and seems to be a fruitable computer aided-design approach if a library of design cases is available. The goal of this paper is to presents the general framework of a case-based retrieval system: REPRO, that supports chemical process design. The crucial problems like the case representation and structural similarity measure are widely described. The presented experimental results and the expert evaluation shows usefulness of the described system in real world problems. The papers ends with discussion concerning research problems and future work.",
"neighbors": [
1354
],
"mask": "Train"
},
{
"node_id": 1466,
"label": 1,
"text": "Title: A STUDY OF CROSSOVER OPERATORS IN GENETIC PROGRAMMING \nAbstract: Holland's analysis of the sources of power of genetic algorithms has served as guidance for the applications of genetic algorithms for more than 15 years. The technique of applying a recombination operator (crossover) to a population of individuals is a key to that power. Neverless, there have been a number of contradictory results concerning crossover operators with respect to overall performance. Recently, for example, genetic algorithms were used to design neural network modules and their control circuits. In these studies, a genetic algorithm without crossover outperformed a genetic algorithm with crossover. This report re-examines these studies, and concludes that the results were caused by a small population size. New results are presented that illustrate the effectiveness of crossover when the population size is larger. From a performance view, the results indicate that better neural networks can be evolved in a shorter time if the genetic algorithm uses crossover. ",
"neighbors": [
728,
943,
1016,
1650
],
"mask": "Test"
},
{
"node_id": 1467,
"label": 1,
"text": "Title: Adaptive Strategy Selection for Concept Learning \nAbstract: In this paper, we explore the use of genetic algorithms (GAs) to construct a system called GABIL that continually learns and refines concept classification rules from its interac - tion with the environment. The performance of this system is compared with that of two other concept learners (NEWGEM and C4.5) on a suite of target concepts. From this comparison, we identify strategies responsible for the success of these concept learners. We then implement a subset of these strategies within GABIL to produce a multistrategy concept learner. Finally, this multistrategy concept learner is further enhanced by allowing the GAs to adaptively select the appropriate strategies. ",
"neighbors": [
163,
793,
1333
],
"mask": "Train"
},
{
"node_id": 1468,
"label": 6,
"text": "Title: Preventing \"Overfitting\" of Cross-Validation Data \nAbstract: Suppose that, for a learning task, we have to select one hypothesis out of a set of hypotheses (that may, for example, have been generated by multiple applications of a randomized learning algorithm). A common approach is to evaluate each hypothesis in the set on some previously unseen cross-validation data, and then to select the hypothesis that had the lowest cross-validation error. But when the cross-validation data is partially corrupted such as by noise, and if the set of hypotheses we are selecting from is large, then \"folklore\" also warns about \"overfitting\" the cross-validation data [Klockars and Sax, 1986, Tukey, 1949, Tukey, 1953]. In this paper, we explain how this \"overfitting\" really occurs, and show the surprising result that it can be overcome by selecting a hypothesis with a higher cross-validation error, over others with lower cross-validation errors. We give reasons for not selecting the hypothesis with the lowest cross-validation error, and propose a new algorithm, LOOCVCV, that uses a computa-tionally efficient form of leave-one-out cross-validation to select such a hypothesis. Finally, we present experimental results for one domain, that show LOOCVCV consistently beating picking the hypothesis with the lowest cross-validation error, even when using reasonably large cross-validation sets. ",
"neighbors": [
638,
847,
848
],
"mask": "Test"
},
{
"node_id": 1469,
"label": 6,
"text": "Title: Warning: missing six few referencesfixed in proceedings. Learning with Queries but Incomplete Information (Extended Abstract) \nAbstract: We investigate learning with membership and equivalence queries assuming that the information provided to the learner is incomplete. By incomplete we mean that some of the membership queries may be answered by I don't know. This model is a worst-case version of the incomplete membership query model of Angluin and Slonim. It attempts to model practical learning situations, including an experiment of Lang and Baum that we describe, where the teacher may be unable to answer reliably some queries that are critical for the learning algorithm. We present algorithms to learn monotone k-term DNF with membership queries only, and to learn monotone DNF with membership and equivalence queries. Compared to the complete information case, the query complexity increases by an additive term linear in the number of I don't know answers received. We also observe that the blowup in the number of queries can in general be exponential for both our new model and the incomplete membership model.",
"neighbors": [
1003,
1004,
1364,
1456,
1661,
1705
],
"mask": "Test"
},
{
"node_id": 1470,
"label": 2,
"text": "Title: Interconnected Automata and Linear Systems: A Theoretical Framework in Discrete-Time In Hybrid Systems III: Verification\nAbstract: This paper summarizes the definitions and several of the main results of an approach to hybrid systems, which combines finite automata and linear systems, developed by the author in the early 1980s. Some related more recent results are briefly mentioned as well. ",
"neighbors": [
411,
1037,
1464
],
"mask": "Test"
},
{
"node_id": 1471,
"label": 2,
"text": "Title: New Characterizations of Input to State Stability \nAbstract: We present new characterizations of the Input to State Stability property. As a consequence of these results, we show the equivalence between the ISS property and several (apparent) variations proposed in the literature. ",
"neighbors": [
447,
693,
1281,
1282,
1501,
1633
],
"mask": "Train"
},
{
"node_id": 1472,
"label": 2,
"text": "Title: A SUCCESSIVE LINEAR PROGRAMMING APPROACH FOR INITIALIZATION AND REINITIALIZATION AFTER DISCONTINUITIES OF DIFFERENTIAL ALGEBRAIC EQUATIONS \nAbstract: Determination of consistent initial conditions is an important aspect of the solution of differential algebraic equations (DAEs). Specification of inconsistent initial conditions, even if they are slightly inconsistent, often leads to a failure in the initialization problem. In this paper, we present a Successive Linear Programming (SLP) approach for the solution of the DAE derivative array equations for the initialization problem. The SLP formulation handles roundoff errors and inconsistent user specifications among others and allows for reliable convergence strategies that incorporate variable bounds and trust region concepts. A new consistent set of initial conditions is obtained by minimizing the deviation of the variable values from the specified ones. For problems with discontinuities caused by a step change in the input functions, a new criterion is presented for identifying the subset of variables which are continuous across the discontinuity. The LP formulation is then applied to determine a consistent set of initial conditions for further solution of the problem in the domain after the discontinuity. Numerous example problems are solved to illustrate these concepts. ",
"neighbors": [
878
],
"mask": "Test"
},
{
"node_id": 1473,
"label": 1,
"text": "Title: Evolving Non-Determinism: An Inventive and Efficient Tool for Optimization and Discovery of Strategies \nAbstract: In the field of optimization and machine learning techniques, some very efficient and promising tools like Genetic Algorithms (GAs) and Hill-Climbing have been designed. In this same field, the Evolving Non-Determinism (END) model proposes an inventive way to explore the space of states that, combined with the use of simulated co-evolution, remedies some drawbacks of these previous techniques and even allow this model to outperform them on some difficult problems. This new model has been applied to the sorting network problem, a reference problem that challenged many computer scientists, and an original one-player game named Solitaire. For the first problem, the END model has been able to build from scratch some sorting networks as good as the best known for the 16-input problem. It even improved by one comparator a 25 years old result for the 13-input problem! For the Solitaire game, END evolved a strategy comparable to a human designed strategy. ",
"neighbors": [
380,
1249,
1474
],
"mask": "Train"
},
{
"node_id": 1474,
"label": 1,
"text": "Title: Incremental Co-evolution of Organisms: A New Approach for Optimization and Discovery of Strategies \nAbstract: In the field of optimization and machine learning techniques, some very efficient and promising tools like Genetic Algorithms (GAs) and Hill-Climbing have been designed. In this same field, the Evolving Non-Determinism (END) model presented in this paper proposes an inventive way to explore the space of states that, using the simulated \"incremental\" co-evolution of some organisms, remedies some drawbacks of these previous techniques and even allow this model to outperform them on some difficult problems. This new model has been applied to the sorting network problem, a reference problem that challenged many computer scientists, and an original one-player game named Solitaire. For the first problem, the END model has been able to build from \"scratch\" some sorting networks as good as the best known for the 16-input problem. It even improved by one comparator a 25 years old result for the 13-input problem. For the Solitaire game, END evolved a strategy comparable to a human designed strategy. ",
"neighbors": [
380,
1249,
1473,
1728
],
"mask": "Train"
},
{
"node_id": 1475,
"label": 0,
"text": "Title: Within the Letter of the Law: open-textured planning \nAbstract: Most case-based reasoning systems have used a single \"best\" or \"most similar\" case as the basis for a solution. For many problems, however, there is no single exact solution. Rather, there is a range of acceptable answers. We use cases not only as a basis for a solution, but also to indicate the boundaries within which a solution can be found. We solve problems by choosing some point within those boundaries. In this paper, I discuss this use of cases with illustrations from chiron, a system I have implemented in the domain of personal income tax planning.",
"neighbors": [
801,
1642
],
"mask": "Validation"
},
{
"node_id": 1476,
"label": 1,
"text": "Title: Program Optimization for Faster Genetic Programming \nAbstract: We have used genetic programming to develop efficient image processing software. The ultimate goal of our work is to detect certain signs of breast cancer that cannot be detected with current segmentation and classification methods. Traditional techniques do a relatively good job of segmenting and classifying small-scale features of mammo-grams, such as micro-calcification clusters. Our strongly-typed genetic programs work on a multi-resolution representation of the mammogram, and they are aimed at handling features at medium and large scales, such as stel-lated lesions and architectural distortions. The main problem is efficiency. We employ program optimizations that speed up the evolution process by more than a factor of ten. In this paper we present our genetic programming system, and we describe our optimization techniques.",
"neighbors": [
1178,
1277
],
"mask": "Train"
},
{
"node_id": 1477,
"label": 2,
"text": "Title: Two Constructive Methods for Designing Compact Feedforward Networks of Threshold Units \nAbstract: We propose two algorithms for constructing and training compact feedforward networks of linear threshold units. The Shift procedure constructs networks with a single hidden layer while the PTI constructs multilayered networks. The resulting networks are guaranteed to perform any given task with binary or real-valued inputs. The various experimental results reported for tasks with binary and real inputs indicate that our methods compare favorably with alternative procedures deriving from similar strategies, both in terms of size of the resulting networks and of their generalization properties. ",
"neighbors": [
1252
],
"mask": "Validation"
},
{
"node_id": 1478,
"label": 3,
"text": "Title: Lazy Bayesian Trees \nAbstract: The naive Bayesian classifier is simple and effective, but its attribute independence assumption is often violated in the real world. A number of approaches have been developed that seek to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree, and generates a local Bayesian classifier at each leaf instead of predicting a single class. However, Bayesian tree learning still suffers from the replication and fragmentation problems of tree learning. While inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for those leaves with few training examples. This paper proposes a novel lazy Bayesian tree learning algorithm. For each test example, it conceptually builds a most appropriate Bayesian tree. In practice, only one path with a local Bayesian classifier at its leaf is created. Experiments with a wide variety of real-world and artificial domains show that this new algorithm has significantly lower overall prediction error rates than a naive Bayesian classifier, C4.5, and a Bayesian tree learning algorithm. ",
"neighbors": [
1335,
1336,
2338
],
"mask": "Train"
},
{
"node_id": 1479,
"label": 3,
"text": "Title: Revising Bayesian Network Parameters Using Backpropagation \nAbstract: The problem of learning Bayesian networks with hidden variables is known to be a hard problem. Even the simpler task of learning just the conditional probabilities on a Bayesian network with hidden variables is hard. In this paper, we present an approach that learns the conditional probabilities on a Bayesian network with hidden variables by transforming it into a multi-layer feedforward neural network (ANN). The conditional probabilities are mapped onto weights in the ANN, which are then learned using standard backpropagation techniques. To avoid the problem of exponentially large ANNs, we focus on Bayesian networks with noisy-or and noisy-and nodes. Experiments on real world classification problems demonstrate the effectiveness of our technique. ",
"neighbors": [
136,
1102,
2017,
2543
],
"mask": "Test"
},
{
"node_id": 1480,
"label": 2,
"text": "Title: Learning in the Presence of Prior Knowledge: A Case Study Using Model Calibration \nAbstract: Computational models of natural systems often contain free parameters that must be set to optimize the predictive accuracy of the models. This process|called calibration|can be viewed as a form of supervised learning in the presence of prior knowledge. In this view, the fixed aspects of the model constitute the prior knowledge, and the goal is to learn correct values for the free parameters. We report on a series of attempts to learn parameter values for a global vegetation model called MAPSS (Mapped Atmosphere-Plant-Soil System) developed by our collaborator, Ron Neilson. Unfortunately, attempts to apply standard machine learning methods|specifically global error functions and gradient descent search|do not work with MAPSS, because the constraints introduced by the structure of the model (the prior knowledge) create a very difficult non-linear optimization problem. Successful calibration of MAPSS required taking a divide-and-conquer approach in which subsets of the parameters were calibrated while others were held constant. This approach was made possible by carefully selecting training sets that exercised only portions of the model and by designing error functions for each part that had desirable properties. The automated calibration tool that we have developed is currently being applied to calibrate MAPSS against a global climate data set. ",
"neighbors": [
924,
1532
],
"mask": "Test"
},
{
"node_id": 1481,
"label": 1,
"text": "Title: An Evolutionary Approach to Learning in Robots \nAbstract: Evolutionary learning methods have been found to be useful in several areas in the development of intelligent robots. In the approach described here, evolutionary algorithms are used to explore alternative robot behaviors within a simulation model as a way of reducing the overall knowledge engineering effort. This paper presents some initial results of applying the SAMUEL genetic learning system to a collision avoidance and navigation task for mobile robots.",
"neighbors": [
910,
964,
965,
966,
1573
],
"mask": "Validation"
},
{
"node_id": 1482,
"label": 6,
"text": "Title: Generalizations of the Bias/Variance Decomposition for Prediction Error \nAbstract: The bias and variance of a real valued random variable, using squared error loss, are well understood. However because of recent developments in classification techniques it has become desirable to extend these concepts to general random variables and loss functions. The 0-1 (misclassification) loss function with categorical random variables has been of particular interest. We explore the concepts of variance and bias and develop a decomposition of the prediction error into functions of the systematic and variable parts of our predictor. After providing some examples we conclude with a discussion of the various definitions that have been proposed.",
"neighbors": [
1484
],
"mask": "Train"
},
{
"node_id": 1483,
"label": 0,
"text": "Title: Context-Based Similarity Applied to Retrieval of Relevant Cases \nAbstract: Retrieving relevant cases is a crucial component of case-based reasoning systems. The task is to use user-defined query to retrieve useful information, i.e., exact matches or partial matches which are close to query-defined request according to certain measures. The difficulty stems from the fact that it may not be easy (or it may be even impossible) to specify query requests precisely and completely resulting in a situation known as a fuzzy-querying. It is usually not a problem for small domains, but for a large repositories which store various information (multifunctional information bases or a federated databases), a request specification becomes a bottleneck. Thus, a flexible retrieval algorithm is required, allowing for imprecise query specification and for changing the viewpoint. Efficient database techniques exists for locating exact matches. Finding relevant partial matches might be a problem. This document proposes a context-based similarity as a basis for flexible retrieval. Historical bacground on research in similarity assessment is presented and is used as a motivation for formal definition of context-based similarity. We also describe a similarity-based retrieval system for multifunctinal information bases. ",
"neighbors": [
857,
1123,
1125,
1498,
2052,
2060
],
"mask": "Test"
},
{
"node_id": 1484,
"label": 6,
"text": "Title: Experiments with a New Boosting Algorithm \nAbstract: In an earlier paper, we introduced a new boosting algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the related notion of a pseudo-loss which is a method for forcing a learning algorithm of multi-label concepts to concentrate on the labels that are hardest to discriminate. In this paper, we describe experiments we carried out to assess how well AdaBoost with and without pseudo-loss, performs on real learning problems. We performed two sets of experiments. The first set compared boosting to Breiman's bagging method when used to aggregate various classifiers (including decision trees and single attribute-value tests). We compared the performance of the two methods on a collection of machine-learning benchmarks. In the second set of experiments, we studied in more detail the performance of boosting using a nearest-neighbor classifier on an OCR problem. ",
"neighbors": [
822,
931,
1057,
1092,
1197,
1220,
1237,
1430,
1482,
1500,
1521,
1522,
1692
],
"mask": "Train"
},
{
"node_id": 1485,
"label": 2,
"text": "Title: Maximizing the Robustness of a Linear Threshold Classifier with Discrete Weights \nAbstract: Quantization of the parameters of a Perceptron is a central problem in hardware implementation of neural networks using a numerical technology. An interesting property of neural networks used as classifiers is their ability to provide some robustness on input noise. This paper presents efficient learning algorithms for the maximization of the robustness of a Perceptron and especially designed to tackle the combinatorial problem arising from the discrete weights. ",
"neighbors": [
1159,
1252
],
"mask": "Train"
},
{
"node_id": 1486,
"label": 5,
"text": "Title: Induction of decision trees and Bayesian classification applied to diagnosis of sport injuries \nAbstract: Machine learning techniques can be used to extract knowledge from data stored in medical databases. In our application, various machine learning algorithms were used to extract diagnostic knowledge to support the diagnosis of sport injuries. The applied methods include variants of the Assistant algorithm for top-down induction of decision trees, and variants of the Bayesian classifier. The available dataset was insufficent for reliable diagnosis of all sport injuries considered by the system. Consequently, expert-defined diagnostic rules were added and used as pre-classifiers or as generators of additional training instances for injuries with few training examples. Experimental results show that the classification accuracy and the explanation capability of the naive Bayesian classifier with the fuzzy discretization of numerical attributes was superior to other methods and was estimated as the most appro priate for practical use.",
"neighbors": [
426,
1008,
1569
],
"mask": "Train"
},
{
"node_id": 1487,
"label": 2,
"text": "Title: A Divide-and-Conquer Approach to Learning from Prior Knowledge \nAbstract: This paper introduces a new machine learning task|model calibration|and presents a method for solving a particularly difficult model calibration task that arose as part of a global climate change research project. The model calibration task is the problem of training the free parameters of a scientific model in order to optimize the accuracy of the model for making future predictions. It is a form of supervised learning from examples in the presence of prior knowledge. An obvious approach to solving calibration problems is to formulate them as global optimization problems in which the goal is to find values for the free parameters that minimize the error of the model on training data. Unfortunately, this global optimization approach becomes computationally infeasible when the model is highly nonlinear. This paper presents a new divide-and-conquer method that analyzes the model to identify a series of smaller optimization problems whose sequential solution solves the global calibration problem. This paper argues that methods of this kind|rather than global optimization techniques|will be required in order for agents with large amounts of prior knowledge to learn efficiently. ",
"neighbors": [
1528,
1532
],
"mask": "Train"
},
{
"node_id": 1488,
"label": 2,
"text": "Title: Identification and Control of Nonlinear Systems Using Neural Network Models: Design and Stability Analysis \nAbstract: Report 91-09-01 September 1991 (revised) May 1994 ",
"neighbors": [
206,
427,
611,
980,
1490,
1668
],
"mask": "Test"
},
{
"node_id": 1489,
"label": 5,
"text": "Title: Dlab: A Declarative Language Bias Formalism \nAbstract: We describe the principles and functionalities of Dlab (Declarative LAnguage Bias). Dlab can be used in inductive learning systems to define syntactically and traverse efficiently finite subspaces of first order clausal logic, be it a set of propositional formulae, association rules, Horn clauses, or full clauses. A Prolog implementation of Dlab is available by ftp access. Keywords: declarative language bias, concept learning, knowledge dis covery",
"neighbors": [
177,
837
],
"mask": "Train"
},
{
"node_id": 1490,
"label": 2,
"text": "Title: FEEDBACK STABILIZATION USING TWO-HIDDEN-LAYER NETS \nAbstract: This paper compares the representational capabilities of one hidden layer and two hidden layer nets consisting of feedforward interconnections of linear threshold units. It is remarked that for certain problems two hidden layers are required, contrary to what might be in principle expected from the known approximation theorems. The differences are not based on numerical accuracy or number of units needed, nor on capabilities for feature extraction, but rather on a much more basic classification into \"direct\" and \"inverse\" problems. The former correspond to the approximation of continuous functions, while the latter are concerned with approximating one-sided inverses of continuous functions |and are often encountered in the context of inverse kinematics determination or in control questions. A general result is given showing that nonlinear control systems can be stabilized using two hidden layers, but not in general using just one. ",
"neighbors": [
206,
531,
1488
],
"mask": "Train"
},
{
"node_id": 1491,
"label": 6,
"text": "Title: Discovery as Autonomous Learning from the Environment \nAbstract: Discovery involves collaboration among many intelligent activities. However, little is known about how and in what form such collaboration occurs. In this paper, a framework is proposed for autonomous systems that learn and discover from their environment. Within this framework, many intelligent activities such as perception, action, exploration, experimentation, learning, problem solving, and new term construction can be integrated in a coherent way. The framework is presented in detail through an implemented system called LIVE, and is evaluated through the performance of LIVE on several discovery tasks. The conclusion is that autonomous learning from the environment is a feasible approach for integrating the activities involved in a discovery process.",
"neighbors": [
851,
903,
1390,
1605
],
"mask": "Train"
},
{
"node_id": 1492,
"label": 6,
"text": "Title: Predicting Time Series with Support Vector Machines \nAbstract: Support Vector Machines are used for time series prediction and compared to radial basis function networks. We make use of two different cost functions for Support Vectors: training with (i) an * insensitive loss and (ii) Huber's robust loss function and discuss how to choose the regularization parameters in these models. Two applications are considered: data from (a) a noisy (normal and uniform noise) Mackey Glass equation and (b) the Santa Fe competition (set D). In both cases Support Vector Machines show an excellent performance. In case (b) the Support Vector approach improves the best known result on the benchmark by a factor of 37%.",
"neighbors": [
1050,
1724
],
"mask": "Train"
},
{
"node_id": 1493,
"label": 2,
"text": "Title: Evaluation of Pattern Classifiers for Fingerprint and OCR Applications \nAbstract: In this paper we evaluate the classification accuracy of four statistical and three neural network classifiers for two image based pattern classification problems. These are fingerprint classification and optical character recognition (OCR) for isolated handprinted digits. The evaluation results reported here should be useful for designers of practical systems for these two important commercial applications. For the OCR problem, the Karhunen-Loeve (K-L) transform of the images is used to generate the input feature set. Similarly for the fingerprint problem, the K-L transform of the ridge directions is used to generate the input feature set. The statistical classifiers used were Euclidean minimum distance, quadratic minimum distance, normal, and k-nearest neighbor. The neural network classifiers used were multilayer perceptron, radial basis function, and probabilistic. The OCR data consisted of 7,480 digit images for training and 23,140 digit images for testing. The fingerprint data consisted of 2,000 training and 2,000 testing images. In addition to evaluation for accuracy, the multilayer perceptron and radial basis function networks were evaluated for size and generalization capability. For the evaluated datasets the best accuracy obtained for either problem was provided by the probabilistic neural network, where the minimum classification error was 2.5% for OCR and 7.2% for fingerprints. ",
"neighbors": [
611,
774,
867,
1732
],
"mask": "Train"
},
{
"node_id": 1494,
"label": 2,
"text": "Title: Avoiding Saturation By Trajectory Reparameterization \nAbstract: The problem of trajectory tracking in the presence of input constraints is considered. The desired trajectory is reparameterized on a slower time scale in order to avoid input saturation. Necessary conditions that the reparameterizing function must satisfy are derived. The deviation from the nominal trajectory is minimized by formulating the problem as an optimal control problem. ",
"neighbors": [
1282
],
"mask": "Validation"
},
{
"node_id": 1495,
"label": 1,
"text": "Title: Clique Detection via Genetic Programming Topics in Combinatorial Optimization \nAbstract: Genetic Programming is utilized as a technique for detecting cliques in a network. Candidate cliques are represented in lists, and the lists are manipulated such that larger cliques are formed from the candidates. The clique detection problem has some interesting implications to the Strongly Typed Genetic Programming paradigm, namely in forming a class hierarchy. The problem is also useful in that it is easy to add noise. ",
"neighbors": [
995,
1231
],
"mask": "Validation"
},
{
"node_id": 1496,
"label": 0,
"text": "Title: Adaptive Similarity Assessment for Case-Based Explanation.* \nAbstract: Genetic Programming is utilized as a technique for detecting cliques in a network. Candidate cliques are represented in lists, and the lists are manipulated such that larger cliques are formed from the candidates. The clique detection problem has some interesting implications to the Strongly Typed Genetic Programming paradigm, namely in forming a class hierarchy. The problem is also useful in that it is easy to add noise. ",
"neighbors": [
1125
],
"mask": "Train"
},
{
"node_id": 1497,
"label": 0,
"text": "Title: Combining Rules and Cases to Learn Case Adaptation \nAbstract: Computer models of case-based reasoning (CBR) generally guide case adaptation using a fixed set of adaptation rules. A difficult practical problem is how to identify the knowledge required to guide adaptation for particular tasks. Likewise, an open issue for CBR as a cognitive model is how case adaptation knowledge is learned. We describe a new approach to acquiring case adaptation knowledge. In this approach, adaptation problems are initially solved by reasoning from scratch, using abstract rules about structural transformations and general memory search heuristics. Traces of the processing used for successful rule-based adaptation are stored as cases to enable future adaptation to be done by case-based reasoning. When similar adaptation problems are encountered in the future, these adaptation cases provide task- and domain-specific guidance for the case adaptation process. We present the tenets of the approach concerning the relationship between memory search and case adaptation, the memory search process, and the storage and reuse of cases representing adaptation episodes. These points are discussed in the context of ongoing research on DIAL, a computer model that learns case adaptation knowledge for case-based disaster response planning. ",
"neighbors": [
580,
901,
1126,
1163,
1212
],
"mask": "Test"
},
{
"node_id": 1498,
"label": 0,
"text": "Title: INFERENTIAL THEORY OF LEARNING: Developing Foundations for Multistrategy Learning \nAbstract: The development of multistrategy learning systems should be based on a clear understanding of the roles and the applicability conditions of different learning strategies. To this end, this chapter introduces the Inferential Theory of Learning that provides a conceptual framework for explaining logical capabilities of learning strategies, i.e., their competence. Viewing learning as a process of modifying the learners knowledge by exploring the learners experience, the theory postulates that any such process can be described as a search in a knowledge space, triggered by the learners experience and guided by learning goals. The search operators are instantiations of knowledge transmutations, which are generic patterns of knowledge change. Transmutations may employ any basic type of inferencededuction, induction or analogy. Several fundamental knowledge transmutations are described in a novel and general way, such as generalization, abstraction, explanation and similization, and their counterparts, specialization, concretion, prediction and dissimilization, respectively. Generalization enlarges the reference set of a description (the set of entities that are being described). Abstraction reduces the amount of the detail about the reference set. Explanation generates premises that explain (or imply) the given properties of the reference set. Similization transfers knowledge from one reference set to a similar reference set. Using concepts of the theory, a multistrategy task-adaptive learning (MTL) methodology is outlined, and illustrated b y an example. MTL dynamically adapts strategies to the learning task, defined by the input information, learners background knowledge, and the learning goal. It aims at synergistically integrating a whole range of inferential learning strategies, such as empirical generalization, constructive induction, deductive generalization, explanation, prediction, abstraction, and similization. ",
"neighbors": [
163,
289,
582,
1049,
1071,
1163,
1351,
1483,
1534,
2158,
2398,
2450
],
"mask": "Validation"
},
{
"node_id": 1499,
"label": 2,
"text": "Title: Comparing Support Vector Machines with Gaussian Kernels to Radial Basis Function Classifiers \nAbstract: The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold such as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US postal service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application. This report describes research done at the Center for Biological and Computational Learning, the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, and at AT&T Bell Laboratories (now AT&T Research, and Lucent Technologies Bell Laboratories). Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. BS thanks the M.I.T. for hospitality during a three-week visit in March 1995, where this work was started. At the time of the study, BS, CB, and VV were with AT&T Bell Laboratories, NJ; KS, FG, PN, and TP were with the Massachusetts Institute of Technology. KS is now with the Department of Information Systems and Computer Science at the National University of Singapore, Lower Kent Ridge Road, Singapore 0511; CB and PN are with Lucent Technologies, Bell Laboratories, NJ; VV is with AT&T Research, NJ. BS was supported by the Studienstiftung des deutschen Volkes; CB was supported by ARPA under ONR contract number N00014-94-C-0186. We thank A. Smola for useful discussions. Please direct correspondence to Bernhard Scholkopf, bs@mpik-tueb.mpg.de, Max-Planck-Institut fur biologische Kybernetik, Spemannstr. 38, 72076 Tubingen, Germany. ",
"neighbors": [
611,
1050,
1306,
1310
],
"mask": "Train"
},
{
"node_id": 1500,
"label": 6,
"text": "Title: On the Induction of Intelligible Ensembles \nAbstract: Ensembles of classifiers, e.g. decision trees, often exhibit greater predictive accuracy than single classifiers alone. Bagging and boosting are two standard ways of generating and combining multiple classifiers. Unfortunately, the increase in predictive performance is usually linked to a dramatic decrease in intelligibility: ensembles are more or less black boxes comparable to neural networks. So far attempts at pruning of ensembles have not been very successful, approximately reducing ensembles into half. This paper describes a different approach which both tries to keep ensemble-sizes small during induction already and also limits the complexity of single classifiers rigorously. Single classifiers are decision-stumps of a prespecified maximal depth. They are combined by majority voting. Ensembles are induced and pruned by a simple hill-climbing procedure. These ensembles can reasonably be transformed into equivalent decision trees. We conduct some empirical evaluation to investigate both predictive accuracies and classifier complexities.",
"neighbors": [
1238,
1267,
1484,
2180
],
"mask": "Train"
},
{
"node_id": 1501,
"label": 2,
"text": "Title: A Characterization of Integral Input to State Stability \nAbstract: Just as input to state stability (iss) generalizes the idea of finite gains with respect to supremum norms, the new notion of integral input to state stability (iiss) generalizes the concept of finite gain when using an integral norm on inputs. In this paper, we obtain a necessary and sufficient characterization of the iiss property, expressed in terms of dissipation inequalities. ",
"neighbors": [
447,
693,
1471
],
"mask": "Validation"
},
{
"node_id": 1502,
"label": 3,
"text": "Title: Belief Networks, Hidden Markov Models, and Markov Random Fields: a Unifying View \nAbstract: The use of graphs to represent independence structure in multivariate probability models has been pursued in a relatively independent fashion across a wide variety of research disciplines since the beginning of this century. This paper provides a brief overview of the current status of such research with particular attention to recent developments which have served to unify such seemingly disparate topics as probabilistic expert systems, statistical physics, image analysis, genetics, decoding of error-correcting codes, Kalman filters, and speech recognition with Markov models.",
"neighbors": [
577,
772,
1288,
1393
],
"mask": "Validation"
},
{
"node_id": 1503,
"label": 3,
"text": "Title: Belief Revision in Probability Theory \nAbstract: In a probability-based reasoning system, Bayes' theorem and its variations are often used to revise the system's beliefs. However, if the explicit conditions and the implicit conditions of probability assignments are properly distinguished, it follows that Bayes' theorem is not a generally applicable revision rule. Upon properly distinguishing belief revision from belief updating, we see that Jeffrey's rule and its variations are not revision rules, either. Without these distinctions, the limitation of the Bayesian approach is often ignored or underestimated. Revision, in its general form, cannot be done in the Bayesian approach, because a probability distribution function alone does not contain the information needed by the operation.",
"neighbors": [
1108,
1276,
1308,
1504,
1507,
1525
],
"mask": "Train"
},
{
"node_id": 1504,
"label": 3,
"text": "Title: From Inheritance Relation to Non-Axiomatic Logic \nAbstract: At the beginning of the paper, three binary term logics are defined. The first is based only on an inheritance relation. The second and the third suggest a novel way to process extension and intension, and they also have interesting relations with Aristotle's syllogistic logic. Based on the three simple systems, a Non-Axiomatic Logic is defined. It has a term-oriented language and an experience-grounded semantics. It can uniformly represents and processes randomness, fuzziness, and ignorance. It can also uniformly carries out deduction, abduction, induction, and revision. ",
"neighbors": [
1108,
1276,
1308,
1415,
1503,
1506,
1507,
1525
],
"mask": "Train"
},
{
"node_id": 1505,
"label": 6,
"text": "Title: Probably Approximately Optimal Derivation Strategies \nAbstract: An inference graph can have many \"derivation strategies\", each a particular ordering of the steps involved in reducing a given query to a sequence of database retrievals. An \"optimal strategy\" for a given distribution of queries is a complete strategy whose \"expected cost\" is minimal, where the expected cost depends on the conditional probabilities that each requested retrieval succeeds, given that a member of this class of queries is posed. This paper describes the PAO algorithm that first uses a set of training examples to approximate these probability values, and then uses these estimates to produce a \"probably approximately optimal\" strategy | i.e., given any *; ffi > 0, PAO produces a strategy whose cost is within * of the cost of the optimal strategy, with probability greater than 1 ffi. This paper also shows how to obtain these strategies in time polynomial in 1=*, 1=ffi and the size of the inference graph, for many important classes of graphs, including all and-or trees. ",
"neighbors": [
251,
865,
932,
2560
],
"mask": "Train"
},
{
"node_id": 1506,
"label": 3,
"text": "Title: Non-Axiomatic Reasoning System (Version 2.2) used to show how the system works. The limitations of\nAbstract: NARS uses a new form of term logic, or an extended syllogism, in which several types of uncertainties can be represented and processed, and in which deduction, induction, abduction, and revision are carried out in a unified format. The system works in an asynchronously parallel way. The memory of the system is dynamically organized, and can also be interpreted as a network. ",
"neighbors": [
1108,
1276,
1308,
1415,
1504,
1507,
1525
],
"mask": "Train"
},
{
"node_id": 1507,
"label": 3,
"text": "Title: A Unified Treatment of Uncertainties \nAbstract: Uncertainty in artificial intelligence\" is an active research field, where several approaches have been suggested and studied for dealing with various types of uncertainty. However, it's hard to rank the approaches in general, because each of them is usually aimed at a special application environment. This paper begins by defining such an environment, then show why some existing approaches cannot be used in such a situation. Then a new approach, Non-Axiomatic Reasoning System, is introduced to work in the environment. The system is designed under the assumption that the system's knowledge and resources are usually insufficient to handle the tasks imposed by its environment. The system can consistently represent several types of uncertainty, and can carry out multiple operations on these uncertainties. Finally, the new approach is compared with the previous approaches in terms of uncertainty representation and interpretation.",
"neighbors": [
1108,
1308,
1503,
1504,
1506
],
"mask": "Train"
},
{
"node_id": 1508,
"label": 2,
"text": "Title: Segmenting Time Series using Gated Experts with Simulated Annealing \nAbstract: Many real-world time series are multi-stationary, where the underlying data generating process (DGP) switches between different stationary subprocesses, or modes of operation. An important problem in modeling such systems is to discover the underlying switching process, which entails identifying the number of subprocesses and the dynamics of each subprocess. For many time series, this problem is ill-defined, since there are often no obvious means to distinguish the different subprocesses. We discuss the use of nonlinear gated experts to perform the segmentation and system identification of the time series. Unlike standard gated experts methods, however, we use concepts from statistical physics to enhance the segmentation for high-noise problems where only a few experts are required.",
"neighbors": [
668,
1724
],
"mask": "Validation"
},
{
"node_id": 1509,
"label": 2,
"text": "Title: Synthetic Aperture Radar Processing by a Multiple Scale Neural System for Boundary and Surface Representation \nAbstract: Many real-world time series are multi-stationary, where the underlying data generating process (DGP) switches between different stationary subprocesses, or modes of operation. An important problem in modeling such systems is to discover the underlying switching process, which entails identifying the number of subprocesses and the dynamics of each subprocess. For many time series, this problem is ill-defined, since there are often no obvious means to distinguish the different subprocesses. We discuss the use of nonlinear gated experts to perform the segmentation and system identification of the time series. Unlike standard gated experts methods, however, we use concepts from statistical physics to enhance the segmentation for high-noise problems where only a few experts are required.",
"neighbors": [
589,
592,
1144
],
"mask": "Train"
},
{
"node_id": 1510,
"label": 6,
"text": "Title: The Problem with Noise and Small Disjuncts \nAbstract: Systems that learn from examples often create a disjunctive concept definition. The disjuncts in the concept definition which cover only a few training examples are referred to as small disjuncts. The problem with small disjuncts is that they are more error prone than large disjuncts, but may be necessary to achieve a high level of predictive accuracy [Holte, Acker, and Porter, 1989]. This paper extends previous work done on the problem of small disjuncts by taking noise into account. It investigates the assertion that it is hard to learn from noisy data because it is difficult to distinguish between noise and true exceptions. In the process of evaluating this assertion, insights are gained into the mechanisms by which noise affects learning. Two domains are investigated. The experimental results in this paper suggest that for both Shapiro's chess endgame domain [Shapiro, 1987] and for the Wisconsin breast cancer domain [Wolberg, 1990], the assertion is true, at least for low levels (5-10%) of class noise. ",
"neighbors": [
790,
1234,
2057
],
"mask": "Validation"
},
{
"node_id": 1511,
"label": 2,
"text": "Title: In Stable Dynamic Parameter Adaptation \nAbstract: ",
"neighbors": [
1357
],
"mask": "Train"
},
{
"node_id": 1512,
"label": 3,
"text": "Title: Cross-Validation and the Bootstrap: Estimating the Error Rate of a Prediction Rule \nAbstract: A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased, but can be highly variable. This article discusses bootstrap estimates of prediction error, which can be thought of as smoothed versions of cross-validation. A particular bootstrap method, the 632+ rule, is shown to substantially outperform cross-validation in a catalog of 24 simulation experiments. Besides providing point estimates, we also consider estimating the variability of an error rate estimate. All of the results here are nonparametric, and apply to any possible prediction rule: however we only study classification problems with 0-1 loss in detail. Our simulations include \"smooth\" prediction rules like Fisher's Linear Discriminant Function, and unsmooth ones like Nearest Neighbors.",
"neighbors": [
949,
999,
1087,
1112,
1235,
1267,
1335,
1463,
1608,
1671
],
"mask": "Train"
},
{
"node_id": 1513,
"label": 0,
"text": "Title: Fast NP Chunking Using Memory-Based Learning Techniques \nAbstract: In this paper we discuss the application of Memory-Based Learning (MBL) to fast NP chunking. We first discuss the application of a fast decision tree variant of MBL (IGTree) on the dataset described in (Ramshaw and Marcus, 1995), which consists of roughly 50,000 test and 200,000 train items. In a second series of experiments we used an architecture of two cascaded IGTrees. In the second level of this cascaded classifier we added context predictions as extra features so that incorrect predictions from the first level can be corrected, yielding a 97.2% generalisation accuracy with training and testing times in the order of seconds to minutes. ",
"neighbors": [
634,
785,
862,
1328,
1812
],
"mask": "Train"
},
{
"node_id": 1514,
"label": 6,
"text": "Title: Is Consistency Harmful? \nAbstract: We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a new goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems. ",
"neighbors": [
429,
1333
],
"mask": "Test"
},
{
"node_id": 1515,
"label": 4,
"text": "Title: Evolving Optimal Populations with XCS Classifier Systems \nAbstract: We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a new goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems. ",
"neighbors": [
163,
936,
961,
988,
1447,
1581,
1711
],
"mask": "Train"
},
{
"node_id": 1516,
"label": 1,
"text": "Title: Solving 3-SAT by GAs Adapting Constraint Weights \nAbstract: Handling NP complete problems with GAs is a great challenge. In particular the presence of constraints makes finding solutions hard for a GA. In this paper we present a problem independent constraint handling mechanism, Stepwise Adaptation of Weights (SAW), and apply it for solving the 3-SAT problem. Our experiments prove that the SAW mechanism substantially increases GA performance. Furthermore, we compare our SAW-ing GA with the best heuristic technique we could trace, WGSAT, and conclude that the GA is superior to the heuristic method. ",
"neighbors": [
833,
1136,
1218
],
"mask": "Train"
},
{
"node_id": 1517,
"label": 2,
"text": "Title: Robust Interpretation of Neural-Network Models \nAbstract: Artificial Neural Network seem very promising for regression and classification, especially for large covariate spaces. These methods represent a non-linear function as a composition of low dimensional ridge functions and therefore appear to be less sensitive to the dimensionality of the covariate space. However, due to non uniqueness of a global minimum and the existence of (possibly) many local minima, the model revealed by the network is non stable. We introduce a method to interpret neural network results which uses novel robustification techniques. This results in a robust interpretation of the model employed by the network. Simulated data from known models is used to demonstrate the interpretability results and to demonstrate the effects of different regularization methods on the robustness of the model. Graphical methods are introduced to present the interpretation results. We further demonstrate how interaction between covariates can be revealed. From this study we conclude that the interpretation method works well, but that NN models may sometimes be misinterpreted, especially if the approximations to the true model are less robust. ",
"neighbors": [
1612
],
"mask": "Validation"
},
{
"node_id": 1518,
"label": 1,
"text": "Title: Feature selection through Functional Links with Evolutionary Computation for Neural Networks \nAbstract: In this paper we describe different ways to select and transform features using evolutionary computation. The features are intended to serve as inputs to a feedforward network. The first way is the selection of features using a standard genetic algorithm, and the solution found specifies whether a certain feature should be present or not. We show that for the prediction of unemployment rates in various European countries, this is a succesfull approach. In fact, this kind of selection of features is a special case of so-called functional links. Functional links transform the input pattern space to a new pattern space. As functional links one can use polynomials, or more general functions. Both can be found using evolutionary computation. Polynomial functional links are found by evolving a coding of the powers of the polynomial. For symbolic functions we can use genetic programming. Genetic programming finds the symbolic functions that are to be applied to the inputs. We compare the workings of the latter two methods on two artificial datasets, and on a real-world medical image dataset.",
"neighbors": [
1536
],
"mask": "Train"
},
{
"node_id": 1519,
"label": 5,
"text": "Title: What online Machine Learning can do for Knowledge Acquisition A Case Study \nAbstract: This paper reports on the development of a realistic knowledge-based application using the MOBAL system. Some problems and requirements resulting from industrial-caliber tasks are formulated. A step-by-step account of the construction of a knowledge base for such a task demonstrates how the interleaved use of several learning algorithms in concert with an inference engine and a graphical interface can fulfill those requirements. Design, analysis, revision, refinement and extension of a working model are combined in one incremental process. This illustrates the balanced cooperative modeling approach. The case study is taken from the telecommunications domain and more precisely deals with security management in telecommunications networks. MOBAL would be used as part of a security management tool for acquiring, validating and refining a security policy. The modeling approach is compared with other approaches, such as KADS and stand-alone machine learning. ",
"neighbors": [
963,
1177
],
"mask": "Train"
},
{
"node_id": 1520,
"label": 2,
"text": "Title: Equivariant adaptive source separation \nAbstract: Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI (Equivariant Adaptive Separation via Independence). The EASI algorithms are based on the idea of serial updating: this specific form of matrix updates systematically yields algorithms with a simple, parallelizable structure, for both real and complex mixtures. Most importantly, the performance of an EASI algorithm does not depend on the mixing matrix. In particular, convergence rates, stability conditions and interference rejection levels depend only on the (normalized) distributions of the source signals. Close form expressions of these quantities are given via an asymptotic performance analysis. This is completed by some numerical experiments illustrating the effectiveness of the proposed approach. ",
"neighbors": [
59,
354,
570,
834,
839,
872,
873,
874,
920,
1072,
1200,
1211,
1245,
1246,
1258,
1524,
1526,
1709
],
"mask": "Train"
},
{
"node_id": 1521,
"label": 6,
"text": "Title: Improving Bagging Performance by Increasing Decision Tree Diversity \nAbstract: Ensembles of decision trees often exhibit greater predictive accuracy than single trees alone. Bagging and boosting are two standard ways of generating and combining multiple trees. Boosting has been empirically determined to be the more effective of the two, and it has recently been proposed that this may be because it produces more diverse trees than bagging. This paper reports empirical findings that strongly support this hypothesis. We enforce greater decision tree diversity in bagging by a simple modification of the underlying decision tree learner that utilizes randomly-generated decision stumps of predefined depth as the starting point for tree induction. The modified procedure yields very competitive results while still retaining one of the attractive properties of bagging: all iterations are independent. Additionally, we also investigate a possible integration of bagging and boosting. All these ensemble-generating procedures are compared empirically on various domains. ",
"neighbors": [
70,
1185,
1237,
1484
],
"mask": "Train"
},
{
"node_id": 1522,
"label": 6,
"text": "Title: Improving Bagging Performance by Increasing Decision Tree Diversity \nAbstract: ARCING THE EDGE Leo Breiman Technical Report 486 , Statistics Department University of California, Berkeley CA. 94720 Abstract Recent work has shown that adaptively reweighting the training set, growing a classifier using the new weights, and combining the classifiers constructed to date can significantly decrease generalization error. Procedures of this type were called arcing by Breiman[1996]. The first successful arcing procedure was introduced by Freund and Schapire[1995,1996] and called Adaboost. In an effort to explain why Adaboost works, Schapire et.al. [1997] derived a bound on the generalization error of a convex combination of classifiers in terms of the margin. We introduce a function called the edge, which differs from the margin only if there are more than two classes. A framework for understanding arcing algorithms is defined. In this framework, we see that the arcing algorithms currently in the literature are optimization algorithms which minimize some function of the edge. A relation is derived between the optimal reduction in the maximum value of the edge and the PAC concept of weak learner. Two algorithms are described which achieve the optimal reduction. Tests on both synthetic and real data cast doubt on the Schapire et.al. There is recent empirical evidence that significant reductions in generalization error can be gotten by growing a number of different classifiers on the same training set and letting these vote for the best class. Freund and Schapire ([1995], [1996] ) proposed an algorithm called AdaBoost which adaptively reweights the training set in a way based on the past history of misclassifications, constructs a new classifier using the current weights, and uses the misclassification rate of this classifier to determine the size of its vote. In a number of empirical studies on many data sets using trees (CART or C4.5) as the base classifier (Drucker and Cortes[1995], Quinlan[1996], Freud and Schapire[1996], Breiman[1996]) AdaBoost produced dramatic decreases in generalization error compared to using a single tree. Error rates were reduced to the point where tests on some well-known data sets gave the result that CART plus AdaBoost did significantly better than any other of the commonly used classification methods (Breiman[1996] ). Meanwhile, empirical results showed that other methods of adaptive resampling (or reweighting) and combining (called \"arcing\" by Breiman [1996]) also led to low test set error rates. An algorithm called arc-x4 (Breiman[1996]) gave error rates almost identical to Adaboost. Ji and Ma[1997] worked with classifiers consisting of randomly selected hyperplanes and using a different method of adaptive resampling and unweighted voting, also got low error rates. Thus, there are a least three arcing algorithms extant, all of which give excellent classification accuracy. explanation.",
"neighbors": [
569,
1484
],
"mask": "Validation"
},
{
"node_id": 1523,
"label": 1,
"text": "Title: A Generalized Permutation Approach to Job Shop Scheduling with Genetic Algorithms \nAbstract: In order to sequence the tasks of a job shop problem (JSP) on a number of machines related to the technological machine order of jobs, a new representation technique mathematically known as \"permutation with repetition\" is presented. The main advantage of this single chromosome representation is in analogy to the permutation scheme of the traveling salesman problem (TSP) that it cannot produce illegal sets of operation sequences (infeasible symbolic solutions). As a consequence of the representation scheme a new crossover operator preserving the initial scheme structure of permutations with repetition will be sketched. Its behavior is similar to the well known Order-Crossover for simple permutation schemes. Actually the GOX operator for permutations with repetition arises from a Generalisation of OX. Computational experiments show, that GOX passes the information from a couple of parent solutions efficiently to offspring solutions. Together, the new representation and GOX support the cooperative aspect of the genetic search for scheduling problems strongly. ",
"neighbors": [
343,
813,
815,
880,
1060,
1136
],
"mask": "Train"
},
{
"node_id": 1524,
"label": 2,
"text": "Title: BLIND SEPARATION OF DELAYED SOURCES BASED ON INFORMATION MAXIMIZATION \nAbstract: Recently, Bell and Sejnowski have presented an approach to blind source separation based on the information maximization principle. We extend this approach into more general cases where the sources may have been delayed with respect to each other. We present a network architecture capable of coping with such sources, and we derive the adaptation equations for the delays and the weights in the network by maximizing the information transferred through the network. Examples using wideband sources such as speech are presented to illustrate the algorithm. ",
"neighbors": [
570,
576,
1243,
1245,
1381,
1520,
1526
],
"mask": "Validation"
},
{
"node_id": 1525,
"label": 3,
"text": "Title: Reference Classes and Multiple Inheritances \nAbstract: The reference class problem in probability theory and the multiple inheritances (extensions) problem in non-monotonic logics can be referred to as special cases of conflicting beliefs. The current solution accepted in the two domains is the specificity priority principle. By analyzing an example, several factors (ignored by the principle) are found to be relevant to the priority of a reference class. A new approach, Non-Axiomatic Reasoning System (NARS), is discussed, where these factors are all taken into account. It is argued that the solution provided by NARS is better than the solutions provided by probability theory and non-monotonic logics.",
"neighbors": [
1276,
1415,
1503,
1504,
1506
],
"mask": "Train"
},
{
"node_id": 1526,
"label": 2,
"text": "Title: Working Paper IS-97-22 (Information Systems) A First Application of Independent Component Analysis to Extracting Structure\nAbstract: This paper discusses the application of a modern signal processing technique known as independent component analysis (ICA) or blind source separation to multivariate financial time series such as a portfolio of stocks. The key idea of ICA is to linearly map the observed multivariate time series into a new space of statistically independent components (ICs). This can be viewed as a factorization of the portfolio since joint probabilities become simple products in the coordinate system of the ICs. We apply ICA to three years of daily returns of the 28 largest Japanese stocks and compare the results with those obtained using principal component analysis. The results indicate that the estimated ICs fall into two categories, (i) infrequent but large shocks (responsible for the major changes in the stock prices), and (ii) frequent smaller fluctuations (contributing little to the overall level of the stocks). We show that the overall stock price can be reconstructed surprisingly well by using a small number of thresholded weighted ICs. In contrast, when using shocks derived from principal components instead of independent components, the reconstructed price is less similar to the original one. Independent component analysis is a potentially powerful method of analyzing and understanding driving mechanisms in financial markets. There are further promising applications to risk management since ICA focuses on higher order statistics. ",
"neighbors": [
576,
839,
1520,
1524
],
"mask": "Train"
},
{
"node_id": 1527,
"label": 3,
"text": "Title: A THEORY OF INFERRED CAUSATION perceive causal relationships in uncon trolled observations. 2. the task\nAbstract: This paper concerns the empirical basis of causation, and addresses the following issues: We propose a minimal-model semantics of causation, and show that, contrary to common folklore, genuine causal influences can be distinguished from spurious covariations following standard norms of inductive reasoning. We also establish a sound characterization of the conditions under which such a distinction is possible. We provide an effective algorithm for inferred causation and show that, for a large class of data the algorithm can uncover the direction of causal influences as defined above. Finally, we ad dress the issue of non-temporal causation.",
"neighbors": [
211,
260,
419,
827,
909,
971,
1086,
1240,
1543,
1747,
1894,
2076,
2088,
2166,
2221,
2420,
2524,
2525,
2561
],
"mask": "Validation"
},
{
"node_id": 1528,
"label": 5,
"text": "Title: Using Qualitative Models to Guide Inductive Learning \nAbstract: This paper presents a method for using qualitative models to guide inductive learning. Our objectives are to induce rules which are not only accurate but also explainable with respect to the qualitative model, and to reduce learning time by exploiting domain knowledge in the learning process. Such ex-plainability is essential both for practical application of inductive technology, and for integrating the results of learning back into an existing knowledge-base. We apply this method to two process control problems, a water tank network and an ore grinding process used in the mining industry. Surprisingly, in addition to achieving explainability the classificational accuracy of the induced rules is also increased. We show how the value of the qualitative models can be quantified in terms of their equivalence to additional training examples, and finally discuss possible extensions.",
"neighbors": [
151,
426,
1487
],
"mask": "Train"
},
{
"node_id": 1529,
"label": 4,
"text": "Title: Explanation Based Learning: A Comparison of Symbolic and Neural Network Approaches \nAbstract: Explanation based learning has typically been considered a symbolic learning method. An explanation based learning method that utilizes purely neural network representations (called EBNN) has recently been developed, and has been shown to have several desirable properties, including robustness to errors in the domain theory. This paper briefly summarizes the EBNN algorithm, then explores the correspondence between this neural network based EBL method and EBL methods based on symbolic representations.",
"neighbors": [
565,
882,
1314
],
"mask": "Validation"
},
{
"node_id": 1530,
"label": 1,
"text": "Title: Performance of Multi-Parent Crossover Operators on Numerical Function Optimization Problems \nAbstract: The multi-parent scanning crossover, generalizing the traditional uniform crossover, and diagonal crossover, generalizing 1-point (n-point) crossovers, were introduced in [5]. In subsequent publications, see [6, 18, 19], several aspects of multi-parent recombination are discussed. Due to space limitations, however, a full overview of experimental results showing the performance of multi-parent GAs on numerical optimization problems has never been published. This technical report is meant to fill this gap and make results available. ",
"neighbors": [
145,
163,
1218,
1424,
2089
],
"mask": "Train"
},
{
"node_id": 1531,
"label": 0,
"text": "Title: NACODAE: Navy Conversational Decision Aids Environment \nAbstract: This report documents NACODAE, the Navy Conversational Decision Aids Environment being developed at the Navy Center for Applied Research in Artificial Intelligence (NCARAI), which is a branch of the Naval Research Laboratory. NA-CODAE is a software prototype that is being developed under the Practical Advances in Case-Based Reasoning project, which is funded by the Office for Naval Research, for the purpose of assisting Navy and other DoD personnel in decision aids tasks such as system maintenance, operational training, crisis response planning, logistics, fault diagnosis, target classification, and meteorological nowcasting. Implemented in Java, NACODAE can be used on any machine containing a Java virtual machine (e.g., PCs, Unix). This document describes and exemplifies NACODAE's capabilities. Our goal is to transition this tool to operational personnel, and to continue its enhancement through user feedback and by testing recent research advances in case-based reasoning and related areas. ",
"neighbors": [
66,
887,
983,
1154
],
"mask": "Test"
},
{
"node_id": 1532,
"label": 3,
"text": "Title: Automated Decomposition of Model-based Learning Problems \nAbstract: A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional, model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate. ",
"neighbors": [
327,
558,
577,
782,
925,
1480,
1487
],
"mask": "Test"
},
{
"node_id": 1533,
"label": 1,
"text": "Title: Evolving Visual Routines \nAbstract: Traditional machine vision assumes that the vision system recovers a a complete, labeled description of the world [ Marr, 1982 ] . Recently, several researchers have criticized this model and proposed an alternative model which considers perception as a distributed collection of task-specific, task-driven visual routines [ Aloimonos, 1993, Ullman, 1987 ] . Some of these researchers have argued that in natural living systems these visual routines are the product of natural selection [ Ramachandran, 1985 ] . So far, researchers have hand-coded task-specific visual routines for actual implementations (e.g. [ Chapman, 1993 ] ). In this paper we propose an alternative approach in which visual routines for simple tasks are evolved using an artificial evolution approach. We present results from a series of runs on actual camera images, in which simple routines were evolved using Genetic Programming techniques [ Koza, 1992 ] . The results obtained are promising: the evolved routines are able to correctly classify up to 93% of the images, which is better than the best algorithm we were able to write by hand. ",
"neighbors": [
781,
846,
900,
970,
1277,
1730
],
"mask": "Train"
},
{
"node_id": 1534,
"label": 0,
"text": "Title: The Use of Explicit Goals for Knowledge to Guide Inference and Learning \nAbstract: Combinatorial explosion of inferences has always been a central problem in artificial intelligence. Although the inferences that can be drawn from a reasoner's knowledge and from available inputs is very large (potentially infinite), the inferential resources available to any reasoning system are limited. With limited inferential capacity and very many potential inferences, reasoners must somehow control the process of inference. Not all inferences are equally useful to a given reasoning system. Any reasoning system that has goals (or any form of a utility function) and acts based on its beliefs indirectly assigns utility to its beliefs. Given limits on the process of inference, and variation in the utility of inferences, it is clear that a reasoner ought to draw the inferences that will be most valuable to it. This paper presents an approach to this problem that makes the utility of a (potential) belief an explicit part of the inference process. The method is to generate explicit desires for knowledge. The question of focus of attention is thereby transformed into two related problems: How can explicit desires for knowledge be used to control inference and facilitate resource-constrained goal pursuit in general? and, Where do these desires for knowledge come from? We present a theory of knowledge goals, or desires for knowledge, and their use in the processes of understanding and learning. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel or unusual newspaper stories, and a differential diagnosis program that improves its accuracy with experience. ",
"neighbors": [
289,
1122,
1148,
1163,
1278,
1498,
1556,
1597
],
"mask": "Validation"
},
{
"node_id": 1535,
"label": 0,
"text": "Title: Decision Models: A Theory of Volitional Explanation \nAbstract: This paper presents a theory of motivational analysis, the construction of volitional explanations to describe the planning behavior of agents. We discuss both the content of such explanations, as well as the process by which an understander builds the explanations. Explanations are constructed from decision models, which describe the planning process that an agent goes through when considering whether to perform an action. Decision models are represented as explanation patterns, which are standard patterns of causality based on previous experiences of the understander. We discuss the nature of explanation patterns, their use in representing decision models, and the process by which they are retrieved, used and evaluated.",
"neighbors": [
289,
629,
1348,
1537
],
"mask": "Train"
},
{
"node_id": 1536,
"label": 1,
"text": "Title: Representation and Evolution of Neural Networks \nAbstract: An evolutionary approach for developing improved neural network architectures is presented. It is shown that it is possible to use genetic algorithms for the construction of backpropagation networks for real world tasks. Therefore a network representation is developed with certain properties. Results with various application are presented. ",
"neighbors": [
163,
207,
1518,
1728,
2504
],
"mask": "Test"
},
{
"node_id": 1537,
"label": 0,
"text": "Title: Incremental Learning of Explanation Patterns and their Indices \nAbstract: This paper describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Recent work in AI has dealt with the issue of using past explanations stored in the reasoner's memory to understand novel situations. However, this process assumes that past explanations are well understood and provide good \"lessons\" to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Instead, it is reasonable to assume that the reasoner would have gaps in its knowledge base. By reasoning about a new situation, the reasoner should be able to fill in these gaps as new information came in, reorganize its explanations in memory, and gradually evolve a better understanding of its domain. We present a story understanding program that retrieves past explanations from situations already in memory, and uses them to build explanations to understand novel stories about terrorism. In doing so, the system refines its understanding of the domain by filling in gaps in these explanations, by elaborating the explanations, or by learning new indices for the explanations. This is a type of incremental learning since the system improves its explanatory knowledge of the domain in an incremental fashion rather than by learning new XPs as a whole.",
"neighbors": [
289,
629,
1348,
1535,
1556
],
"mask": "Train"
},
{
"node_id": 1538,
"label": 2,
"text": "Title: Analysis of Drifting Dynamics with Neural Network Hidden Markov Models \nAbstract: We present a method for the analysis of nonstationary time series with multiple operating modes. In particular, it is possible to detect and to model both a switching of the dynamics and a less abrupt, time consuming drift from one mode to another. This is achieved in two steps. First, an unsupervised training method provides prediction experts for the inherent dynamical modes. Then, the trained experts are used in a hidden Markov model that allows to model drifts. An application to physiological wake/sleep data demonstrates that analysis and modeling of real-world time series can be improved when the drift paradigm is taken into account.",
"neighbors": [
1724
],
"mask": "Train"
},
{
"node_id": 1539,
"label": 5,
"text": "Title: Finding new rules for incomplete theories: Explicit biases for induction with contextual information. In Proceedings\nAbstract: addressed in KBANN (which translates a theory into a neural-net, refines it using backpropagation, and then retranslates the result back into rules) by adding extra hidden units and connections to the initial network; however, this would require predetermining the num In this paper, we have presented constructive induction techniques recently added to the EITHER theory refinement system. Intermediate concept utilization employs existing rules in the theory to derive higher-level features for use in induction. Intermediate concept creation employs inverse resolution to introduce new intermediate concepts in order to fill gaps in a theory than span multiple levels. These revisions allow EITHER to make use of imperfect domain theories in the ways typical of previous work in both constructive induction and theory refinement. As a result, EITHER is able to handle a wider range of theory imperfections than any other existing theory refinement system. ",
"neighbors": [
136,
378,
449,
638,
924,
2091
],
"mask": "Train"
},
{
"node_id": 1540,
"label": 4,
"text": "Title: MultiPlayer Residual Advantage Learning With General Function Approximation \nAbstract: A new algorithm, advantage learning, is presented that improves on advantage updating by requiring that a single function be learned rather than two. Furthermore, advantage learning requires only a single type of update, the learning update, while advantage updating requires two different types of updates, a learning update and a normilization update. The reinforcement learning system uses the residual form of advantage learning. An application of reinforcement learning to a Markov game is presented. The testbed has continuous states and nonlinear dynamics. The game consists of two players, a missile and a plane; the missile pursues the plane and the plane evades the missile. On each time step , each player chooses one of two possible actions; turn left or turn right, resulting in a 90 degree instantaneous change in the aircraft s heading. Reinforcement is given only when the missile hits the plane or the plane reaches an escape distance from the missile. The advantage function is stored in a single-hidden-layer sigmoidal network. Speed of learning is increased by a new algorithm , Incremental Delta-Delta (IDD), which extends Jacobs (1988) Delta-Delta for use in incremental training, and differs from Suttons Incremental Delta-Bar-Delta (1992) in that it does not require the use of a trace and is amenable for use with general function approximation systems. The advantage learning algorithm for optimal control is modified for games in order to find the minimax point, rather than the maximum. Empirical results gathered using the missile/aircraft testbed validate theory that suggests residual forms of reinforcement learning algorithms converge to a local minimum of the mean squared Bellman residual when using general function approximation systems. Also, to our knowledge, this is the first time an approximate second order method has been used with residual algorithms. Empirical results are presented comparing convergence rates with and without the use of IDD for the reinforcement learning testbed described above and for a supervised learning testbed. The results of these experiments demonstrate IDD increased the rate of convergence and resulted in an order of magnitude lower total asymptotic error than when using backpropagation alone. ",
"neighbors": [
565,
842,
1045,
1118,
1378,
1443,
1459
],
"mask": "Train"
},
{
"node_id": 1541,
"label": 1,
"text": "Title: Unsupervised Learning with the Soft-Means Algorithm \nAbstract: This note describes a useful adaptation of the `peak seeking' regime used in unsupervised learning processes such as competitive learning and `k-means'. The adaptation enables the learning to capture low-order probability effects and thus to more fully capture the probabilistic structure of the training data. ",
"neighbors": [
1395
],
"mask": "Test"
},
{
"node_id": 1542,
"label": 0,
"text": "Title: Protein Sequencing Experiment Planning Using Analogy protein sequencing experiments. Planning is interleaved with experiment execution,\nAbstract: Experiment design and execution is a central activity in the natural sciences. The SeqER system provides a general architecture for the integration of automated planning techniques with a variety of domain knowledge in order to plan scientific experiments. These planning techniques include rule-based methods and, especially, the use of derivational analogy. Derivational analogy allows planning experience, captured as cases, to be reused. Analogy also allows the system to function in the absence of strong domain knowledge. Cases are efficiently and flexibly retrieved from a large casebase using massively parallel methods. ",
"neighbors": [
801
],
"mask": "Test"
},
{
"node_id": 1543,
"label": 3,
"text": "Title: Belief Networks Revisited \nAbstract: Experiment design and execution is a central activity in the natural sciences. The SeqER system provides a general architecture for the integration of automated planning techniques with a variety of domain knowledge in order to plan scientific experiments. These planning techniques include rule-based methods and, especially, the use of derivational analogy. Derivational analogy allows planning experience, captured as cases, to be reused. Analogy also allows the system to function in the absence of strong domain knowledge. Cases are efficiently and flexibly retrieved from a large casebase using massively parallel methods. ",
"neighbors": [
1324,
1527
],
"mask": "Train"
},
{
"node_id": 1544,
"label": 1,
"text": "Title: Monitoring in Embedded Agents \nAbstract: Finding good monitoring strategies is an important process in the design of any embedded agent. We describe the nature of the monitoring problem, point out what makes it difficult, and show that while periodic monitoring strategies are often the easiest to derive, they are not always the most appropriate. We demonstrate mathematically and empirically that for a wide class of problems, the so-called \"cupcake problems\", there exists a simple strategy, interval reduction, that outperforms periodic monitoring. We also show how features of the environment may influence the choice of the optimal strategy. The paper concludes with some thoughts about a monitoring strategy taxonomy, and what its defining features might be. ",
"neighbors": [
163,
566,
1206
],
"mask": "Train"
},
{
"node_id": 1545,
"label": 3,
"text": "Title: Learning Goal Oriented Bayesian Networks for Telecommunications Risk Management \nAbstract: This paper discusses issues related to Bayesian network model learning for unbalanced binary classification tasks. In general, the primary focus of current research on Bayesian network learning systems (e.g., K2 and its variants) is on the creation of the Bayesian network structure that fits the database best. It turns out that when applied with a specific purpose in mind, such as classification, the performance of these network models may be very poor. We demonstrate that Bayesian network models should be created to meet the specific goal or purpose intended for the model. We first present a goal-oriented algorithm for constructing Bayesian networks for predicting uncollectibles in telecommunications risk-management datasets. Second, we argue and demonstrate that current Bayesian network learning methods may fail to perform satisfactorily in real life applications since they do not learn models tailored to a specific goal or purpose. Third, we discuss the performance of goal oriented K2 and its variant.",
"neighbors": [
1086,
1582,
1909
],
"mask": "Validation"
},
{
"node_id": 1546,
"label": 4,
"text": "Title: Analytical Mean Squared Error Curves in Temporal Difference Learning \nAbstract: We have calculated analytical expressions for how the bias and variance of the estimators provided by various temporal difference value estimation algorithms change with o*ine updates over trials in absorbing Markov chains using lookup table representations. We illustrate classes of learning curve behavior in various chains, and show the manner in which TD is sensitive to the choice of its step size and eligibility trace parameters.",
"neighbors": [
63,
565,
1376
],
"mask": "Validation"
},
{
"node_id": 1547,
"label": 2,
"text": "Title: Misclassification Minimization \nAbstract: The problem of minimizing the number of misclassified points by a plane, attempting to separate two point sets with intersecting convex hulls in n-dimensional real space, is formulated as a linear program with equilibrium constraints (LPEC). This general LPEC can be converted to an exact penalty problem with a quadratic objective and linear constraints. A Frank-Wolfe-type algorithm is proposed for the penalty problem that terminates at a stationary point or a global solution. Novel aspects of the approach include: (i) A linear complementarity formulation of the step function that \"counts\" misclassifications, (ii) Exact penalty formulation without boundedness, nondegeneracy or constraint qualification assumptions, (iii) An exact solution extraction from the sequence of minimizers of the penalty function for a finite value of the penalty parameter for the general LPEC and an explicitly exact solution for the LPEC with uncoupled constraints, and (iv) A parametric quadratic programming formulation of the LPEC associated with the misclassification minimization problem.",
"neighbors": [
142,
227,
427,
1283
],
"mask": "Train"
},
{
"node_id": 1548,
"label": 3,
"text": "Title: Free energy coding In \nAbstract: In this paper, we introduce a new approach to the problem of optimal compression when a source code produces multiple codewords for a given symbol. It may seem that the most sensible codeword to use in this case is the shortest one. However, in the proposed free energy approach, random codeword selection yields an effective codeword length that can be less than the shortest codeword length. If the random choices are Boltzmann distributed, the effective length is optimal for the given source code. The expectation-maximization parameter estimation algorithms minimize this effective codeword length. We illustrate the performance of free energy coding on a simple problem where a compression factor of two is gained by using the new method. ",
"neighbors": [
76,
1291,
1374
],
"mask": "Train"
},
{
"node_id": 1549,
"label": 3,
"text": "Title: Explaining Predictions in Bayesian Networks and Influence Diagrams \nAbstract: As Bayesian Networks and Influence Diagrams are being used more and more widely, the importance of an efficient explanation mechanism becomes more apparent. We focus on predictive explanations, the ones designed to explain predictions and recommendations of probabilistic systems. We analyze the issues involved in defining, computing and evaluating such explanations and present an algorithm to compute them. ",
"neighbors": [
339,
1602
],
"mask": "Train"
},
{
"node_id": 1550,
"label": 6,
"text": "Title: MDL and MML Similarities and Differences (Introduction to Minimum Encoding Inference Part III) \nAbstract: Tech Report 207 Department of Computer Science, Monash University, Clayton, Vic. 3168, Australia Abstract: This paper continues the introduction to minimum encoding inductive inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We describe the message length estimates used in Wallace's Minimum Message Length (MML) inference and Rissanen's Minimum Description Length (MDL) inference. The differences in the message length estimates of the two approaches are explained. The implications of these differences for applications are discussed. ",
"neighbors": [
84,
157,
684,
1158,
1199,
1238,
1419,
1425,
1427,
1555,
1624
],
"mask": "Train"
},
{
"node_id": 1551,
"label": 2,
"text": "Title: A Performance Analysis of the CNS-1 on Large, Dense Backpropagation Networks Connectionist Network Supercomputer \nAbstract: We determine in this study the sustained performance of the CNS-1 during training and evaluation of large multilayered feedforward neural networks. Using a sophisticated coding, the 128-node machine would achieve up to 111 Giga connections per second (GCPS) and 22 Giga connection updates per second (GCUPS). During recall the machine would archieve 87% of the peak multiply-accumulate performance. The training of large nets is less efficient than the recall but only by a factor of 1.5 to 2. The benchmark is parallelized and the machine code is optimized before analyzing the performance. Starting from an optimal parallel algorithm, CNS specific optimizations still reduce the run time by a factor of 4 for recall and by a factor of 3 for training. Our analysis also yields some strategies for code optimization. The CNS-1 is still in design, and therefore we have to model the run time behavior of the memory system and the interconnection network. This gives us the option of changing some parameters of the CNS-1 system in order to analyze their performance impact. ",
"neighbors": [
272,
914
],
"mask": "Test"
},
{
"node_id": 1552,
"label": 0,
"text": "Title: on Case-Based Reasoning Integrations Case-Based Seeding for an Interactive Crisis Response Assistant \nAbstract: In this paper, we present an interactive, case-based approach to crisis response that provides users with the ability to rapidly develop good responses while allowing them to retain ultimate control over the decision-making process. We have implemented this approach in Inca, an INteractive Crisis Assistant for planning and scheduling in crisis domains. Inca relies on case-based methods to seed the response development process with initial candidate solutions drawn from previous cases. The human user then interacts with Inca to adapt these solutions to the current situation. We will discuss this interactive approach to crisis response using an artificial hazardous materials domain, Haz-Mat, that we developed for the purpose of evaluating candidate assistant mechanisms for crisis response. ",
"neighbors": [
1212,
1553,
1554
],
"mask": "Test"
},
{
"node_id": 1553,
"label": 0,
"text": "Title: Learning to Predict User Operations for Adaptive Scheduling \nAbstract: Mixed-initiative systems present the challenge of finding an effective level of interaction between humans and computers. Machine learning presents a promising approach to this problem in the form of systems that automatically adapt their behavior to accommodate different users. In this paper, we present an empirical study of learning user models in an adaptive assistant for crisis scheduling. We describe the problem domain and the scheduling assistant, then present an initial formulation of the adaptive assistant's learning task and the results of a baseline study. After this, we report the results of three subsequent experiments that investigate the effects of problem reformulation and representation augmentation. The results suggest that problem reformulation leads to significantly better accuracy without sacrificing the usefulness of the learned behavior. The studies also raise several interesting issues in adaptive assistance for scheduling. ",
"neighbors": [
82,
901,
1552,
1554
],
"mask": "Train"
},
{
"node_id": 1554,
"label": 0,
"text": "Title: CABINS A Framework of Knowledge Acquisition and Iterative Revision for Schedule Improvement and Reactive Repair \nAbstract: Mixed-initiative systems present the challenge of finding an effective level of interaction between humans and computers. Machine learning presents a promising approach to this problem in the form of systems that automatically adapt their behavior to accommodate different users. In this paper, we present an empirical study of learning user models in an adaptive assistant for crisis scheduling. We describe the problem domain and the scheduling assistant, then present an initial formulation of the adaptive assistant's learning task and the results of a baseline study. After this, we report the results of three subsequent experiments that investigate the effects of problem reformulation and representation augmentation. The results suggest that problem reformulation leads to significantly better accuracy without sacrificing the usefulness of the learned behavior. The studies also raise several interesting issues in adaptive assistance for scheduling. ",
"neighbors": [
901,
951,
1401,
1552,
1553
],
"mask": "Train"
},
{
"node_id": 1555,
"label": 3,
"text": "Title: Bayesian and Information-Theoretic Priors for Bayesian Network Parameters \nAbstract: We consider Bayesian and information-theoretic approaches for determining non-informative prior distributions in a parametric model family. The information-theoretic approaches are based on the recently modified definition of stochastic complexity by Rissanen, and on the Minimum Message Length (MML) approach by Wallace. The Bayesian alternatives include the uniform prior, and the equivalent sample size priors. In order to be able to empirically compare the different approaches in practice, the methods are instantiated for a model family of practical importance, the family of Bayesian networks.",
"neighbors": [
558,
1158,
1550
],
"mask": "Train"
},
{
"node_id": 1556,
"label": 0,
"text": "Title: A Goal-Based Approach to Intelligent Information Retrieval \nAbstract: Intelligent information retrieval (IIR) requires inference. The number of inferences that can be drawn by even a simple reasoner is very large, and the inferential resources available to any practical computer system are limited. This problem is one long faced by AI researchers. In this paper, we present a method used by two recent machine learning programs for control of inference that is relevant to the design of IIR systems. The key feature of the approach is the use of explicit representations of desired knowledge, which we call knowledge goals. Our theory addresses the representation of knowledge goals, methods for generating and transforming these goals, and heuristics for selecting among potential inferences in order to feasibly satisfy such goals. In this view, IIR becomes a kind of planning: decisions about what to infer, how to infer and when to infer are based on representations of desired knowledge, as well as internal representations of the system's inferential abilities and current state. The theory is illustrated using two case studies, a natural language understanding program that learns by reading novel newspaper stories, and a differential diagnosis program that improves its accuracy with experience. We conclude by making several suggestions on how this machine learning framework can be integrated with existing information retrieval methods. ",
"neighbors": [
1534,
1537
],
"mask": "Test"
},
{
"node_id": 1557,
"label": 4,
"text": "Title: Using Communication to Reduce Locality in Distributed Multi-Agent Learning \nAbstract: This paper attempts to bridge the fields of machine learning, robotics, and distributed AI. It discusses the use of communication in reducing the undesirable effects of locality in fully distributed multi-agent systems with multiple agents/robots learning in parallel while interacting with each other. Two key problems, hidden state and credit assignment, are addressed by applying local undirected broadcast communication in a dual role: as sensing and as reinforcement. The methodology is demonstrated on two multi-robot learning experiments. The first describes learning a tightly-coupled coordination task with two robots, the second a loosely-coupled task with four robots learning social rules. Communication is used to 1) share sensory data to overcome hidden state and 2) share reinforcement to overcome the credit assignment problem between the agents and bridge the gap between local/individual and global/group payoff. ",
"neighbors": [
650,
691,
1649,
1687
],
"mask": "Validation"
},
{
"node_id": 1558,
"label": 1,
"text": "Title: How good are genetic algorithms at finding large cliques: an experimental study \nAbstract: This paper investigates the power of genetic algorithms at solving the MAX-CLIQUE problem. We measure the performance of a standard genetic algorithm on an elementary set of problem instances consisting of embedded cliques in random graphs. We indicate the need for improvement, and introduce a new genetic algorithm, the multi-phase annealed GA, which exhibits superior performance on the same problem set. As we scale up the problem size and test on \"hard\" benchmark instances, we notice a degraded performance in the algorithm caused by premature convergence to local minima. To alleviate this problem, a sequence of modifications are implemented ranging from changes in input representation to systematic local search. The most recent version, called union GA, incorporates the features of union cross-over, greedy replacement, and diversity enhancement. It shows a marked speed-up in the number of iterations required to find a given solution, as well as some improvement in the clique size found. We discuss issues related to the SIMD implementation of the genetic algorithms on a Thinking Machines CM-5, which was necessitated by the intrinsically high time complexity (O(n 3 )) of the serial algorithm for computing one iteration. Our preliminary conclusions are: (1) a genetic algorithm needs to be heavily customized to work \"well\" for the clique problem; (2) a GA is computationally very expensive, and its use is only recommended if it is known to find larger cliques than other algorithms; (3) although our customization effort is bringing forth continued improvements, there is no clear evidence, at this time, that a GA will have better success in circumventing local minima. ",
"neighbors": [
163,
1136,
2564
],
"mask": "Test"
},
{
"node_id": 1559,
"label": 2,
"text": "Title: In Active Learning with Statistical Models \nAbstract: For many types of learners one can compute the statistically \"optimal\" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regres sion are both efficient and accurate.",
"neighbors": [
71,
740,
929,
1664,
1683,
1697,
1703,
1791,
2658
],
"mask": "Train"
},
{
"node_id": 1560,
"label": 4,
"text": "Title: DESIGN AND ANALYSIS OF EFFICIENT REINFORCEMENT LEARNING ALGORITHMS \nAbstract: For many types of learners one can compute the statistically \"optimal\" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regres sion are both efficient and accurate.",
"neighbors": [
456,
535,
791,
1161,
2198
],
"mask": "Train"
},
{
"node_id": 1561,
"label": 6,
"text": "Title: Characterizing Rational versus Exponential Learning Curves \nAbstract: We consider the standard problem of learning a concept from random examples. Here a learning curve can be defined to be the expected error of a learner's hypotheses as a function of training sample size. Haussler, Littlestone and Warmuth have shown that, in the distribution free setting, the smallest expected error a learner can achieve in the worst case over a concept class C converges rationally to zero error (i.e., fi(1=t) for training sample size t). However, recently Cohn and Tesauro have demonstrated how exponential convergence can often be observed in experimental settings (i.e., average error decreasing as e fi(t) ). By addressing a simple non-uniformity in the original analysis, this paper shows how the dichotomy between rational and exponential worst case learning curves can be recovered in the distribution free theory. These results support the experimental findings of Cohn and Tesauro: for finite concept classes, any consistent learner achieves exponential convergence, even in the worst case; but for continuous concept classes, no learner can exhibit sub-rational convergence for every target concept and domain distribution. A precise boundary between rational and exponential convergence is drawn for simple concept chains. Here we show that somewhere dense chains always force rational convergence in the worst case, but exponential convergence can always be achieved for nowhere dense chains.",
"neighbors": [
967
],
"mask": "Test"
},
{
"node_id": 1562,
"label": 2,
"text": "Title: Using Sampling and Queries to Extract Rules from Trained Neural Networks \nAbstract: Concepts learned by neural networks are difficult to understand because they are represented using large assemblages of real-valued parameters. One approach to understanding trained neural networks is to extract symbolic rules that describe their classification behavior. There are several existing rule-extraction approaches that operate by searching for such rules. We present a novel method that casts rule extraction not as a search problem, but instead as a learning problem. In addition to learning from training examples, our method exploits the property that networks can be efficiently queried. We describe algorithms for extracting both conjunctive and M -of-N rules, and present experiments that show that our method is more efficient than conventional search-based approaches.",
"neighbors": [
355,
627,
631,
1637
],
"mask": "Test"
},
{
"node_id": 1563,
"label": 1,
"text": "Title: Fast EquiPartitioning of Rectangular Domains using Stripe Decomposition \nAbstract: This paper presents a fast algorithm that provides optimal or near optimal solutions to the minimum perimeter problem on a rectangular grid. The minimum perimeter problem is to partition a grid of size M N into P equal area regions while minimizing the total perimeter of the regions. The approach taken here is to divide the grid into stripes that can be filled completely with an integer number of regions . This striping method gives rise to a knapsack integer program that can be efficiently solved by existing codes. The solution of the knapsack problem is then used to generate the grid region assignments. An implementation of the algorithm partitioned a 1000 1000 grid into 1000 regions to a provably optimal solution in less than one second. With sufficient memory to hold the M N grid array, extremely large minimum perimeter problems can be solved easily.",
"neighbors": [
53,
357,
803
],
"mask": "Test"
},
{
"node_id": 1564,
"label": 2,
"text": "Title: GROWING RADIAL BASIS FUNCTION NETWORKS \nAbstract: This paper presents and evaluates two algorithms for incrementally constructing Radial Basis Function Networks, a class of neural networks which looks more suitable for adtaptive control applications than the more popular backpropagation networks. The first algorithm has been derived by a previous method developed by Fritzke, while the second one has been inspired by the CART algorithm developed by Breiman for generation regression trees. Both algorithms proved to work well on a number of tests and exhibit comparable performances. An evaluation on the standard case study of the Mackey-Glass temporal series is reported. ",
"neighbors": [
611,
687,
745,
899,
1672
],
"mask": "Validation"
},
{
"node_id": 1565,
"label": 1,
"text": "Title: Evolving Fuzzy Prototypes for Efficient Data Clustering \nAbstract: number of prototypes used to represent each class, the position of each prototype within its class and the membership function associated with each prototype. This paper proposes a novel, evolutionary approach to data clustering and classification which overcomes many of the limitations of traditional systems. The approach rests on the optimisation of both the number and positions of fuzzy prototypes using a real-valued genetic algorithm (GA). Because the GA acts on all of the classes at once, the system benefits naturally from global information about possible class interactions. In addition, the concept of a receptive field for each prototype is used to replace the classical distance-based membership function by an infinite fuzzy support, multidimensional, Gaussian function centred over the prototype and with unique variance in each dimension, reflecting the tightness of the cluster. Hence, the notion of nearest-neighbour is replaced by that of nearest attracting prototype (NAP). The proposed model is a completely self-optimising, fuzzy system called GA-NAP. Most data clustering algorithms, including the popular K-means algorithm, require a priori knowledge about the problem domain to fix the number and starting positions of the prototypes. Although such knowledge may be assumed for domains whose dimensionality is fairly small or whose underlying structure is relatively intuitive, it is clearly much less accessible in hyper-dimensional settings, where the number of input parameters may be very large. Classical systems also suffer from the fact that they can only define clusters for one class at a time. Hence, no account is made of potential interactions among classes. These drawbacks are further compounded by the fact that the ensuing classification is typically based on a fixed, distance-based membership function for all prototypes. This paper proposes a novel approach to data clustering and classification which overcomes the aforementioned limitations of traditional systems. The model is based on the genetic evolution of fuzzy prototypes. A real-valued genetic algorithm (GA) is used to optimise both the number and positions of prototypes. Because the GA acts on all of the classes at once and measures fitness as classification accuracy, the system naturally profits from global information about class interaction. The concept of a receptive field for each prototype is also presented and used to replace the classical, fixed distance-based function by an infinite fuzzy support membership function. The new membership function is inspired by that used in the hidden layer of RBF networks. It consists of a multidimensional Gaussian function centred over the prototype and with a unique variance in each dimension that reflects the tightness of the cluster. During classification, the notion of nearest-neighbour is replaced by that of nearest attracting prototype (NAP). The proposed model is a completely self-optimising, fuzzy system called GA-NAP. ",
"neighbors": [
899,
1088
],
"mask": "Test"
},
{
"node_id": 1566,
"label": 6,
"text": "Title: Worst-case Quadratic Loss Bounds for Prediction Using Linear Functions and Gradient Descent \nAbstract: In this paper we study the performance of gradient descent when applied to the problem of on-line linear prediction in arbitrary inner product spaces. We prove worst-case bounds on the sum of the squared prediction errors under various assumptions concerning the amount of a priori information about the sequence to predict. The algorithms we use are variants and extensions of on-line gradient descent. Whereas our algorithms always predict using linear functions as hypotheses, none of our results requires the data to be linearly related. In fact, the bounds proved on the total prediction loss are typically expressed as a function of the total loss of the best fixed linear predictor with bounded norm. All the upper bounds are tight to within constants. Matching lower bounds are provided in some cases. Finally, we apply our results to the problem of on-line prediction for classes of smooth functions. ",
"neighbors": [
453,
1124,
1567
],
"mask": "Test"
},
{
"node_id": 1567,
"label": 6,
"text": "Title: Improved Bounds about On-line Learning of Smooth Functions of a Single Variable \nAbstract: We consider the complexity of learning classes of smooth functions formed by bounding different norms of a function's derivative. The learning model is the generalization of the mistake-bound model to continuous-valued functions. Suppose F q is the set of all absolutely continuous functions f from [0; 1] to R such that jjf 0 jj q 1, and opt(F q ; m) is the best possible bound on the worst-case sum of absolute prediction errors over sequences of m trials. We show that for all q 2, opt(F q ; m) = fi(",
"neighbors": [
1124,
1358,
1566
],
"mask": "Train"
},
{
"node_id": 1568,
"label": 0,
"text": "Title: The Utility of Feature Weighting in Nearest-Neighbor Algorithms \nAbstract: Nearest-neighbor algorithms are known to depend heavily on their distance metric. In this paper, we investigate the use of a weighted Euclidean metric in which the weight for each feature comes from a small set of options. We describe Diet, an algorithm that directs search through a space of discrete weights using cross-validation error as its evaluation function. Although a large set of possible weights can reduce the learner's bias, it can also lead to increased variance and overfitting. Our empirical study shows that, for many data sets, there is an advantage to weighting features, but that increasing the number of possible weights beyond two (zero and one) has very little benefit and sometimes degrades performance.",
"neighbors": [
430,
1053,
1328
],
"mask": "Train"
},
{
"node_id": 1569,
"label": 5,
"text": "Title: Estimating Attributes: Analysis and Extensions of RELIEF \nAbstract: In the context of machine learning from examples this paper deals with the problem of estimating the quality of attributes with and without dependencies among them. Kira and Rendell (1992a,b) developed an algorithm called RELIEF, which was shown to be very efficient in estimating attributes. Original RELIEF can deal with discrete and continuous attributes and is limited to only two-class problems. In this paper RELIEF is analysed and extended to deal with noisy, incomplete, and multi-class data sets. The extensions are verified on various artificial and one well known real-world problem.",
"neighbors": [
208,
430,
877,
1008,
1010,
1011,
1073,
1182,
1327,
1486,
1587,
1609,
1679,
1684,
1721,
1726
],
"mask": "Train"
},
{
"node_id": 1570,
"label": 0,
"text": "Title: Average-Case Analysis of a Nearest Neighbor Algorithm \nAbstract: In this paper we present an average-case analysis of the nearest neighbor algorithm, a simple induction method that has been studied by many researchers. Our analysis assumes a conjunctive target concept, noise-free Boolean attributes, and a uniform distribution over the instance space. We calculate the probability that the algorithm will encounter a test instance that is distance d from the prototype of the concept, along with the probability that the nearest stored training case is distance e from this test instance. From this we compute the probability of correct classification as a function of the number of observed training cases, the number of relevant attributes, and the number of irrelevant attributes. We also explore the behavioral implications of the analysis by presenting predicted learning curves for artificial domains, and give experimental results on these domains as a check on our reasoning. ",
"neighbors": [
634,
1109,
1111,
1164,
1339,
1678
],
"mask": "Train"
},
{
"node_id": 1571,
"label": 1,
"text": "Title: Average-Case Analysis of a Nearest Neighbor Algorithm \nAbstract: Eugenic Evolution for Combinatorial Optimization John William Prior Report AI98-268 May 1998 ",
"neighbors": [
163,
343,
1216,
1218,
2202
],
"mask": "Train"
},
{
"node_id": 1572,
"label": 1,
"text": "Title: The Coevolution of Mutation Rates \nAbstract: In order to better understand life, it is helpful to look beyond the envelop of life as we know it. A simple model of coevolution was implemented with the addition of genes for longevity and mutation rate in the individuals. This made it possible for a lineage to evolve to be immortal. It also allowed the evolution of no mutation or extremely high mutation rates. The model shows that when the individuals interact in a sort of zero-sum game, the lineages maintain relatively high mutation rates. However, when individuals engage in interactions that have greater consequences for one individual in the interaction than the other, lineages tend to evolve relatively low mutation rates. This model suggests that different genes may have evolved different mutation rates as adaptations to the varying pressures of interactions with other genes. ",
"neighbors": [
163,
1139,
1598
],
"mask": "Train"
},
{
"node_id": 1573,
"label": 4,
"text": "Title: Genetics-based Machine Learning and Behaviour Based Robotics: A New Synthesis complexity grows, the learning task\nAbstract: difficult. We face this problem using an architecture based on learning classifier systems and on the description of the learning technique used and of the organizational structure proposed, we present experiments that show how behaviour acquisition can be achieved. Our simulated robot learns to structural properties of animal behavioural organization, as proposed by ethologists. After a",
"neighbors": [
163,
636,
764,
910,
1432,
1481,
1673,
2174
],
"mask": "Train"
},
{
"node_id": 1574,
"label": 0,
"text": "Title: Probabilistic Instance-Based Learning \nAbstract: Traditional instance-based learning methods base their predictions directly on (training) data that has been stored in the memory. The predictions are based on weighting the contributions of the individual stored instances by a distance function implementing a domain-dependent similarity metrics. This basic approach suffers from three drawbacks: com-putationally expensive prediction when the database grows large, overfitting in the presence of noisy data, and sensitivity to the selection of a proper distance function. We address all these issues by giving a probabilistic interpretation to instance-based learning, where the goal is to approximate predictive distributions of the attributes of interest. In this probabilistic view the instances are not individual data items but probability distributions, and we perform Bayesian inference with a mixture of such prototype distributions. We demonstrate the feasibility of the method empirically for a wide variety of public domain classification data sets.",
"neighbors": [
484,
1017
],
"mask": "Test"
},
{
"node_id": 1575,
"label": 1,
"text": "Title: A Comparative Study of Genetic Search \nAbstract: We present a comparative study of genetic algorithms and their search properties when treated as a combinatorial optimization technique. This is done in the context of the NP-hard problem MAX-SAT, the comparison being relative to the Metropolis process, and by extension, simulated annealing. Our contribution is two-fold. First, we show that for large and difficult MAX-SAT instances, the contribution of cross-over to the search process is marginal. Little is lost if it is dispensed altogether, running mutation and selection as an enlarged Metropolis process. Second, we show that for these problem instances, genetic search consistently performs worse than simulated annealing when subject to similar resource bounds. The correspondence between the two algorithms is made more precise via a decomposition argument, and provides a framework for interpreting our results. ",
"neighbors": [
163,
1136,
1305
],
"mask": "Train"
},
{
"node_id": 1576,
"label": 6,
"text": "Title: What do Constructive Learners Really Learn? \nAbstract: In constructive induction (CI), the learner's problem representation is modified as a normal part of the learning process. This may be necessary if the initial representation is inadequate or inappropriate. However, the distinction between constructive and non-constructive methods appears to be highly ambiguous. Several conventional definitions of the process of constructive induction appear to include all conceivable learning processes. In this paper I argue that the process of constructive learning should be identified with that of relational learning (i.e., I suggest that ",
"neighbors": [
375,
426,
1266,
1595
],
"mask": "Validation"
},
{
"node_id": 1577,
"label": 1,
"text": "Title: Fast Probabilistic Modeling for Combinatorial Optimization \nAbstract: Probabilistic models have recently been utilized for the optimization of large combinatorial search problems. However, complex probabilistic models that attempt to capture inter-parameter dependencies can have prohibitive computational costs. The algorithm presented in this paper, termed COMIT, provides a method for using probabilistic models in conjunction with fast search techniques. We show how COMIT can be used with two very different fast search algorithms: hillclimbing and Population-based incremental learning (PBIL). The resulting algorithms maintain many of the benefits of probabilistic modeling, with far less computational expense. Extensive empirical results are provided; COMIT has been successfully applied to jobshop scheduling, traveling salesman, and knapsack problems. This paper also presents a review of probabilistic modeling for combi natorial optimization.",
"neighbors": [
343,
427,
658,
1580
],
"mask": "Train"
},
{
"node_id": 1578,
"label": 5,
"text": "Title: SFOIL: Stochastic Approach to Inductive Logic Programming \nAbstract: Current systems in the field of Inductive Logic Programming (ILP) use, primarily for the sake of efficiency, heuristically guided search techniques. Such greedy algorithms suffer from local optimization problem. Present paper describes a system named SFOIL, that tries to alleviate this problem by using a stochastic search method, based on a generalization of simulated annealing, called Markovian neural network. Various tests were performed on benchmark, and real-world domains. The results show both, advantages and weaknesses of stochastic approach. ",
"neighbors": [
877,
1010,
1061,
1182,
1622,
1651
],
"mask": "Test"
},
{
"node_id": 1579,
"label": 2,
"text": "Title: A Radial Basis Function Approach to Financial Time Series Analysis \nAbstract: Current systems in the field of Inductive Logic Programming (ILP) use, primarily for the sake of efficiency, heuristically guided search techniques. Such greedy algorithms suffer from local optimization problem. Present paper describes a system named SFOIL, that tries to alleviate this problem by using a stochastic search method, based on a generalization of simulated annealing, called Markovian neural network. Various tests were performed on benchmark, and real-world domains. The results show both, advantages and weaknesses of stochastic approach. ",
"neighbors": [
1103
],
"mask": "Test"
},
{
"node_id": 1580,
"label": 1,
"text": "Title: MIMIC: Finding Optima by Estimating Probability Densities \nAbstract: In many optimization problems, the structure of solutions reflects complex relationships between the different input parameters. For example, experience may tell us that certain parameters are closely related and should not be explored independently. Similarly, experience may establish that a subset of parameters must take on particular values. Any search of the cost landscape should take advantage of these relationships. We present MIMIC, a framework in which we analyze the global structure of the optimization landscape. A novel and efficient algorithm for the estimation of this structure is derived. We use knowledge of this structure to guide a randomized search through the solution space and, in turn, to refine our estimate of the structure. Our technique obtains significant speed gains over other randomized optimization procedures. ",
"neighbors": [
689,
1577,
1625
],
"mask": "Train"
},
{
"node_id": 1581,
"label": 4,
"text": "Title: A Study of the Generalization Capabilities of XCS \nAbstract: We analyze the generalization behavior of the XCS classifier system in environments in which only a few generalizations can be done. Experimental results presented in the paper evidence that the generalization mechanism of XCS can prevent it from learning even simple tasks in such environments. We present a new operator, named Specify, which contributes to the solution of this problem. XCS with the Specify operator, named XCSS, is compared to XCS in terms of performance and generalization capabilities in different types of environments. Experimental results show that XCSS can deal with a greater variety of environments and that it is more robust than XCS with respect to population size.",
"neighbors": [
657,
764,
1447,
1515,
1711
],
"mask": "Train"
},
{
"node_id": 1582,
"label": 3,
"text": "Title: Efficient Learning of Selective Bayesian Network Classifiers \nAbstract: In this paper, we present a computation-ally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic met-rics that are extensions of metrics used extensively in decision tree learning, namely Quin-lan's gain and gain ratio metrics and Man-taras's distance metric. We experimentally show that the algorithms based on gain ratio and distance metric learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS.",
"neighbors": [
632,
1086,
1545,
1908,
1909,
2017,
2677
],
"mask": "Test"
},
{
"node_id": 1583,
"label": 2,
"text": "Title: Evolutionary Design of Neural Architectures A Preliminary Taxonomy and Guide to Literature \nAbstract: In this paper, we present a computation-ally efficient method for inducing selective Bayesian network classifiers. Our approach is to use information-theoretic metrics to efficiently select a subset of attributes from which to learn the classifier. We explore three conditional, information-theoretic met-rics that are extensions of metrics used extensively in decision tree learning, namely Quin-lan's gain and gain ratio metrics and Man-taras's distance metric. We experimentally show that the algorithms based on gain ratio and distance metric learn selective Bayesian networks that have predictive accuracies as good as or better than those learned by existing selective Bayesian network induction approaches (K2-AS), but at a significantly lower computational cost. We prove that the subset-selection phase of these information-based algorithms has polynomial complexity, as compared to the worst-case exponential time complexity of the corresponding phase in K2-AS.",
"neighbors": [
900,
2396,
2563
],
"mask": "Validation"
},
{
"node_id": 1584,
"label": 0,
"text": "Title: Towards a Theory of Optimal Similarity Measures way of learning a similarity measure from the\nAbstract: The effectiveness of a case-based reasoning system is known to depend critically on its similarity measure. However, it is not clear whether there are elusive and esoteric similarity measures which might improve the performance of a case-based reasoner if substituted for the more commonly used measures. This paper therefore deals with the problem of choosing the best similarity measure, in the limited context of instance-based learning of classifications of a discrete example space. We consider both `fixed' similarity measures and `learnt' ones. In the former case, we give a definition of a similarity measure which we believe to be `optimal' w.r.t. the current prior distribution of target concepts and prove its optimality within a restricted class of similarity measures. We then show how this `optimal' similarity measure is instantiated by some specific prior distributions, and conclude that a very simple similarity measure is as good as any other in these cases. In a further section, we then show how our definition leads naturally to a conjecture about the ",
"neighbors": [
1164,
1328,
1626,
2037,
2151
],
"mask": "Validation"
},
{
"node_id": 1585,
"label": 4,
"text": "Title: Q-Learning for Bandit Problems \nAbstract: Multi-armed bandits may be viewed as decompositionally-structured Markov decision processes (MDP's) with potentially very-large state sets. A particularly elegant methodology for computing optimal policies was developed over twenty ago by Gittins [Gittins & Jones, 1974]. Gittins' approach reduces the problem of finding optimal policies for the original MDP to a sequence of low-dimensional stopping problems whose solutions determine the optimal policy through the so-called \"Gittins indices.\" Katehakis and Veinott [Katehakis & Veinott, 1987] have shown that the Gittins index for a process in state i may be interpreted as a particular component of the maximum-value function associated with the \"restart-in-i\" process, a simple MDP to which standard solution methods for computing optimal policies, such as successive approximation, apply. This paper explores the problem of learning the Git-tins indices on-line without the aid of a process model; it suggests utilizing process-state-specific Q-learning agents to solve their respective restart-in-state-i subproblems, and includes an example in which the online reinforcement learning approach is applied to a problem of stochastic scheduling|one instance drawn from a wide class of problems that may be formulated as bandit problems.",
"neighbors": [
565,
738,
804
],
"mask": "Train"
},
{
"node_id": 1586,
"label": 6,
"text": "Title: On the Boosting Ability of Top-Down Decision Tree Learning Algorithms provably optimal for decision tree\nAbstract: We analyze the performance of top-down algorithms for decision tree learning, such as those employed by the widely used C4.5 and CART software packages. Our main result is a proof that such algorithms are boosting algorithms. By this we mean that if the functions that label the internal nodes of the decision tree can weakly approximate the unknown target function, then the top-down algorithms we study will amplify this weak advantage to build a tree achieving any desired level of accuracy. The bounds we obtain for this amplification show an interesting dependence on the splitting criterion used by the top-down algorithm. More precisely, if the functions used to label the internal nodes have error 1=2 fl as approximations to the target function, then for the splitting criteria used by CART and C4.5, trees of size (1=*) O(1=fl 2 * 2 ) and (1=*) O(log(1=*)=fl 2 ) (respectively) suffice to drive the error below *. Thus (for example), a small constant advantage over random guessing is amplified to any larger constant advantage with trees of constant size. For a new splitting criterion suggested by our analysis, the much stronger fl A preliminary version of this paper appears in Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, pages 459-468, ACM Press, 1996. Authors' addresses: M. Kearns, AT&T Research, 600 Mountain Avenue, Room 2A-423, Murray Hill, New Jersey 07974; electronic mail mkearns@research.att.com. Y. Mansour, Department of Computer Science, Tel Aviv University, Tel Aviv, Israel; electronic mail mansour@math.tau.ac.il. Y. Mansour was supported in part by the Israel Science Foundation, administered by the Israel Academy of Science and Humanities, and by a grant of the Israeli Ministry of Science and Technology. ",
"neighbors": [
1388
],
"mask": "Train"
},
{
"node_id": 1587,
"label": 5,
"text": "Title: A counter example to the stronger version of the binary tree hypothesis \nAbstract: The paper describes a counter example to the hypothesis which states that a greedy decision tree generation algorithm that constructs binary decision trees and branches on a single attribute-value pair rather than on all values of the selected attribute will always lead to a tree with fewer leaves for any given training set. We show also that RELIEFF is less myopic than other impurity functions and that it enables the induction algorithm that generates binary decision trees to reconstruct optimal (the smallest) decision trees in more cases. ",
"neighbors": [
1569
],
"mask": "Train"
},
{
"node_id": 1588,
"label": 1,
"text": "Title: Automatic Modularization by Speciation \nAbstract: Real-world problems are often too difficult to be solved by a single monolithic system. There are many examples of natural and artificial systems which show that a modular approach can reduce the total complexity of the system while solving a difficult problem satisfactorily. The success of modular artificial neural networks in speech and image processing is a typical example. However, designing a modular system is a difficult task. It relies heavily on human experts and prior knowledge about the problem. There is no systematic and automatic way to form a modular system for a problem. This paper proposes a novel evolutionary learning approach to designing a modular system automatically, without human intervention. Our starting point is speciation, using a technique based on fitness sharing. While speciation in genetic algorithms is not new, no effort has been made towards using a speciated population as a complete modular system. We harness the specialized expertise in the species of an entire population, rather than a single individual, by introducing a gating algorithm. We demonstrate our approach to automatic modularization by improving co-evolutionary game learning. Following earlier researchers, we learn to play iterated prisoner's dilemma. We review some problems of earlier co-evolutionary learning, and explain their poor generalization ability and sudden mass extinctions. The generalization ability of our approach is significantly better than past efforts. Using the specialized expertise of the entire speciated population though a gating algorithm, instead of the best individual, is the main contributor to this improvement. ",
"neighbors": [
1114,
1117,
2334
],
"mask": "Train"
},
{
"node_id": 1589,
"label": 4,
"text": "Title: Learning to Sense Selectively in Physical Domains \nAbstract: In this paper we describe an approach to representing, using, and improving sensory skills for physical domains. We present Icarus, an architecture that represents control knowledge in terms of durative states and sequences of such states. The system operates in cycles, activating a state that matches the environmental situation and letting that state control behavior until its conditions fail or until finding another matching state with higher priority. Information about the probability that conditions will remain satisfied minimizes demands on sensing, as does knowledge about the durations of states and their likely successors. Three statistical learning methods let the system gradually reduce sensory load as it gains experience in a domain. We report experimental evaluations of this ability on three simulated physical tasks: flying an aircraft, steering a truck, and balancing a pole. Our experiments include lesion studies that identify the reduction in sensing due to each of the learning mechanisms and others that examine the effect of domain characteristics. ",
"neighbors": [
910
],
"mask": "Test"
},
{
"node_id": 1590,
"label": 1,
"text": "Title: The Exploitation of Cooperation in Iterated Prisoner's Dilemma \nAbstract: We follow Axelrod [2] in using the genetic algorithm to play Iterated Prisoner's Dilemma. Each member of the population (i.e., each strategy) is evaluated by how it performs against the other members of the current population. This creates a dynamic environment in which the algorithm is optimising to a moving target instead of the usual evaluation against some fixed set of strategies, causing an \"arms race\" of innovation [3]. We conduct two sets of experiments. The first set investigates what conditions evolve the best strategies. The second set studies the robustness of the strategies thus evolved, that is, are the strategies useful only in the round robin of its population or are they effective against a wide variety of opponents? Our results indicate that the population has nearly always converged by about 250 generations, by which time the bias in the population has almost always stabilised at 85%. Our results confirm that cooperation almost always becomes the dominant strategy [1, 2]. We can also confirm that seeding the population with expert strategies is best done in small amounts so as to leave the initial population with plenty of genetic diversity [7]. The lack of robustness in strategies produced in the round robin evaluation is demonstrated by some examples of a population of nave cooperators being exploited by a defect-first strategy. This causes a sudden but ephemeral decline in the population's average score, but it recovers when less nave cooperators emerge and do well against the exploiting strategies. This example of runaway evolution is brought back to reality by a suitable mutation, reminiscent of punctuated equilibria [12]. We find that a way to reduce such navity is to make the GA population play against an extra, ",
"neighbors": [
163,
910,
965
],
"mask": "Train"
},
{
"node_id": 1591,
"label": 2,
"text": "Title: Unsupervised Learning by Convex and Conic Coding \nAbstract: Unsupervised learning algorithms based on convex and conic encoders are proposed. The encoders find the closest convex or conic combination of basis vectors to the input. The learning algorithms produce basis vectors that minimize the reconstruction error of the encoders. The convex algorithm develops locally linear models of the input, while the conic algorithm discovers features. Both algorithms are used to model handwritten digits and compared with vector quantization and principal component analysis. The neural network implementations involve feedback connections that project a reconstruction back to the input layer.",
"neighbors": [
33,
36,
871,
954,
1050,
1701
],
"mask": "Train"
},
{
"node_id": 1592,
"label": 2,
"text": "Title: A Unified Gradient-Descent/Clustering Architecture for Finite State Machine Induction \nAbstract: Although recurrent neural nets have been moderately successful in learning to emulate finite-state machines (FSMs), the continuous internal state dynamics of a neural net are not well matched to the discrete behavior of an FSM. We describe an architecture, called DOLCE, that allows discrete states to evolve in a net as learning progresses. dolce consists of a standard recurrent neural net trained by gradient descent and an adaptive clustering technique that quantizes the state space. dolce is based on the assumption that a finite set of discrete internal states is required for the task, and that the actual network state belongs to this set but has been corrupted by noise due to inaccuracy in the weights. dolce learns to recover the discrete state with maximum a posteriori probability from the noisy state. Simulations show that dolce leads to a significant improvement in generalization performance over earlier neural net approaches to FSM induction.",
"neighbors": [
405,
1161,
1176,
1293,
1298,
1734
],
"mask": "Train"
},
{
"node_id": 1593,
"label": 3,
"text": "Title: Boltzmann Chains and Hidden Markov Models \nAbstract: We propose a statistical mechanical framework for the modeling of discrete time series. Maximum likelihood estimation is done via Boltzmann learning in one-dimensional networks with tied weights. We call these networks Boltzmann chains and show that they contain hidden Markov models (HMMs) as a special case. Our framework also motivates new architectures that address particular shortcomings of HMMs. We look at two such architectures: parallel chains that model feature sets with disparate time scales, and looped networks that model long-term dependencies between hidden states. For these networks, we show how to implement the Boltzmann learning rule exactly, in polynomial time, without resort to simulated or mean-field annealing. The necessary computations are done by exact decimation procedures from statistical mechanics.",
"neighbors": [
978,
1116,
1288,
1437,
1461
],
"mask": "Validation"
},
{
"node_id": 1594,
"label": 1,
"text": "Title: An Evolutionary Approach to Vector Quantizer Design \nAbstract: Vector quantization is a lossy coding technique for encoding a set of vectors from different sources such as image and speech. The design of vector quantizers that yields the lowest distortion is one of the most challenging problems in the field of source coding. However, this problem is known to be difficult [3]. The conventional solution technique works through a process of iterative refinements which yield only locally optimal results. In this paper, we design and evaluate three versions of genetic algorithms for computing vector quantizers. Our preliminary study with Gaussian-Markov sources showed that the genetic approach outperforms the conventional technique in most cases.",
"neighbors": [
163,
1136
],
"mask": "Train"
},
{
"node_id": 1595,
"label": 6,
"text": "Title: ID2-of-3: Constructive Induction of M of-N Concepts for Discriminators in Decision Trees \nAbstract: We discuss an approach to constructing composite features during the induction of decision trees. The composite features correspond to m-of-n concepts. There are three goals of this research. First, we explore a family of greedy methods for building m-of-n concepts (one of which, GS, is described in this paper). Second, we show how these concepts can be formed as internal nodes of decision trees, serving as a bias to the learner. Finally, we evaluate the method on several artificially generated and naturally occurring data sets to determine the effects of this bias.",
"neighbors": [
151,
836,
1301,
1576,
1657,
1776,
1824,
1862,
1863,
1964,
2346,
2675
],
"mask": "Train"
},
{
"node_id": 1596,
"label": 5,
"text": "Title: First Order Regression \nAbstract: We present a new approach, called First Order Regression (FOR), to handling numerical information in Inductive Logic Programming (ILP). FOR is a combination of ILP and numerical regression. First-order logic descriptions are induced to carve out those subspaces that are amenable to numerical regression among real-valued variables. The program Fors is an implementation of this idea, where numerical regression is focused on a distinguished continuous argument of the target predicate. We show that this can be viewed as a generalisation of the usual ILP problem. Applications of Fors on several real-world data sets are described: the prediction of mutagenicity of chemicals, the modelling of liquid dynamics in a surge tank, predicting the roughness in steel grinding, finite element mesh design, and operator's skill reconstruction in electric discharge machining. A comparison of Fors' performance with previous results in these domains indicates that Fors is an effective tool for ILP applications that involve numerical data. ",
"neighbors": [
314,
348,
1244
],
"mask": "Test"
},
{
"node_id": 1597,
"label": 0,
"text": "Title: an Opportunistic Enterprise \nAbstract: Tech Report GIT-COGSCI-97/04 Abstract This paper identifies goal handling processes that begin to account for the kind of processes involved in invention. We identify new kinds of goals with special properties and mechanisms for processing such goals, as well as means of integrating opportunism, deliberation, and social interaction into goal/plan processes. We focus on invention goals, which address significant enterprises associated with an inventor. Invention goals represent seed goals of an expert, around which the whole knowledge of an expert gets reorganized and grows more or less opportunistically. Invention goals reflect the idiosyncrasy of thematic goals among experts. They constantly increase the sensitivity of individuals for particular events that might contribute to their satisfaction. Our exploration is based on a well-documented example: the invention of the telephone by Alexander Graham Bell. We propose mechanisms to explain: (1) how Bell's early thematic goals gave rise to the new goals to invent the multiple telegraph and the telephone, and (2) how the new goals interacted opportunistically. Finally, we describe our computational model, ALEC, that accounts for the role of goals in invention. ",
"neighbors": [
486,
1138,
1148,
1534
],
"mask": "Train"
},
{
"node_id": 1598,
"label": 1,
"text": "Title: Mutation Rates as Adaptations \nAbstract: In order to better understand life, it is helpful to look beyond the envelop of life as we know it. A simple model of coevolution was implemented with the addition of a gene for the mutation rate of the individual. This allowed the mutation rate itself to evolve in a lineage. The model shows that when the individuals interact in a sort of zero-sum game, the lineages maintain relatively high mutation rates. However, when individuals engage in interactions that have greater consequences for one individual in the interaction than the other, lineages tend to evolve relatively low mutation rates. This model suggests that different genes may have evolved different mutation rates as adaptations to the varying pressures of interactions with other genes.",
"neighbors": [
780,
1139,
1572
],
"mask": "Train"
},
{
"node_id": 1599,
"label": 4,
"text": "Title: Finding Promising Exploration Regions by Weighting Expected Navigation Costs continuous environments, some first-order approximations to\nAbstract: In many learning tasks, data-query is neither free nor of constant cost. Often the cost of a query depends on the distance from the current location in state space to the desired query point. This is easiest to visualize in robotics environments where a robot must physically move to a location in order to learn something there. The cost of this learning is the time and effort it takes to reach the new location. Furthermore, this cost is characterized by a distance relationship: When the robot moves as directly as possible from a source state to a destination state, the states through which it passes are closer (i.e., cheaper to reach) than is the destination state. Distance relationships hold in many real-world non-robotics tasks also | any environment where states are not immediately accessible. Optimiz- ing the performance of a chemical plant, for example, requires the adjustment of analog controls which have a continuum of intermediate states. Querying possibly optimal regions of state space in these environments is inadvisable if the path to the query point intersects a region of known volatility. In discrete environments with small numbers of states, it's possible to keep track of precisely where and to what degree learning has already been done sufficiently and where it still needs to be done. It is also possible to keep best known estimates of the distances from each state to each other (see Kaelbling, 1993). Kael- bling's DG-learning algorithm is based on Floyd's all- pairs shortest-path algorithm (Aho, Hopcroft, & Ull- man 1983) and is just slightly different from that used here. These \"all-goals\" algorithms (after Kaelbling) can provide a highly satisfying representation of the distance/benefit tradeoff. where E x is the exploration value of state x (the potential benefit of exploring state x), D xy is the distance to state y, and A xy is the action to take in state x to move most cheaply to state y. This information can be learned incrementally and completely : That is, it can be guaranteed that if a path from any state x to any state y is deducible from the state transitions seen so far, then (1) the algorithm will have a non-null entry for S xy (i.e., the algorithm will know a path from x to y), and (2) The current value for D xy will be the best deducible value from all data seen so far. With this information, decisions about which areas to explore next can be based on not just the amount to be gained from such exploration but also on the cost of reaching each area together with the benefit of incidental exploration done on the way. Though optimal exploration is NP-hard (i.e., it's at least as difficult as TSP) good approximations are easily computable. One such good approximation is to take the action at each state that leads in the direction of greatest accumulated exploration benefit: ",
"neighbors": [
671,
1697
],
"mask": "Train"
},
{
"node_id": 1600,
"label": 2,
"text": "Title: First-Order vs. Second-Order Single Layer Recurrent Neural Networks \nAbstract: We examine the representational capabilities of first-order and second-order Single Layer Recurrent Neural Networks (SLRNNs) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNNs.",
"neighbors": [
411,
946,
1293
],
"mask": "Validation"
},
{
"node_id": 1601,
"label": 2,
"text": "Title: Learning the Past Tense of English Verbs: The Symbolic Pattern Associator vs. Connectionist Models \nAbstract: Learning the past tense of English verbs | a seemingly minor aspect of language acquisition | has generated heated debates since 1986, and has become a landmark task for testing the adequacy of cognitive modeling. Several artificial neural networks (ANNs) have been implemented, and a challenge for better symbolic models has been posed. In this paper, we present a general-purpose Symbolic Pattern Associator (SPA) based upon the decision-tree learning algorithm ID3. We conduct extensive head-to-head comparisons on the generalization ability between ANN models and the SPA under different representations. We conclude that the SPA generalizes the past tense of unseen verbs better than ANN models by a wide margin, and we offer insights as to why this should be the case. We also discuss a new default strategy for decision-tree learning algorithms. ",
"neighbors": [
224,
1155,
1428,
1429,
1644,
2423
],
"mask": "Train"
},
{
"node_id": 1602,
"label": 3,
"text": "Title: Defining Explanation in Probabilistic Systems \nAbstract: As probabilistic systems gain popularity and are coming into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature one due to G ardenfors and one due to Pearland show that both suffer from significant problems. We propose an approach to defining a notion of better explanation that combines some of the features of both together with more recent work by Pearl and others on causality.",
"neighbors": [
339,
971,
1326,
1549
],
"mask": "Train"
},
{
"node_id": 1603,
"label": 1,
"text": "Title: Evolving Complex Structures via Co- operative Coevolution \nAbstract: A cooperative coevolutionary approach to learning complex structures is presented which, although preliminary in nature, appears to have a number of advantages over non-coevolutionary approaches. The cooperative coevolutionary approach encourages the parallel evolution of substructures which interact in useful ways to form more complex higher level structures. The architecture is designed to be general enough to permit the inclusion, if appropriate, of a priori knowledge in the form of initial biases towards particular kinds of decompositions. A brief summary of initial results obtained from testing this architecture in several problem domains is presented which shows a significant speedup over more traditional non-coevolutionary approaches. ",
"neighbors": [
1114,
1117,
2089
],
"mask": "Train"
},
{
"node_id": 1604,
"label": 2,
"text": "Title: Analytic Comparison of Nonlinear H 1 -Norm Bounding Techniques for Low Order Systems with Saturation \nAbstract: A cooperative coevolutionary approach to learning complex structures is presented which, although preliminary in nature, appears to have a number of advantages over non-coevolutionary approaches. The cooperative coevolutionary approach encourages the parallel evolution of substructures which interact in useful ways to form more complex higher level structures. The architecture is designed to be general enough to permit the inclusion, if appropriate, of a priori knowledge in the form of initial biases towards particular kinds of decompositions. A brief summary of initial results obtained from testing this architecture in several problem domains is presented which shows a significant speedup over more traditional non-coevolutionary approaches. ",
"neighbors": [
1272,
1281,
1451
],
"mask": "Train"
},
{
"node_id": 1605,
"label": 6,
"text": "Title: Learning from the Environment by Experimentation: The Need for Few and Informative Examples \nAbstract: An intelligent system must be able to adapt and learn to correct and update its model of the environment incrementally and deliberately. In complex environments that have many parameters and where interactions have a cost, sampling the possible range of states to test the results of action executions is not a practical approach. We present a practical approach based on continuous and selective interaction with the environment that pinpoints the type of fault in the domain knowledge that causes any unexpected behavior of the environment, and resorts to experimentation when additional information is needed to correct the system's knowledge. ",
"neighbors": [
1491
],
"mask": "Test"
},
{
"node_id": 1606,
"label": 2,
"text": "Title: Pruning Recurrent Neural Networks for Improved Generalization Performance \nAbstract: Determining the architecture of a neural network is an important issue for any learning task. For recurrent neural networks no general methods exist that permit the estimation of the number of layers of hidden neurons, the size of layers or the number of weights. We present a simple pruning heuristic which significantly improves the generalization performance of trained recurrent networks. We illustrate this heuristic by training a fully recurrent neural network on positive and negative strings of a regular grammar. We also show that if rules are extracted from networks trained to recognize these strings, that rules extracted after pruning are more consistent with the rules to be learned. This performance improvement is obtained by pruning and retraining the networks. Simulations are shown for training and pruning a recurrent neural net on strings generated by two regular grammars, a randomly-generated 10-state grammar and an 8-state triple parity grammar. Further simulations indicate that this pruning method can gives generalization performance superior to that obtained by training with weight decay.",
"neighbors": [
28,
409,
826,
1293,
2381
],
"mask": "Train"
},
{
"node_id": 1607,
"label": 6,
"text": "Title: Characterizing the generalization performance of model selection strategies \nAbstract: We investigate the structure of model selection problems via the bias/variance decomposition. In particular, we characterize the essential aspects of a model selection task by the bias and variance profiles it generates over the sequence of hypothesis classes. With this view, we develop a new understanding of complexity-penalization methods: First, the penalty terms can be interpreted as postulating a particular profile for the variances as a function of model complexityif the postulated and true profiles do not match, then systematic under-fitting or over-fitting results, depending on whether the penalty terms are too large or too small. Second, we observe that it is generally best to penalize according to the true variances of the task, and therefore no fixed penalization strategy is optimal across all problems. We then use this characterization to introduce the notion of easy versus hard model selection problems. Here we show that if the variance profile grows too rapidly in relation to the biases, then standard model selection techniques become prone to significant errors. This can happen, for example, in regression problems where the independent variables are drawn from wide-tailed distributions. To counter this, we discuss a new model selection strategy that dramatically outperforms standard complexity-penalization and hold-out meth ods on these hard tasks.",
"neighbors": [
848,
1053,
1223,
1335
],
"mask": "Validation"
},
{
"node_id": 1608,
"label": 3,
"text": "Title: Combining estimates in regression and classification \nAbstract: We consider the problem of how to combine a collection of general regression fit vectors in order to obtain a better predictive model. The individual fits may be from subset linear regression, ridge regression, or something more complex like a neural network. We develop a general framework for this problem and examine a recent cross-validation-based proposal called \"stacking\" in this context. Combination methods based on the bootstrap and analytic methods are also derived and compared in a number of examples, including best subsets regression and regression trees. Finally, we apply these ideas to classification problems where the estimated combination weights can yield insight into the structure of the problem.",
"neighbors": [
431,
949,
987,
1220,
1512,
2225
],
"mask": "Test"
},
{
"node_id": 1609,
"label": 5,
"text": "Title: Prognosing the femoral neck fracture recovery with machine learning \nAbstract: We compare the performance and explanation abilities of several machine learning algorithms in the problem of predicting the femoral neck fracture recovery. Among different algorithms, the semi naive Bayesian classifier and Assistant-R seem to be the most appropriate. We analyze the combination of decisions of several classifiers for solving the prediction problem and show that the combined classifier improves both the performance and explanation ability. ",
"neighbors": [
1569
],
"mask": "Validation"
},
{
"node_id": 1610,
"label": 2,
"text": "Title: Using Fourier-Neural Recurrent Networks to Fit Sequential Input/Output Data \nAbstract: This paper suggests the use of Fourier-type activation functions in fully recurrent neural networks. The main theoretical advantage is that, in principle, the problem of recovering internal coefficients from input/output data is solvable in closed form.",
"neighbors": [
1028,
1037
],
"mask": "Train"
},
{
"node_id": 1611,
"label": 1,
"text": "Title: Island Model Genetic Algorithms and Linearly Separable Problems \nAbstract: Parallel Genetic Algorithms have often been reported to yield better performance than Genetic Algorithms which use a single large panmictic population. In the case of the Island Model Genetic Algorithm, it has been informally argued that having multiple subpopulations helps to preserve genetic diversity, since each island can potentially follow a different search trajectory through the search space. On the other hand, linearly separable functions have often been used to test Island Model Genetic Algorithms; it is possible that Island Models are particular well suited to separable problems. We look at how Island Models can track multiple search trajectories using the infinite population models of the simple genetic algorithm. We also introduce a simple model for better understanding when Island Model Genetic Algorithms may have an advantage when processing linearly separable problems.",
"neighbors": [
100,
163,
1153,
1379,
1380
],
"mask": "Train"
},
{
"node_id": 1612,
"label": 2,
"text": "Title: Bootstrapping with Noise: An Effective Regularization Technique \nAbstract: Bootstrap samples with noise are shown to be an effective smoothness and capacity control technique for training feed-forward networks and for other statistical methods such as generalized additive models. It is shown that noisy bootstrap performs best in conjunction with weight decay regularization and ensemble averaging. The two-spiral problem, a highly non-linear noise-free data, is used to demonstrate these findings. The combination of noisy bootstrap and ensemble averaging is also shown useful for generalized additive modeling, and is also demonstrated on the well known Cleveland Heart Data [7].",
"neighbors": [
1517
],
"mask": "Train"
},
{
"node_id": 1613,
"label": 3,
"text": "Title: Priors and Component Structures in Autoregressive Time Series Models \nAbstract: New approaches to prior specification and structuring in autoregressive time series models are introduced and developed. We focus on defining classes of prior distributions for parameters and latent variables related to latent components of an autoregressive model for an observed time series. These new priors naturally permit the incorporation of both qualitative and quantitative prior information about the number and relative importance of physically meaningful components that represent low frequency trends, quasi-periodic sub-processes, and high frequency residual noise components of observed series. The class of priors also naturally incorporates uncertainty about model order, and hence leads in posterior analysis to model order assessment and resulting posterior and predictive inferences that incorporate full uncertainties about model order as well as model parameters. Analysis also formally incorporates uncertainty, and leads to inferences about, unknown initial values of the time series, as it does for predictions of future values. Posterior analysis involves easily implemented iterative simulation methods, developed and described here. One motivating applied field is climatology, where the evaluation of latent structure, especially quasi-periodic structure, is of critical importance in connection with issues of global climatic variability. We explore analysis of data from the Southern Oscillation Index (SOI), one of several series that has been central in recent high-profile debates in the atmospheric sciences about recent apparent trends in climatic indicators. ",
"neighbors": [
99,
784,
1162,
1614,
1619
],
"mask": "Validation"
},
{
"node_id": 1614,
"label": 3,
"text": "Title: Bayesian Inference on Periodicities and Component Spectral Structure in Time Series \nAbstract: Summary We detail and illustrate time series analysis and spectral inference in autoregressive models with a focus on the underlying latent structure and time series decompositions. A novel class of priors on parameters of latent components leads to a new class of smoothness priors on autoregressive coefficients, provides for formal inference on model order, including very high order models, and leads to the incorporation of uncertainty about model order into summary inferences. The class of prior models also allows for subsets of unit roots, and hence leads to inference on sustained though stochastically time-varying periodicities in time series. Applications to analysis of the frequency composition of time series, in both time and spectral domains, is illustrated in a study of a time series from astronomy. This analyses demonstrates the impact and utility of the new class of priors in addressing model order uncertainty and in allowing for unit root structure. Time domain decomposition of a time series into estimated latent components provides an important alternative view of the component spectral characteristics of a series. In addition, our data analysis illustrates the utility of the smoothness prior and allowance for unit root structure in inference about spectral densities. In particular, the framework overcomes supposed problems in spectral estimation with autoregressive models using more traditional model fitting methods. ",
"neighbors": [
1613,
1619
],
"mask": "Validation"
},
{
"node_id": 1615,
"label": 2,
"text": "Title: A Provably Convergent Dynamic Training Method for Multilayer Perceptron Networks \nAbstract: This paper presents a new method for training multilayer perceptron networks called DMP1 (Dynamic Multilayer Perceptron 1). The method is based upon a divide and conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. The individual nodes of the network are trained using a gentetic algorithm. The method is capable of handling real-valued inputs and a proof is given concerning its convergence properties of the basic model. Simulation results show that DMP1 performs favorably in comparison with other learning algorithms. ",
"neighbors": [
809,
1229,
1639
],
"mask": "Train"
},
{
"node_id": 1616,
"label": 4,
"text": "Title: NeuroDraughts: the role of representation, search, training regime and architecture in a TD draughts player \nAbstract: NeuroDraughts is a draughts playing program similar in approach to NeuroGammon and NeuroChess [Tesauro, 1992, Thrun, 1995]. It uses an artificial neural network trained by the method of temporal difference learning to learn by self-play how to play the game of draughts. This paper discusses the relative contribution of board representation, search depth, training regime, architecture and run time parameters to the strength of the TDplayer produced by the system. Keywords: Temporal Difference Learning, Input representation, Search, Draughts. ",
"neighbors": [
523,
565,
882
],
"mask": "Validation"
},
{
"node_id": 1617,
"label": 0,
"text": "Title: Knowledge Discovery in International Conflict Databases \nAbstract: Artificial Intelligence is heavily supported by military institutions, while practically no effort goes into the investigation of possible contributions of AI to the avoidance and termination of crises and wars. This paper makes a first step into this direction by investigating the use of machine learning techniques for discovering knowledge in international conflict and conflict management databases. We have applied similarity-based case retrieval to the KOSIMO database of international conflicts. Furthermore, we present results of analyzing the CONFMAN database of successful and unsuccessful conflict management attempts with an inductive decision tree learning algorithm. The latter approach seems to be particularly promising, as conflict management events apparently are more repetitive and thus better suited for machine-aided analysis. ",
"neighbors": [
236,
430,
1107
],
"mask": "Test"
},
{
"node_id": 1618,
"label": 6,
"text": "Title: Selection of Relevant Features and Examples in Machine Learning \nAbstract: In this survey, we review work in machine learning on methods for handling data sets containing large amounts of irrelevant information. We focus on two key issues: the problem of selecting relevant features, and the problem of selecting relevant examples. We describe the advances that have been made on these topics in both empirical and theoretical work in machine learning, and we present a general framework that we use to compare different methods. We close with some challenges for future work in this area. ",
"neighbors": [
1112,
1220,
2343
],
"mask": "Validation"
},
{
"node_id": 1619,
"label": 3,
"text": "Title: Exploratory Modelling of Multiple Non-Stationary Time Series: Latent Process Structure and Decompositions \nAbstract: We describe and illustrate Bayesian approaches to modelling and analysis of multiple non-stationary time series. This begins with uni-variate models for collections of related time series assumedly driven by underlying but unobservable processes, referred to as dynamic latent factor processes. We focus on models in which the factor processes, and hence the observed time series, are modelled by time-varying autoregressions capable of flexibly representing ranges of observed non-stationary characteristics. We highlight concepts and new methods of time series decomposition to infer characteristics of latent components in time series, and relate uni-variate decomposition analyses to underlying multivariate dynamic factor structure. Our motivating application is in analysis of multiple EEG traces from an ongoing EEG study at Duke. In this study, individuals undergoing ECT therapy generate multiple EEG traces at various scalp locations, and physiological interest lies in identifying dependencies and dissimilarities across series. In addition to the multivariate and non-stationary aspects of the series, this area provides illustration of the new results about decomposition of time series into latent, physically interpretable components; this is illustrated in data analysis of one EEG data set. The paper also discusses current and future research directions. fl This research was supported in part by the National Science Foundation under grant DMS-9311071. The EEG data and context arose from discussions with Dr Andrew Krystal, of Duke University Medical Center, with whom continued interactions have been most valuable. Address for correspondence: Institute of Statistics and Decision Sciences, Duke University, Durham, NC 27708-0251 U.S.A. (http://www.stat.duke.edu) ",
"neighbors": [
99,
1613,
1614,
1722,
1723
],
"mask": "Test"
},
{
"node_id": 1620,
"label": 5,
"text": "Title: Efficient -Subsumption based on Graph Algorithms \nAbstract: The -subsumption problem is crucial to the efficiency of ILP learning systems. We discuss two -subsumption algorithms based on strategies for preselecting suitable matching literals. The class of clauses, for which subsumption becomes polynomial, is a superset of the deterministic clauses. We further map the general problem of -subsumption to a certain problem of finding a clique of fixed size in a graph, and in return show that a specialization of the pruning strategy of the Car-raghan and Pardalos clique algorithm provides a dramatic reduction of the subsumption search space. We also present empirical results for the mesh design data set.",
"neighbors": [
1177,
1180
],
"mask": "Train"
},
{
"node_id": 1621,
"label": 0,
"text": "Title: Evaluating the Effectiveness of Derivation Replay in Partial-order vs State-space Planning \nAbstract: Case-based planning involves storing individual instances of problem-solving episodes and using them to tackle new planning problems. This paper is concerned with derivation replay, which is the main component of a form of case-based planning called derivational analogy (DA). Prior to this study, implementations of derivation replay have been based within state-space planning. We are motivated by the acknowledged superiority of partial-order (PO) planners in plan generation. Here we demonstrate that plan-space planning also has an advantage in replay. We will argue that the decoupling of planning (derivation) order and the execution order of plan steps, provided by partial-order planners, enables them to exploit the guidance of previous cases in a more efficient and straightforward fashion. We validate our hypothesis through a focused empirical comparison. ",
"neighbors": [
300,
594,
752,
824,
1194,
1448
],
"mask": "Train"
},
{
"node_id": 1622,
"label": 5,
"text": "Title: Stochastic Propositionalization of Non-Determinate Background Knowledge \nAbstract: It is a well-known fact that propositional learning algorithms require \"good\" features to perform well in practice. So a major step in data engineering for inductive learning is the construction of good features by domain experts. These features often represent properties of structured objects, where a property typically is the occurrence of a certain substructure having certain properties. To partly automate the process of \"feature engineering\", we devised an algorithm that searches for features which are defined by such substructures. The algorithm stochastically conducts a top-down search for first-order clauses, where each clause represents a binary feature. It differs from existing algorithms in that its search is not class-blind, and that it is capable of considering clauses (\"context\") of almost arbitrary length (size). Preliminary experiments are favorable, and support the view that this approach is promising.",
"neighbors": [
344,
1312,
1322,
1428,
1578
],
"mask": "Validation"
},
{
"node_id": 1623,
"label": 2,
"text": "Title: A Neural Network Model for the Gold Market \nAbstract: A neural network trend predictor for the gold bullion market is presented. A simple recurrent neural network was trained to recognize turning points in the gold market based on a to-date history of ten market indices. The network was tested on data that was held back from training, and a significant amount of predictive power was observed. The turning point predictions can be used to time transactions in the gold bullion and gold mining company stock index markets to obtain a significant paper profit during the test period. The training data consisted of daily closing prices for the ten input markets for a period of about five years. The turning point targets were labeled for the training phase without the help of a financial expert. Thus, this experiment shows that useful predictions can be made without the use of more extensive market data or knowledge. ",
"neighbors": [
1313
],
"mask": "Validation"
},
{
"node_id": 1624,
"label": 3,
"text": "Title: Causal Discovery via MML \nAbstract: Automating the learning of causal models from sample data is a key step toward incorporating machine learning in the automation of decision-making and reasoning under uncertainty. This paper presents a Bayesian approach to the discovery of causal models, using a Minimum Message Length (MML) method. We have developed encoding and search methods for discovering linear causal models. The initial experimental results presented in this paper show that the MML induction approach can recover causal models from generated data which are quite accurate reflections of the original models; our results compare favorably with those of the TETRAD II program of Spirtes et al. [25] even when their algorithm is supplied with prior temporal information and MML is not. ",
"neighbors": [
1550
],
"mask": "Validation"
},
{
"node_id": 1625,
"label": 4,
"text": "Title: Reinforcement Learning by Probability Matching \nAbstract: We present a new algorithm for associative reinforcement learning. The algorithm is based upon the idea of matching a network's output probability with a probability distribution derived from the environment's reward signal. This Probability Matching algorithm is shown to perform faster and be less susceptible to local minima than previously existing algorithms. We use Probability Matching to train mixture of experts networks, an architecture for which other reinforcement learning rules fail to converge reliably on even simple problems. This architecture is particularly well suited for our algorithm as it can compute arbitrarily complex functions yet calculation of the output probability is simple.",
"neighbors": [
1580
],
"mask": "Train"
},
{
"node_id": 1626,
"label": 0,
"text": "Title: A Review and Empirical Evaluation of Feature Weighting Methods for a Class of Lazy Learning Algorithms \nAbstract: Many lazy learning algorithms are derivatives of the k-nearest neighbor (k-NN) classifier, which uses a distance function to generate predictions from stored instances. Several studies have shown that k-NN's performance is highly sensitive to the definition of its distance function. Many k-NN variants have been proposed to reduce this sensitivity by parameterizing the distance function with feature weights. However, these variants have not been categorized nor empirically compared. This paper reviews a class of weight-setting methods for lazy learning algorithms. We introduce a framework for distinguishing these methods and empirically compare them. We observed four trends from our experiments and conducted further studies to highlight them. Our results suggest that methods which use performance feedback to assign weight settings demonstrated three advantages over other methods: they require less pre-processing, perform better in the presence of interacting features, and generally require less training data to learn good settings. We also found that continuous weighting methods tend to outperform feature selection algorithms for tasks where some features are useful but less important than others.",
"neighbors": [
1164,
1407,
1584,
1698,
1735
],
"mask": "Train"
},
{
"node_id": 1627,
"label": 5,
"text": "Title: Inductive Learning of Characteristic Concept Descriptions from Small Sets of Classified Examples \nAbstract: This paper presents a novel idea to the problem of learning concept descriptions from examples. Whereas most existing approaches rely on a large number of classified examples, the approach presented in the paper is aimed at being applicable when only a few examples are classified as positive (and negative) instances of a concept. The approach tries to take advantage of the information which can be induced from descriptions of unclassified objects using a conceptual clustering algorithm. The system Cola is described and results of applying Cola in two real-world domains are presented. ",
"neighbors": [
344,
479,
1177
],
"mask": "Train"
},
{
"node_id": 1628,
"label": 1,
"text": "Title: Local Selection \nAbstract: Local selection (LS) is a very simple selection scheme in evolutionary algorithms. Individual fitnesses are compared to a fixed threshold, rather than to each other, to decide who gets to reproduce. LS, coupled with fitness functions stemming from the consumption of shared environmental resources, maintains diversity in a way similar to fitness sharing; however it is generally more efficient than fitness sharing, and lends itself to parallel implementations for distributed tasks. While LS is not prone to premature convergence, it applies minimal selection pressure upon the population. LS is therefore more appropriate than other, stronger selection schemes only on certain problem classes. This papers characterizes one broad class of problems in which LS consistently out performs tournament selection.",
"neighbors": [
1063,
1175
],
"mask": "Train"
},
{
"node_id": 1629,
"label": 2,
"text": "Title: Regional Stability of an ERS/JERS-1 Classifer \nAbstract: The potential of combined ERS/JERS-1 SAR images for land cover classification was demonstrated for the Raco test site (Michigan) in recent papers and articles. Our goal is to develop a classification algorithm which is stable in terms of applicability in different geographical regions. Unlike optical remote sensing techniques, radar remote sensing can provide calibrated data where the image signal is solely determined by the physical (structural) and electrical properties of the targets on the Earth's surface and near subsurface. Hence, a classifier based on radar signatures of object classes should be applicable on new calibrated images without the need to train the classifier again. This article discusses the design and applicability of a classification algorithm, which is based on calibrated radar signatures measured from ERS-1 (C-band, vv polarized) and JERS-1 (L-band, hh polarized) SAR image data. The applicability is compared in two different test sites, Raco, Michigan and the Cedar Creek LTER site, Minnesota. It was found, that classes separate very well, when certain boundary conditions like comparable seasonality or soil moisture conditions are observed. ",
"neighbors": [
796
],
"mask": "Validation"
},
{
"node_id": 1630,
"label": 2,
"text": "Title: Averaging and Data Snooping \nAbstract: Presenting and Analyzing the Results of AI Experiments: Data Averaging and Data Snooping, Proceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI-97, AAAI Press, Menlo Park, California, pp. 362367, 1997. Copyright AAAI. Presenting and Analyzing the Results of AI Experiments: Abstract Experimental results reported in the machine learning AI literature can be misleading. This paper investigates the common processes of data averaging (reporting results in terms of the mean and standard deviation of the results from multiple trials) and data snooping in the context of neural networks, one of the most popular AI machine learning models. Both of these processes can result in misleading results and inaccurate conclusions. We demonstrate how easily this can happen and propose techniques for avoiding these very important problems. For data averaging, common presentation assumes that the distribution of individual results is Gaussian. However, we investigate the distribution for common problems and find that it often does not approximate the Gaussian distribution, may not be symmetric, and may be multimodal. We show that assuming Gaussian distributions can significantly affect the interpretation of results, especially those of comparison studies. For a controlled task, we find that the distribution of performance is skewed towards better performance for smoother target functions and skewed towards worse performance for more complex target functions. We propose new guidelines for reporting performance which provide more information about the actual distribution (e.g. box-whiskers plots). For data snooping, we demonstrate that optimization of performance via experimentation with multiple parameters can lead to significance being assigned to results which are due to chance. We suggest that precise descriptions of experimental techniques can be very important to the evaluation of results, and that we need to be aware of potential data snooping biases when formulating these experimental techniques (e.g. selecting the test procedure). Additionally, it is important to only rely on appropriate statistical tests and to ensure that any assumptions made in the tests are valid (e.g. normality of the distribution). ",
"neighbors": [
1150,
1195,
1203
],
"mask": "Validation"
},
{
"node_id": 1631,
"label": 1,
"text": "Title: A Survey of Intron Research in Genetics \nAbstract: A brief survey of biological research on non-coding DNA is presented here. There has been growing interest in the effects of non-coding segments in evolutionary algorithms (EAs). To better understand and conduct research on non-coding segments and EAs, it is important to understand the biological background of such work. This paper begins with a review of basic genetics and terminology, describes the different types of non-coding DNA, and then surveys recent intron research.",
"neighbors": [
934,
2330,
2407,
2598,
2604
],
"mask": "Validation"
},
{
"node_id": 1632,
"label": 4,
"text": "Title: Learning Team Strategies With Multiple Policy-Sharing Agents: A Soccer Case Study \nAbstract: We use simulated soccer to study multiagent learning. Each team's players (agents) share action set and policy but may behave differently due to position-dependent inputs. All agents making up a team are rewarded or punished collectively in case of goals. We conduct simulations with varying team sizes, and compare two learning algorithms: TD-Q learning with linear neural networks (TD-Q) and Probabilistic Incremental Program Evolution (PIPE). TD-Q is based on evaluation functions (EFs) mapping input/action pairs to expected reward, while PIPE searches policy space directly. PIPE uses adaptive \"probabilistic prototype trees\" to synthesize programs that calculate action probabilities from current inputs. Our results show that TD-Q encounters several difficulties in learning appropriate shared EFs. PIPE, however, does not depend on EFs and can find good policies faster and more reliably. This suggests that in multiagent learning scenarios direct search through policy space can offer advantages over EF-based approaches. ",
"neighbors": [
68,
471,
1687
],
"mask": "Train"
},
{
"node_id": 1633,
"label": 2,
"text": "Title: Changing Supply Functions in Input/State Stable Systems \nAbstract: We consider the problem of characterizing possible supply functions for a given dissipative nonlinear system, and provide a result that allows some freedom in the modification of such functions. ",
"neighbors": [
1471
],
"mask": "Train"
},
{
"node_id": 1634,
"label": 2,
"text": "Title: Combining Linear Discriminant Functions with Neural Networks for Supervised Learning \nAbstract: A novel supervised learning method is presented by combining linear discriminant functions with neural networks. The proposed method results in a tree-structured hybrid architecture. Due to constructive learning, the binary tree hierarchical architecture is automatically generated by a controlled growing process for a specific supervised learning task. Unlike the classic decision tree, the linear discriminant functions are merely employed in the intermediate level of the tree for heuristically partitioning a large and complicated task into several smaller and simpler subtasks in the proposed method. These subtasks are dealt with by component neural networks at the leaves of the tree accordingly. For constructive learning, growing and credit-assignment algorithms are developed to serve for the hybrid architecture. The proposed architecture provides an efficient way to apply existing neural networks (e.g. multi-layered perceptron) for solving a large scale problem. We have already applied the proposed method to a universal approximation problem and several benchmark classification problems in order to evaluate its performance. Simulation results have shown that the proposed method yields better results and faster training in comparison with the multi-layered perceptron.",
"neighbors": [
74,
1252
],
"mask": "Test"
},
{
"node_id": 1635,
"label": 0,
"text": "Title: Redesigning control knowledge of knowledge-based systems: machine learning meets knowledge engineering \nAbstract: Machine learning and knowledge engineering have always been strongly related, but the introduction of new representations in knowledge engineering has created a gap between them. This paper describes research aimed at applying machine learning techniques to the current knowledge engineering representations. We propose a system that redesigns a part of a knowledge based system, the so called control knowledge. We claim a strong similarity between redesign of knowledge based systems and incremental machine learning. Finally we will relate this work to existing research. ",
"neighbors": [
1214,
1706
],
"mask": "Train"
},
{
"node_id": 1636,
"label": 0,
"text": "Title: Context-Sensitive Feature Selection for Lazy Learners \nAbstract: ",
"neighbors": [
245,
928,
983,
1073,
1684,
2074
],
"mask": "Validation"
},
{
"node_id": 1637,
"label": 2,
"text": "Title: The Effective Size of a Neural Network: A Principal Component Approach \nAbstract: Often when learning from data, one attaches a penalty term to a standard error term in an attempt to prefer simple models and prevent overfitting. Current penalty terms for neural networks, however, often do not take into account weight interaction. This is a critical drawback since the effective number of parameters in a network usually differs dramatically from the total number of possible parameters. In this paper, we present a penalty term that uses Principal Component Analysis to help detect functional redundancy in a neural network. Results show that our new algorithm gives a much more accurate estimate of network complexity than do standard approaches. As a result, our new term should be able to improve techniques that make use of a penalty term, such as weight decay, weight pruning, feature selection, Bayesian, and prediction-risk tech niques.",
"neighbors": [
157,
430,
1562
],
"mask": "Validation"
},
{
"node_id": 1638,
"label": 1,
"text": "Title: Walsh Functions and Predicting Problem Complexity \nAbstract: ",
"neighbors": [
1441
],
"mask": "Validation"
},
{
"node_id": 1639,
"label": 2,
"text": "Title: The Effect of Decision Surface Fitness on Dynamic Multilayer Perceptron Networks (DMP1) \nAbstract: The DMP1 (Dynamic Multilayer Perceptron 1) network training method is based upon a divide and conquer approach which builds networks in the form of binary trees, dynamically allocating nodes and layers as needed. This paper introduces the DMP1 method, and compares the preformance of DMP1 when using the standard delta rule training method for training individual nodes against the performance of DMP1 when using a genetic algorithm for training. While the basic model does not require the use of a genetic algorithm for training individual nodes, the results show that the convergence properties of DMP1 are enhanced by the use of a genetic algorithm with an appropriate fitness function. ",
"neighbors": [
809,
1615
],
"mask": "Train"
},
{
"node_id": 1640,
"label": 0,
"text": "Title: KRITIK: AN EARLY CASE-BASED DESIGN SYSTEM \nAbstract: In the late 1980s, we developed one of the early case-based design systems called Kritik. Kritik autonomously generated preliminary (conceptual, qualitative) designs for physical devices by retrieving and adapting past designs stored in its case memory. Each case in the system had an associated structure-behavior-function (SBF) device model that explained how the structure of the device accomplished its functions. These casespecific device models guided the process of modifying a past design to meet the functional specification of a new design problem. The device models also enabled verification of the design modifications. Kritik2 is a new and more complete implementation of Kritik. In this paper, we take a retrospective view on Kritik. In early papers, we had described Kritik as integrating case-based and model-based reasoning. In this integration, Kritik also grounds the computational process of case-based reasoning in the SBF content theory of device comprehension. The SBF models not only provide methods for many specific tasks in case-based design such as design adaptation and verification, but they also provide the vocabulary for the whole process of case-based design, from retrieval of old cases to storage of new ones. This grounding, we believe, is essential for building well-constrained theories of case-based design. ",
"neighbors": [
540,
603,
1121,
1345,
1355,
2706
],
"mask": "Test"
},
{
"node_id": 1641,
"label": 3,
"text": "Title: Learning Bayesian Networks from Incomplete Data \nAbstract: Much of the current research in learning Bayesian Networks fails to effectively deal with missing data. Most of the methods assume that the data is complete, or make the data complete using fairly ad-hoc methods; other methods do deal with missing data but learn only the conditional probabilities, assuming that the structure is known. We present a principled approach to learn both the Bayesian network structure as well as the conditional probabilities from incomplete data. The proposed algorithm is an iterative method that uses a combination of Expectation-Maximization (EM) and Imputation techniques. Results are presented on synthetic data sets which show that the performance of the new algorithm is much better than ad-hoc methods for handling missing data. ",
"neighbors": [
71,
558,
1086
],
"mask": "Test"
},
{
"node_id": 1642,
"label": 0,
"text": "Title: CHIRON: Planning in an Open-Textured Domain \nAbstract: Much of the current research in learning Bayesian Networks fails to effectively deal with missing data. Most of the methods assume that the data is complete, or make the data complete using fairly ad-hoc methods; other methods do deal with missing data but learn only the conditional probabilities, assuming that the structure is known. We present a principled approach to learn both the Bayesian network structure as well as the conditional probabilities from incomplete data. The proposed algorithm is an iterative method that uses a combination of Expectation-Maximization (EM) and Imputation techniques. Results are presented on synthetic data sets which show that the performance of the new algorithm is much better than ad-hoc methods for handling missing data. ",
"neighbors": [
313,
986,
1377,
1475
],
"mask": "Train"
},
{
"node_id": 1643,
"label": 4,
"text": "Title: Learning to coordinate without sharing information \nAbstract: Researchers in the field of Distributed Artificial Intelligence (DAI) have been developing efficient mechanisms to coordinate the activities of multiple autonomous agents. The need for coordination arises because agents have to share resources and expertise required to achieve their goals. Previous work in the area includes using sophisticated information exchange protocols, investigating heuristics for negotiation, and developing formal models of possibilities of conflict and cooperation among agent interests. In order to handle the changing requirements of continuous and dynamic environments, we propose learning as a means to provide additional possibilities for effective coordination. We use reinforcement learning techniques on a block pushing problem to show that agents can learn complimentary policies to follow a desired path without any knowledge about each other. We theoretically analyze and experimentally verify the effects of learning rate on system convergence, and demonstrate benefits of using learned coordination knowledge on similar problems. Reinforcement learning based coordination can be achieved in both cooperative and non-cooperative domains, and in domains with noisy communication channels and other stochastic characteristics that present a formidable challenge to using other coordination schemes. ",
"neighbors": [
566,
649,
773,
868,
1189,
1649
],
"mask": "Train"
},
{
"node_id": 1644,
"label": 2,
"text": "Title: A Comparative Study of ID3 and Backpropagation for English Text-to-Speech Mapping \nAbstract: The performance of the error backpropagation (BP) and ID3 learning algorithms was compared on the task of mapping English text to phonemes and stresses. Under the distributed output code developed by Sejnowski and Rosenberg, it is shown that BP consistently out-performs ID3 on this task by several percentage points. Three hypotheses explaining this difference were explored: (a) ID3 is overfitting the training data, (b) BP is able to share hidden units across several output units and hence can learn the output units better, and (c) BP captures statistical information that ID3 does not. We conclude that only hypothesis (c) is correct. By augmenting ID3 with a simple statistical learning procedure, the performance of BP can be approached but not matched. More complex statistical procedures can improve the performance of both BP and ID3 substantially. A study of the residual errors suggests that there is still substantial room for improvement in learning methods for text-to-speech mapping.",
"neighbors": [
318,
322,
378,
462,
701,
822,
986,
1256,
1290,
1328,
1601,
1732,
1862,
1863,
1964,
2364,
2409,
2423,
2484,
2614,
2616
],
"mask": "Train"
},
{
"node_id": 1645,
"label": 2,
"text": "Title: Acquiring the mapping from meaning to sounds \nAbstract: 1 We thank Steen Ladegaard Knudsen for his assistance in programming, analysis and running of simulations, Scott Baden for his assistance in vectorizing our code for the Cray Y-MP, the Division of Engineering Block Grant for time on the Cray at the San Diego Supercomputer Center, and the members of the PDPNLP and GURU Research Groups at UCSD for helpful comments on earlier versions of this work. ",
"neighbors": [
204,
477,
797
],
"mask": "Test"
},
{
"node_id": 1646,
"label": 1,
"text": "Title: Generation Gaps Revisited \nAbstract: There has been a lot of recent interest in so-called \"steady state\" genetic algorithms (GAs) which, among other things, replace only a few individuals (typically 1 or 2) each generation from a fixed size population of size N. Understanding the advantages and/or disadvantages of replacing only a fraction of the population each generation (rather than the entire population) was a goal of some of the earliest GA research. In spite of considerable progress in our understanding of GAs since then, the pros/cons of overlapping generations remains a somewhat cloudy issue. However, recent theoretical and empirical results provide the background for a much clearer understanding of this issue. In this paper we review, combine, and extend these results in a way that significantly sharpens our insight.",
"neighbors": [
145,
943
],
"mask": "Train"
},
{
"node_id": 1647,
"label": 6,
"text": "Title: Recognition and Exploitation of Contextual Clues via Incremental Meta-Learning (Extended Version) \nAbstract: Daily experience shows that in the real world, the meaning of many concepts heavily depends on some implicit context, and changes in that context can cause more or less radical changes in the concepts. Incremental concept learning in such domains requires the ability to recognize and adapt to such changes. This paper presents a solution for incremental learning tasks where the domain provides explicit clues as to the current context (e.g., attributes with characteristic values). We present a general two-level learning model, and its realization in a system named MetaL(B), that can learn to detect certain types of contextual clues, and can react accordingly when a context change is suspected. The model consists of a base level learner that performs the regular on-line learning and classification task, and a meta-learner that identifies potential contextual clues. Context learning and detection occur during regular on-line learning, without separate training phases for context recognition. Experiments with synthetic domains as well as a `real-world' problem show that MetaL(B) is robust in a variety of dimensions and produces substantial improvement over simple object-level learning in situations with changing contexts. ",
"neighbors": [
1684,
1908,
2074,
2586,
2615
],
"mask": "Train"
},
{
"node_id": 1648,
"label": 0,
"text": "Title: Creative Design: Reasoning and Understanding \nAbstract: This paper investigates memory issues that influence long- term creative problem solving and design activity, taking a case-based reasoning perspective. Our exploration is based on a well-documented example: the invention of the telephone by Alexander Graham Bell. We abstract Bell's reasoning and understanding mechanisms that appear time and again in long-term creative design. We identify that the understanding mechanism is responsible for analogical anticipation of design constraints and analogical evaluation, beside case-based design. But an already understood design can satisfy opportunistically suspended design problems, still active in background. The new mechanisms are integrated in a computational model, ALEC 1 , that accounts for some creative be ",
"neighbors": [
1355
],
"mask": "Train"
},
{
"node_id": 1649,
"label": 4,
"text": "Title: Multi-Agent Reinforcement Learning: Independent vs. Cooperative Agents \nAbstract: Intelligent human agents exist in a cooperative social environment that facilitates learning. They learn not only by trial-and-error, but also through cooperation by sharing instantaneous information, episodic experience, and learned knowledge. The key investigations of this paper are, \"Given the same number of reinforcement learning agents, will cooperative agents outperform independent agents who do not communicate during learning?\" and \"What is the price for such cooperation?\" Using independent agents as a benchmark, cooperative agents are studied in following ways: (1) sharing sensation, (2) sharing episodes, and (3) sharing learned policies. This paper shows that (a) additional sensation from another agent is beneficial if it can be used efficiently, (b) sharing learned policies or episodes among agents speeds up learning at the cost of communication, and (c) for joint tasks, agents engaging in partnership can significantly outperform independent agents although they may learn slowly in the beginning. These tradeoffs are not just limited to multi-agent reinforcement learning.",
"neighbors": [
148,
691,
868,
1213,
1228,
1557,
1643,
1687
],
"mask": "Train"
},
{
"node_id": 1650,
"label": 1,
"text": "Title: Genetic Algorithms For Vertex Splitting in DAGs 1 \nAbstract: 1 This paper has been submitted to the 5th International Conference on Genetic Algorithms 2 electronic mail address: matze@cs.umr.edu 3 electronic mail address: ercal@cs.umr.edu ",
"neighbors": [
163,
1466
],
"mask": "Train"
},
{
"node_id": 1651,
"label": 5,
"text": "Title: An application of ILP in a musical database: learning to compose the two-voice counterpoint \nAbstract: We describe SFOIL, a descendant of FOIL that uses the advanced stochastic search heuristic, and its application in learning to compose the two-voice counterpoint. The application required learning a 4-ary relation from more than 20.000 training instances. SFOIL is able to efficiently deal with this learning task which is to our knowledge one of the most complex learning task solved by an ILP system. This demonstrates that ILP systems can scale up to real databases and that top-down ILP systems that use the covering approach and advanced search strategies are appropriate for knowledge discovery in databases and are promising for further investigation. ",
"neighbors": [
877,
1010,
1061,
1578
],
"mask": "Train"
},
{
"node_id": 1652,
"label": 2,
"text": "Title: A Theory of Visual Relative Motion Perception: Grouping, Binding, and Gestalt Organization \nAbstract: The human visual system is more sensitive to the relative motion of objects than to their absolute motion. An understanding of motion perception requires an understanding of how neural circuits can group moving visual elements relative to one another, based upon hierarchical reference frames. We have modeled visual relative motion perception using a neural network architecture that groups visual elements according to Gestalt common-fate principles and exploits information about the behavior of each group to predict the behavior of individual elements. A simple competitive neural circuit binds visual elements together into a representation of a visual object. Information about the spiking pattern of neurons allows transfer of the bindings of an object representation from location to location in the neural circuit as the object moves. The model exhibits characteristics of human object grouping and solves some key neural circuit design problems in visual relative motion perception. ",
"neighbors": [
1659
],
"mask": "Train"
},
{
"node_id": 1653,
"label": 0,
"text": "Title: Problem Solving for Redesign \nAbstract: A knowledge-level analysis of complex tasks like diagnosis and design can give us a better understanding of these tasks in terms of the goals they aim to achieve and the different ways to achieve these goals. In this paper we present a knowledge-level analysis of redesign. Redesign is viewed as a family of methods based on some common principles, and a number of dimensions along which redesign problem solving methods can vary are distinguished. By examining the problem-solving behavior of a number of existing redesign systems and approaches, we came up with a collection of problem-solving methods for redesign and developed a task-method structure for redesign. In constructing a system for redesign a large number of knowledge-related choices and decisions are made. In order to describe all relevant choices in redesign problem solving, we have to extend the current notion of possible relations between tasks and methods in a PSM architecture. The realization of a task by a problem-solving method, and the decomposition of a problem-solving method into subtasks are the most common relations in a PSM architecture. However, we suggest to extend these relations with the notions of task refinement and method refinement. These notions represent intermediate decisions in a task-method structure, in which the competence of a task or method is refined without immediately paying attention to its operationalization in terms of subtasks. Explicit representation of this kind of intermediate decisions helps to make and represent decisions in a more piecemeal fashion. ",
"neighbors": [
1385
],
"mask": "Test"
},
{
"node_id": 1654,
"label": 3,
"text": "Title: Hyperparameter estimation in Dirichlet process mixture models \nAbstract: In Bayesian density estimation and prediction using Dirichlet process mixtures of standard, exponential family distributions, the precision or total mass parameter of the mixing Dirichlet process is a critical hyperparame-ter that strongly influences resulting inferences about numbers of mixture components. This note shows how, with respect to a flexible class of prior distributions for this parameter, the posterior may be represented in a simple conditional form that is easily simulated. As a result, inference about this key quantity may be developed in tandem with the existing, routine Gibbs sampling algorithms for fitting such mixture models. The concept of data augmentation is important, as ever, in developing this extension of the existing algorithm. A final section notes an simple asymptotic approx imation to the posterior. ",
"neighbors": [
852,
855,
1338
],
"mask": "Train"
},
{
"node_id": 1655,
"label": 2,
"text": "Title: Strategies for the Parallel Training of Simple Recurrent Neural Networks \nAbstract: In Bayesian density estimation and prediction using Dirichlet process mixtures of standard, exponential family distributions, the precision or total mass parameter of the mixing Dirichlet process is a critical hyperparame-ter that strongly influences resulting inferences about numbers of mixture components. This note shows how, with respect to a flexible class of prior distributions for this parameter, the posterior may be represented in a simple conditional form that is easily simulated. As a result, inference about this key quantity may be developed in tandem with the existing, routine Gibbs sampling algorithms for fitting such mixture models. The concept of data augmentation is important, as ever, in developing this extension of the existing algorithm. A final section notes an simple asymptotic approx imation to the posterior. ",
"neighbors": [
1313
],
"mask": "Train"
},
{
"node_id": 1656,
"label": 2,
"text": "Title: Unsupervised Neural Network Learning Procedures For Feature Extraction and Classification \nAbstract: Technical report CNS-TR-95-1 Center for Neural Systems McMaster University ",
"neighbors": [
527,
731,
1710,
1822,
2357
],
"mask": "Train"
},
{
"node_id": 1657,
"label": 2,
"text": "Title: Using Neural Networks to Automatically Refine Expert System Knowledge Bases: Experiments in the NYNEX MAX Domain \nAbstract: In this paper we describe our study of applying knowledge-based neural networks to the problem of diagnosing faults in local telephone loops. Currently, NYNEX uses an expert system called MAX to aid human experts in diagnosing these faults; however, having an effective learning algorithm in place of MAX would allow easy portability between different maintenance centers, and easy updating when the phone equipment changes. We find that (i) machine learning algorithms have better accuracy than MAX, (ii) neural networks perform better than decision trees, (iii) neural network ensembles perform better than standard neural networks, (iv) knowledge-based neural networks perform better than standard neural networks, and (v) an ensemble of knowledge-based neural networks performs the best. ",
"neighbors": [
1307,
1422,
1462,
1595
],
"mask": "Train"
},
{
"node_id": 1658,
"label": 6,
"text": "Title: Learning One-Dimensional Geometric Patterns Under One-Sided Random Misclassification Noise \nAbstract: In this paper we describe our study of applying knowledge-based neural networks to the problem of diagnosing faults in local telephone loops. Currently, NYNEX uses an expert system called MAX to aid human experts in diagnosing these faults; however, having an effective learning algorithm in place of MAX would allow easy portability between different maintenance centers, and easy updating when the phone equipment changes. We find that (i) machine learning algorithms have better accuracy than MAX, (ii) neural networks perform better than decision trees, (iii) neural network ensembles perform better than standard neural networks, (iv) knowledge-based neural networks perform better than standard neural networks, and (v) an ensemble of knowledge-based neural networks performs the best. ",
"neighbors": [
884
],
"mask": "Validation"
},
{
"node_id": 1659,
"label": 2,
"text": "Title: In Unsmearing Visual Motion: Development of Long-Range Horizontal Intrinsic Connections \nAbstract: Human vision systems integrate information nonlocally, across long spatial ranges. For example, a moving stimulus appears smeared when viewed briefly (30 ms), yet sharp when viewed for a longer exposure (100 ms) (Burr, 1980). This suggests that visual systems combine information along a trajectory that matches the motion of the stimulus. Our self-organizing neural network model shows how developmental exposure to moving stimuli can direct the formation of horizontal trajectory-specific motion integration pathways that unsmear representations of moving stimuli. These results account for Burr's data and can potentially also model other phenomena, such as visual inertia.",
"neighbors": [
1093,
1094,
1652
],
"mask": "Validation"
},
{
"node_id": 1660,
"label": 3,
"text": "Title: Explaining \"Explaining Away\" \nAbstract: Explaining away is a common pattern of reasoning in which the confirmation of one cause of an observed or believed event reduces the need to invoke alternative causes. The opposite of explaining away also can occur, in which the confirmation of one cause increases belief in another. We provide a general qualitative probabilistic analysis of intercausal reasoning, and identify the property of the interaction among the causes, product synergy, that determines which form of reasoning is appropriate. Product synergy extends the qualitative probabilistic network (QPN) formalism to support qualitative intercausal inference about the directions of change in probabilistic belief. The intercausal relation also justifies Occam's razor, facilitating pruning in search for likely diagnoses. 0 Portions of this paper originally appeared in Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning [16]. y Supported by the National Science Foundation under grant IRI-8807061 to Carnegie Mellon and by the Rockwell International Science Center. ",
"neighbors": [
952,
1083
],
"mask": "Test"
},
{
"node_id": 1661,
"label": 6,
"text": "Title: Simulating Access to Hidden Information while Learning \nAbstract: We introduce a new technique which enables a learner without access to hidden information to learn nearly as well as a learner with access to hidden information. We apply our technique to solve an open problem of Maass and Turan [18], showing that for any concept class F , the least number of queries sufficient for learning F by an algorithm which has access only to arbitrary equivalence queries is at most a factor of 1= log 2 (4=3) more than the least number of queries sufficient for learning F by an algorithm which has access to both arbitrary equivalence queries and membership queries. Previously known results imply that the 1= log 2 (4=3) in our bound is best possible. We describe analogous results for two generalizations of this model to function learning, and apply those results to bound the difficulty of learning in the harder of these models in terms of the difficulty of learning in the easier model. We bound the difficulty of learning unions of k concepts from a class F in terms of the difficulty of learning F . We bound the difficulty of learning in a noisy environment for deterministic algorithms in terms of the difficulty of learning in a noise-free environment. We apply a variant of our technique to develop an algorithm transformation that allows probabilistic learning algorithms to nearly optimally cope with noise. A second variant enables us to improve a general lower bound of Turan [19] for the PAC-learning model (with queries). Finally, we show that logarithmically many membership queries never help to obtain computationally efficient learning algorithms. fl Supported by Air Force Office of Scientific Research grant F49620-92-J-0515. Most of this work was done while this author was at TU Graz supported by a Lise Meitner Fellowship from the Fonds zur Forderung der wissenschaftlichen Forschung (Austria). ",
"neighbors": [
109,
453,
791,
1358,
1469
],
"mask": "Train"
},
{
"node_id": 1662,
"label": 1,
"text": "Title: Evolution of Homing Navigation in a Real Mobile Robot \nAbstract: In this paper we describe the evolution of a discrete-time recurrent neural network to control a real mobile robot. In all our experiments the evolutionary procedure is carried out entirely on the physical robot without human intervention. We show that the autonomous development of a set of behaviors for locating a battery charger and periodically returning to it can be achieved by lifting constraints in the design of the robot/environment interactions that were employed in a preliminary experiment. The emergent homing behavior is based on the autonomous development of an internal neural topographic map (which is not pre-designed) that allows the robot to choose the appropriate trajectory as function of location and remaining energy. ",
"neighbors": [
1036
],
"mask": "Train"
},
{
"node_id": 1663,
"label": 1,
"text": "Title: Evolution of the Topology and the Weights of Neural Networks using Genetic Programming with a\nAbstract: Genetic programming is a methodology for program development, consisting of a special form of genetic algorithm capable of handling parse trees representing programs, that has been successfully applied to a variety of problems. In this paper a new approach to the construction of neural networks based on genetic programming is presented. A linear chromosome is combined to a graph representation of the network and new operators are introduced, which allow the evolution of the architecture and the weights simultaneously without the need of local weight optimization. This paper describes the approach, the operators and reports results of the application of the model to several binary classification problems. ",
"neighbors": [
1266,
1756,
2504
],
"mask": "Train"
},
{
"node_id": 1664,
"label": 2,
"text": "Title: \"What is the best thing to do right now?\": getting beyond greedy exploration \nAbstract: Genetic programming is a methodology for program development, consisting of a special form of genetic algorithm capable of handling parse trees representing programs, that has been successfully applied to a variety of problems. In this paper a new approach to the construction of neural networks based on genetic programming is presented. A linear chromosome is combined to a graph representation of the network and new operators are introduced, which allow the evolution of the architecture and the weights simultaneously without the need of local weight optimization. This paper describes the approach, the operators and reports results of the application of the model to several binary classification problems. ",
"neighbors": [
740,
804,
1559,
1697
],
"mask": "Train"
},
{
"node_id": 1665,
"label": 0,
"text": "Title: Conceptual Analogy: Conceptual clustering for informed and efficient analogical reasoning \nAbstract: Conceptual analogy (CA) is a general approach that applies conceptual clustering and concept representations to facilitate the efficient use of past experiences (cases) during analogical reasoning (Borner 1995). The approach was developed and implemented in SYN* (see also (Borner 1994, Borner and Faauer 1995)) to support the design of supply nets in building engineering. This paper sketches the task; it outlines the nearest-neighbor-based agglomerative conceptual clustering applied in organizing large amounts of structured cases into case classes; it provides the concept representation used to characterize case classes and shows the analogous solution of new problems based on the concepts available. However, the main purpose of this paper is to evaluate CA in terms of its reasoning efficiency; its capability to derive solutions that go beyond the cases in the case base but still preserve the quality of cases.",
"neighbors": [
539,
883
],
"mask": "Train"
},
{
"node_id": 1666,
"label": 3,
"text": "Title: Efficient Non-parametric Estimation of Probability Density Functions \nAbstract: Accurate and fast estimation of probability density functions is crucial for satisfactory computational performance in many scientific problems. When the type of density is known a priori, then the problem becomes statistical estimation of parameters from the observed values. In the non-parametric case, usual estimators make use of kernel functions. If X j ; j = 1; 2; : : : ; n is a sequence of i.i.d. random variables with estimated probability density function f n , in the kernel method the computation of the values f n (X 1 ); f n (X 2 ); : : : ; f n (X n ) requires O(n 2 ) operations, since each kernel needs to be evaluated at every X j . We propose a sequence of special weight functions for the non-parametric estimation of f which requires almost linear time: if m is a slowly growing function that increases without bound with n, our method requires only O(m 2 n) arithmetic operations. We derive conditions for convergence under a number of metrics, which turn out to be similar to those required for the convergence of kernel based methods. We also discuss experiments on different distributions and compare the efficiency and the accuracy of our computations with kernel based estimators for various values of n and m. ",
"neighbors": [
719,
1133
],
"mask": "Train"
},
{
"node_id": 1667,
"label": 2,
"text": "Title: Advances in Neural Information Processing Systems 8 Active Learning in Multilayer Perceptrons \nAbstract: We propose an active learning method with hidden-unit reduction, which is devised specially for multilayer perceptrons (MLP). First, we review our active learning method, and point out that many Fisher-information-based methods applied to MLP have a critical problem: the information matrix may be singular. To solve this problem, we derive the singularity condition of an information matrix, and propose an active learning technique that is applicable to ",
"neighbors": [
740,
1697
],
"mask": "Test"
},
{
"node_id": 1668,
"label": 2,
"text": "Title: Space-Frequency Localized Basis Function Networks for Nonlinear System Estimation and Control \nAbstract: Stable neural network control and estimation may be viewed formally as a merging of concepts from nonlinear dynamic systems theory with tools from multivariate approximation theory. This paper extends earlier results on adaptive control and estimation of nonlinear systems using gaussian radial basis functions to the on-line generation of irregularly sampled networks, using tools from multiresolution analysis and wavelet theory. This yields much more compact and efficient system representations while preserving global closed-loop stability. Approximation models employing basis functions that are localized in both space and spatial frequency admit a measure of the approximated function's spatial frequency content that is not directly dependent on reconstruction error. As a result, these models afford a means of adaptively selecting basis functions according to the local spatial frequency content of the approximated function. An algorithm for stable, on-line adaptation of output weights simultaneously with node configuration in a class of non-parametric models with wavelet basis functions is presented. An asymptotic bound on the error in the network's reconstruction is derived and shown to be dependent solely on the minimum approximation error associated with the steady state node configuration. In addition, prior bounds on the temporal bandwidth of the system to be identified or controlled are used to develop a criterion for on-line selection of radial and ridge wavelet basis functions, thus reducing the rate of increase in network's size with the dimension of the state vector. Experimental results obtained by using the network to predict the path of an unknown light bluff object thrown through air, in an active-vision based robotic catching system, are given to illustrate the network's performance in a simple real-time application. ",
"neighbors": [
611,
1488,
1910,
2378,
2535
],
"mask": "Validation"
},
{
"node_id": 1669,
"label": 6,
"text": "Title: Bias and the Quantification of Stability Bias and the Quantification of Stability Bias and the\nAbstract: Research on bias in machine learning algorithms has generally been concerned with the impact of bias on predictive accuracy. We believe that there are other factors that should also play a role in the evaluation of bias. One such factor is the stability of the algorithm; in other words, the repeatability of the results. If we obtain two sets of data from the same phenomenon, with the same underlying probability distribution, then we would like our learning algorithm to induce approximately the same concepts from both sets of data. This paper introduces a method for quantifying stability, based on a measure of the agreement between concepts. We also discuss the relationships among stability, predictive accuracy, and bias. ",
"neighbors": [
1236
],
"mask": "Train"
},
{
"node_id": 1670,
"label": 1,
"text": "Title: Raising GA Performance by Simultaneous Tuning of Selective Pressure and Recombination Disruptiveness \nAbstract: In many Genetic Algorithms applications the objective is to find a (near-)optimal solution using a limited amount of computation. Given these requirements it is difficult to find a good balance between exploration and exploitation. Usually such a balance is found by tuning the various parameters (like the selective pressure, population size, the mutation- and crossover rate) of the Genetic Algorithm. As an alternative we propose simultaneous tuning of the selective pressure and the disruptiveness of the recombination operators. Our experiments show that the combination of a proper selective pressure and a highly disruptive recombination operator yields superior performance. The reduction mechanism used in a Steady-State GA has a strong influence on the optimal crossover disruptiveness. Using the worst fitness deletion strategy the building blocks present in the current best individuals are always preserved. This releases the crossover operator from the burden to maintain good building blocks and allows us to tune crossover disruptiveness to improve the search for better individuals.",
"neighbors": [
1016,
1218
],
"mask": "Validation"
},
{
"node_id": 1671,
"label": 6,
"text": "Title: From: Computational Learning Theory and Natural Systems, Chapter 18, \"Cross-validation and Modal Theories\", Cross-Validation and\nAbstract: Cross-validation is a frequently used, intuitively pleasing technique for estimating the accuracy of theories learned by machine learning algorithms. During testing of a machine learning algorithm (foil) on new databases of prokaryotic RNA transcription promoters which we have developed, cross-validation displayed an interesting phenomenon. One theory is found repeatedly and is responsible for very little of the cross-validation error, whereas other theories are found very infrequently which tend to be responsible for the majority of the cross-validation error. It is tempting to believe that the most frequently found theory (the \"modal theory\") may be more accurate as a classifier of unseen data than the other theories. However, experiments showed that modal theories are not more accurate on unseen data than the other theories found less frequently during cross-validation. Modal theories may be useful in predicting when cross-validation is a poor estimate of true accuracy. We offer explanations 1 For correspondence: Department of Computer Science and Engineering, University of California, San ",
"neighbors": [
344,
1512
],
"mask": "Test"
},
{
"node_id": 1672,
"label": 2,
"text": "Title: Learning Controllers for Industrial Robots \nAbstract: One of the most significant cost factors in robotics applications is the design and development of real-time robot control software. Control theory helps when linear controllers have to be developed, but it doesn't sufficiently support the generation of non-linear controllers, although in many cases (such as in compliance control), nonlinear control is essential for achieving high performance. This paper discusses how Machine Learning has been applied to the design of (non-)linear controllers. Several alternative function approximators, including Multilayer Perceptrons (MLP), Radial Basis Function Networks (RBFNs), and Fuzzy Controllers are analyzed and compared, leading to the definition of two major families: Open Field Function Function Approximators and Locally Receptive Field Function Approximators. It is shown that RBFNs and Fuzzy Controllers bear strong similarities, and that both have a symbolic interpretation. This characteristics allows for applying both symbolic and statistic learning algorithms to synthesize the network layout from a set of examples and, possibly, some background knowledge. Three integrated learning algorithms, two of which are original, are described and evaluated on experimental test cases. The first test case is provided by a robot KUKA IR-361 engaged into the \"peg-into-hole\" task, whereas the second is represented by a classical prediction task on the Mackey-Glass time series. From the experimental comparison, it appears that both Fuzzy Controllers and RBFNs synthesised from examples are excellent approximators, and that, in practice, they can be even more accurate than MLPs. ",
"neighbors": [
294,
696,
807,
899,
903,
1352,
1564
],
"mask": "Test"
},
{
"node_id": 1673,
"label": 1,
"text": "Title: EVOLVING ROBOT BEHAVIORS \nAbstract: This paper discusses the use of evolutionary computation to evolve behaviors that exhibit emergent intelligent behavior. Genetic algorithms are used to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the actual robot. Some of the emergent behavior is described in detail. ",
"neighbors": [
910,
964,
965,
966,
1573
],
"mask": "Train"
},
{
"node_id": 1674,
"label": 0,
"text": "Title: Proceedings of CogSci89 Structural Evaluation of Analogies: What Counts? \nAbstract: Judgments of similarity and soundness are important aspects of human analogical processing. This paper explores how these judgments can be modeled using SME, a simulation of Gentner's structure-mapping theory. We focus on structural evaluation, explicating several principles which psychologically plausible algorithms should follow. We introduce the Specificity Conjecture, which claims that naturalistic representations include a preponderance of appearance and low-order information. We demonstrate via computational experiments that this conjecture affects how structural evaluation should be performed, including the choice of normalization technique and how the systematicity preference is implemented. ",
"neighbors": [
1123,
1354,
1680
],
"mask": "Train"
},
{
"node_id": 1675,
"label": 1,
"text": "Title: A Study of Genetic Algorithms to Find Approximate Solutions to Hard 3CNF Problems \nAbstract: Genetic algorithms have been used to solve hard optimization problems ranging from the Travelling Salesman problem to the Quadratic Assignment problem. We show that the Simple Genetic Algorithm can be used to solve an optimization problem derived from the 3-Conjunctive Normal Form problem. By separating the populations into small sub-populations, parallel genetic algorithms exploits the inherent parallelism in genetic algorithms and prevents premature convergence. Genetic algorithms using hill-climbing conduct genetic search in the space of local optima, and hill-climbing can be less com-putationally expensive than genetic search. We examine the effectiveness of these techniques in improving the quality of solutions of 3CNF problems. ",
"neighbors": [
163,
1153
],
"mask": "Train"
},
{
"node_id": 1676,
"label": 2,
"text": "Title: TD Learning of Game Evaluation Functions with Hierarchical Neural Architectures \nAbstract: Genetic algorithms have been used to solve hard optimization problems ranging from the Travelling Salesman problem to the Quadratic Assignment problem. We show that the Simple Genetic Algorithm can be used to solve an optimization problem derived from the 3-Conjunctive Normal Form problem. By separating the populations into small sub-populations, parallel genetic algorithms exploits the inherent parallelism in genetic algorithms and prevents premature convergence. Genetic algorithms using hill-climbing conduct genetic search in the space of local optima, and hill-climbing can be less com-putationally expensive than genetic search. We examine the effectiveness of these techniques in improving the quality of solutions of 3CNF problems. ",
"neighbors": [
74,
207,
523,
565,
1146
],
"mask": "Train"
},
{
"node_id": 1677,
"label": 2,
"text": "Title: ICSIM: An Object Oriented Simulation Environment for Structured Connectionist Nets. Class Project Report, Physics 250 \nAbstract: ICSIM is a simulator for structured connectionism under development at ICSI. Structured connectionism is characterized by the need for flexibility, efficiency and support for the design and reuse of modular substructure. We take the position that a fast object-oriented language like Sather [5] is an appropriate implementation medium to achieve these goals. The core of ICSIM consists of a hierarchy of classes that correspond to simulation entities. New connectionist models are realized by combining and specializing pre-existing classes. Whenever possible, auxillary functionality has been separated out into functional modules in order to keep the basic hierarchy as clean and simple as possible. ",
"neighbors": [
1120
],
"mask": "Train"
},
{
"node_id": 1678,
"label": 6,
"text": "Title: Induction of One-Level Decision Trees \nAbstract: In recent years, researchers have made considerable progress on the worst-case analysis of inductive learning tasks, but for theoretical results to have impact on practice, they must deal with the average case. In this paper we present an average-case analysis of a simple algorithm that induces one-level decision trees for concepts defined by a single relevant attribute. Given knowledge about the number of training instances, the number of irrelevant attributes, the amount of class and attribute noise, and the class and attribute distributions, we derive the expected classification accuracy over the entire instance space. We then examine the predictions of this analysis for different settings of these domain parameters, comparing them to exper imental results to check our reasoning. ",
"neighbors": [
378,
861,
1339,
1570
],
"mask": "Validation"
},
{
"node_id": 1679,
"label": 5,
"text": "Title: Machine learning in prognosis of the femoral neck fracture recovery examples, estimating attributes, explanation ability,\nAbstract: We compare the performance of several machine learning algorithms in the problem of prognos-tics of the femoral neck fracture recovery: the K-nearest neighbours algorithm, the semi-naive Bayesian classifier, backpropagation with weight elimination learning of the multilayered neural networks, the LFC (lookahead feature construction) algorithm, and the Assistant-I and Assistant-R algorithms for top down induction of decision trees using information gain and RELIEFF as search heuristics, respectively. We compare the prognostic accuracy and the explanation ability of different classifiers. Among the different algorithms the semi-naive Bayesian classifier and Assistant-R seem to be the most appropriate. We analyze the combination of decisions of several classifiers for solving prediction problems and show that the combined classifier improves both performance and the explanation ability. ",
"neighbors": [
627,
1569,
1726
],
"mask": "Train"
},
{
"node_id": 1680,
"label": 0,
"text": "Title: Making SME greedy and pragmatic \nAbstract: The Structure-Mapping Engine (SME) has successfully modeled several aspects of human consistent interpretations of an analogy. While useful for theoretical explorations, this aspect of the algorithm is both psychologically implausible and computationally inefficient. (2) SME contains no mechanism for focusing on interpretations relevant to an analogizer's goals. This paper describes modifications to SME which overcome these flaws. We describe a greedy merge algorithm which efficiently computes an approximate \"best\" interpretation, and can generate alternate interpretations when necessary. We describe pragmatic marking, a technique which focuses the mapping to produce relevant, yet novel, inferences. We illustrate these techniques via example and evaluate their performance using empirical data and theoretical analysis. analogical processing. However, it has two significant drawbacks: (1) SME constructs all structurally",
"neighbors": [
1123,
1354,
1674
],
"mask": "Train"
},
{
"node_id": 1681,
"label": 3,
"text": "Title: Understanding WaveShrink: Variance and Bias Estimation \nAbstract: Research Report ",
"neighbors": [
1682
],
"mask": "Train"
},
{
"node_id": 1682,
"label": 3,
"text": "Title: WaveShrink: Shrinkage Functions and Thresholds \nAbstract: Donoho and Johnstone's WaveShrink procedure has proven valuable for signal de-noising and non-parametric regression. WaveShrink is based on the principle of shrinking wavelet coefficients towards zero to remove noise. WaveShrink has very broad asymptotic near-optimality properties. In this paper, we introduce a new shrinkage scheme, semisoft, which generalizes hard and soft shrinkage. We study the properties of the shrinkage functions, and demonstrate that semisoft shrinkage offers advantages over both hard shrinkage (uniformly smaller risk and less sensitivity to small perturbations in the data) and soft shrinkage (smaller bias and overall L 2 risk). We also construct approximate pointwise confidence intervals for WaveShrink and address the problem of threshold selection. ",
"neighbors": [
1681
],
"mask": "Train"
},
{
"node_id": 1683,
"label": 6,
"text": "Title: An Algorithm for Active Data Collection for Learning Feasibility Study with Neural Networks. \nAbstract: Macquarie University Technical Report No. 95-173C Department of Computing School of MPCE, Macquarie University, New South Wales, Australia ",
"neighbors": [
740,
1198,
1559,
1697
],
"mask": "Train"
},
{
"node_id": 1684,
"label": 5,
"text": "Title: Context-sensitive attribute estimation in regression \nAbstract: One of key issues in both discrete and continuous class prediction and in machine learning in general seems to be the problem of estimating the quality of attributes. Heuristic measures mostly assume independence of attributes so their use is non-optimal in domains with strong dependencies between attributes. For the same reason they are also mostly unable to recognize context dependent features. Relief and its extension Re-liefF are statistical methods capable of correctly estimating the quality of attributes in classification problems with strong dependencies between attributes. By exploiting local information provided by different contexts they provide a global view and recognize contextual attributes. After the analysis of ReliefF we have extended it to continuous class problems. Regressional ReliefF (RReliefF) and ReliefF provide a unified view on estimating attribute quality. The experiments show that RReliefF correctly estimates the quality of attributes, recognizes the contextual attributes and can be used for non myopic learning of the regression trees.",
"neighbors": [
314,
1182,
1569,
1636,
1647,
1726
],
"mask": "Test"
},
{
"node_id": 1685,
"label": 1,
"text": "Title: Optimization by Means of Genetic Algorithms \nAbstract: Genetic Algorithms (GAs) are powerful heuristic search strategies based upon a simple model of organic evolution. The basic working scheme of GAs as developed by Holland [Hol75] is described within this paper in a formal way, and extensions based upon the second-level learning principle for strategy parameters as introduced in Evolution Strategies (ESs) are proposed. First experimental results concerning this extension of GAs are also reported.",
"neighbors": [
163,
422,
1069,
1455
],
"mask": "Validation"
},
{
"node_id": 1686,
"label": 5,
"text": "Title: Learning with Abduction \nAbstract: We investigate how abduction and induction can be integrated into a common learning framework through the notion of Abductive Concept Learning (ACL). ACL is an extension of Inductive Logic Programming (ILP) to the case in which both the background and the target theory are abductive logic programs and where an abductive notion of entailment is used as the coverage relation. In this framework, it is then possible to learn with incomplete information about the examples by exploiting the hypothetical reasoning of abduction. The paper presents the basic framework of ACL with its main characteristics and illustrates its potential in addressing several problems in ILP such as learning with incomplete information and multiple predicate learning. An algorithm for ACL is developed by suitably extending the top-down ILP method for concept learning and integrating this with an abductive proof procedure for Abductive Logic Programming (ALP). A prototype system has been developed and applied to learning problems with incomplete information. The particular role of integrity constraints in ACL is investigated showing ACL as a hybrid learning framework that integrates the explanatory (discriminant) and descriptive (characteristic) settings of ILP.",
"neighbors": [
837,
2282,
2426
],
"mask": "Train"
},
{
"node_id": 1687,
"label": 4,
"text": "Title: Markov games as a framework for multi-agent reinforcement learning \nAbstract: In the Markov decision process (MDP) formalization of reinforcement learning, a single adaptive agent interacts with an environment defined by a probabilistic transition function. In this solipsistic view, secondary agents can only be part of the environment and are therefore fixed in their behavior. The framework of Markov games allows us to widen this view to include multiple adaptive agents with interacting or competing goals. This paper considers a step in this direction in which exactly two agents with diametrically opposed goals share an environment. It describes a Q-learning-like algorithm for finding optimal policies and demonstrates its application to a simple two-player game in which the optimal policy is probabilistic.",
"neighbors": [
54,
148,
523,
773,
870,
898,
1137,
1189,
1228,
1459,
1557,
1632,
1649
],
"mask": "Train"
},
{
"node_id": 1688,
"label": 1,
"text": "Title: Co-Evolving Soccer Softbot Team Coordination with Genetic Programming \nAbstract: Genetic Programming is a promising new method for automatically generating functions and algorithms through natural selection. In contrast to other learning methods, Genetic Programming's automatic programming makes it a natural approach for developing algorithmic robot behaviors. In this paper we present an overview of how we apply Genetic Programming to behavior-based team coordination in the RoboCup Soccer Server domain. The result is not just a hand-coded soccer algorithm, but a team of softbots which have learned on their own how to play a reasonable game of soccer.",
"neighbors": [
1178,
1228
],
"mask": "Validation"
},
{
"node_id": 1689,
"label": 1,
"text": "Title: Selection for Wandering Behavior in a Small Robot \nAbstract: We have evolved artificial neural networks to control the wandering behavior of small robots. The task was to touch as many squares in a grid as possible during a fixed period of time. A number of the simulated robots were embodied in small Lego (Trademark) robot, controlled by a Motorola (Trademark) 6811 processor; and their performance was compared to the simulations. We observed that: (a) evolution was an effective means to program control; (b) progress was characterized by sharply stepped periods of improvement, separated by periods of stasis that corresponded to levels of behavioral/computational complexity; and (c) the simulated and realized robots behaved quite similarly, the realized robots in some cases outperforming the simulated ones. Introducing random noise to the simulations improved the fit somewhat (from 0.73 to 0.79). Hybrid simulated/embodied selection regimes for evolutionary robots are discussed. ",
"neighbors": [
38,
163,
219,
538,
1738
],
"mask": "Train"
},
{
"node_id": 1690,
"label": 1,
"text": "Title: Evolving Behavioral Strategies in Predators and Prey \nAbstract: The predator/prey domain is utilized to conduct research in Distributed Artificial Intelligence. Genetic Programming is used to evolve behavioral strategies for the predator agents. To further the utility of the predator strategies, the prey population is allowed to evolve at the same time. The expected competitive learning cycle did not surface. This failing is investigated, and a simple prey algorithm surfaces, which is consistently able to evade capture from the predator algorithms.",
"neighbors": [
995,
1178,
1736
],
"mask": "Test"
},
{
"node_id": 1691,
"label": 1,
"text": "Title: Evolutionary Algorithms: Some Very Old Strategies for Optimization and Adaptation \nAbstract: Genetic Algorithms and Evolution Strategies, the main representatives of a class of algorithms based on the model of natural evolution, are discussed w.r.t. their basic working mechanisms, differences, and application possibilities. The mechanism of self-adaptation of strategy parameters within Evolution Strategies is emphasized and turns out to be the major difference to Genetic Algorithms, since it allows for an on-line adaptation of strategy parameters without exogenous control.",
"neighbors": [
163,
422,
793
],
"mask": "Validation"
},
{
"node_id": 1692,
"label": 6,
"text": "Title: Boosting Trees for Cost-Sensitive Classifications \nAbstract: This paper explores two boosting techniques for cost-sensitive tree classification in the situation where misclassification costs change very often. Ideally, one would like to have only one induction, and use the induced model for different misclassification costs. Thus, it demands robustness of the induced model against cost changes. Combining multiple trees gives robust predictions against this change. We demonstrate that ordinary boosting combined with the minimum expected cost criterion to select the prediction class is a good solution under this situation. We also introduce a variant of the ordinary boosting procedure which utilizes the cost information during training. We show that the proposed technique performs better than the ordinary boosting in terms of misclassification cost. However, this technique requires to induce a set of new trees every time the cost changes. Our empirical investigation also reveals some interesting behavior of boosting decision trees for cost-sensitive classification. ",
"neighbors": [
70,
228,
1484
],
"mask": "Train"
},
{
"node_id": 1693,
"label": 4,
"text": "Title: INCREMENTAL SELF-IMPROVEMENT FOR LIFE-TIME MULTI-AGENT REINFORCEMENT LEARNING \nAbstract: Previous approaches to multi-agent reinforcement learning are either very limited or heuristic by nature. The main reason is: each agent's or \"animat's\" environment continually changes because the other learning animats keep changing. Traditional reinforcement learning algorithms cannot properly deal with this. Their convergence theorems require repeatable trials and strong (typically Markovian) assumptions about the environment. In this paper, however, we use a novel, general, sound method for multiple, reinforcement learning \"animats\", each living a single life with limited computational resources in an unrestricted, changing environment. The method is called \"incremental self-improvement\" (IS | Schmidhuber, 1994). IS properly takes into account that whatever some animat learns at some point may affect learning conditions for other animats or for itself at any later point. The learning algorithm of an IS-based animat is embedded in its own policy | the animat cannot only improve its performance, but in principle also improve the way it improves etc. At certain times in the animat's life, IS uses reinforcement/time ratios to estimate from a single training example (namely the entire life so far) which previously learned things are still useful, and selectively keeps them but gets rid of those that start appearing harmful. IS is based on an efficient, stack-based backtracking procedure which is guaranteed to make each animat's learning history a history of long-term reinforcement accelerations. Experiments demonstrate IS' effectiveness. In one experiment, IS learns a sequence of more and more complex function approximation problems. In another, a multi-agent system consisting of three co-evolving, IS-based animats chasing each other learns interesting, stochastic predator and prey strategies. ",
"neighbors": [
1228
],
"mask": "Validation"
},
{
"node_id": 1694,
"label": 1,
"text": "Title: Strategy Adaptation by Competing Subpopulations \nAbstract: The breeder genetic algorithm BGA depends on a set of control parameters and genetic operators. In this paper it is shown that strategy adaptation by competing subpopulations makes the BGA more robust and more efficient. Each subpopulation uses a different strategy which competes with other subpopulations. Numerical results are pre sented for a number of test functions.",
"neighbors": [
793,
1455
],
"mask": "Train"
},
{
"node_id": 1695,
"label": 0,
"text": "Title: Analogical Problem Solving by Adaptation of Schemes \nAbstract: We present a computational approach to the acquisition of problem schemes by learning by doing and to their application in analogical problem solving. Our work has its background in automatic program construction and relies on the concept of recursive program schemes. In contrast to the usual approach to cognitive modelling where computational models are designed to fit specific data we propose a framework to describe certain empirically established characteristics of human problem solving and learning in a uniform and formally sound way.",
"neighbors": [
1354
],
"mask": "Train"
},
{
"node_id": 1696,
"label": 1,
"text": "Title: The Royal Road for Genetic Algorithms: Fitness Landscapes and GA Performance \nAbstract: Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class (\"Royal Road\" functions), and present some initial experimental results concerning the role of crossover and \"building blocks\" on landscapes constructed from features of this class.",
"neighbors": [
163,
1114,
1334,
1769,
1771,
1872,
1943,
1971,
2175,
2250,
2330
],
"mask": "Train"
},
{
"node_id": 1697,
"label": 2,
"text": "Title: Neural Network Exploration Using Optimal Experiment Design \nAbstract: We consider the question \"How should one act when the only goal is to learn as much as possible?\" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action has much to offer, especially in domains where its high computational costs can be tolerated. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. The author was also funded by ATR Human Information Processing Laboratories, Siemens Corporate Research and NSF grant CDA-9309300. ",
"neighbors": [
16,
804,
929,
1559,
1599,
1664,
1667,
1683,
1703
],
"mask": "Train"
},
{
"node_id": 1698,
"label": 0,
"text": "Title: CBET: a Case Base Exploration Tool \nAbstract: We consider the question \"How should one act when the only goal is to learn as much as possible?\" Building on the theoretical results of Fedorov [1972] and MacKay [1992], we apply techniques from Optimal Experiment Design (OED) to guide the query/action selection of a neural network learner. We demonstrate that these techniques allow the learner to minimize its generalization error by exploring its domain efficiently and completely. We conclude that, while not a panacea, OED-based query/action has much to offer, especially in domains where its high computational costs can be tolerated. This report describes research done at the Center for Biological and Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Center is provided in part by a grant from the National Science Foundation under contract ASC-9217041. The author was also funded by ATR Human Information Processing Laboratories, Siemens Corporate Research and NSF grant CDA-9309300. ",
"neighbors": [
66,
430,
686,
1626,
2597
],
"mask": "Train"
},
{
"node_id": 1699,
"label": 0,
"text": "Title: On the Usefulness of Re-using Diagnostic Solutions \nAbstract: Recent studies on planning, comparing plan re-use and plan generation, have shown that both the above tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of results apply also for diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. Results of such analysis show that, even if diagnosis re-use falls into the same complexity class of diagnosis generation (they are both NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework. ",
"neighbors": [
799,
1215,
2656
],
"mask": "Test"
},
{
"node_id": 1700,
"label": 2,
"text": "Title: Growing a Hypercubical Output Space in a Self-Organizing Feature Map \nAbstract: Recent studies on planning, comparing plan re-use and plan generation, have shown that both the above tasks may have the same degree of computational complexity, even if we deal with very similar problems. The aim of this paper is to show that the same kind of results apply also for diagnosis. We propose a theoretical complexity analysis coupled with some experimental tests, intended to evaluate the adequacy of adaptation strategies which re-use the solutions of past diagnostic problems in order to build a solution to the problem to be solved. Results of such analysis show that, even if diagnosis re-use falls into the same complexity class of diagnosis generation (they are both NP-complete problems), practical advantages can be obtained by exploiting a hybrid architecture combining case-based and model-based diagnostic problem solving in a unifying framework. ",
"neighbors": [
687,
745,
1157
],
"mask": "Validation"
},
{
"node_id": 1701,
"label": 2,
"text": "Title: Pattern analysis and synthesis in attractor neural networks \nAbstract: The representation of hidden variable models by attractor neural networks is studied. Memories are stored in a dynamical attractor that is a continuous manifold of fixed points, as illustrated by linear and nonlinear networks with hidden neurons. Pattern analysis and synthesis are forms of pattern completion by recall of a stored memory. Analysis and synthesis in the linear network are performed by bottom-up and top-down connections. In the nonlinear network, the analysis computation additionally requires rectification nonlinearity and inner product inhibition between hidden neurons. One popular approach to sensory processing is based on generative models, which assume that sensory input patterns are synthesized from some underlying hidden variables. For example, the sounds of speech can be synthesized from a sequence of phonemes, and images of a face can be synthesized from pose and lighting variables. Hidden variables are useful because they constitute a simpler representation of the variables that are visible in the sensory input. Using a generative model for sensory processing requires a method of pattern analysis. Given a sensory input pattern, analysis is the recovery of the hidden variables from which it was synthesized. In other words, analysis and synthesis are inverses of each other. There are a number of approaches to pattern analysis. In analysis-by-synthesis, the synthetic model is embedded inside a negative feedback loop[1]. Another approach is to construct a separate analysis model[2]. This paper explores a third approach, in which visible-hidden pairs are embedded as attractive fixed points, or attractors, in the state space of a recurrent neural network. The attractors can be regarded as memories stored in the network, and analysis and synthesis as forms of pattern completion by recall of a memory. The approach is illustrated with linear and nonlinear network architectures. In both networks, the synthetic model is linear, as in principal ",
"neighbors": [
33,
36,
832,
1591
],
"mask": "Test"
},
{
"node_id": 1702,
"label": 6,
"text": "Title: Decision Graphs An Extension of Decision Trees \nAbstract: Technical Report No: 92/173 (C) Jonathan Oliver 1992 Shortened appeared in AI and Statistics 1993[14] Abstract: In this paper, we examine Decision Graphs, a generalization of decision trees. We present an inference scheme to construct decision graphs using the Minimum Message Length Principle. Empirical tests demonstrate that this scheme compares favourably with other decision tree inference schemes. This work provides a metric for comparing the relative merit of the decision tree and decision graph formalisms for a particular domain. ",
"neighbors": [
1161,
1199
],
"mask": "Train"
},
{
"node_id": 1703,
"label": 4,
"text": "Title: REINFORCEMENT DRIVEN INFORMATION ACQUISITION IN NON-DETERMINISTIC ENVIRONMENTS \nAbstract: For an agent living in a non-deterministic Markov environment (NME), what is, in theory, the fastest way of acquiring information about its statistical properties? The answer is: to design \"optimal\" sequences of \"experiments\" by performing action sequences that maximize expected information gain. This notion is implemented by combining concepts from information theory and reinforcement learning. Experiments show that the resulting method, reinforcement driven information acquisition, can explore certain NMEs much faster than conventional random exploration. ",
"neighbors": [
740,
1559,
1697
],
"mask": "Train"
},
{
"node_id": 1704,
"label": 2,
"text": "Title: LBG-U method for vector quantization an improvement over LBG inspired from neural networks \nAbstract: Internal Report 97-01 ",
"neighbors": [
687,
745,
1157
],
"mask": "Train"
},
{
"node_id": 1705,
"label": 6,
"text": "Title: Learning from Incomplete Boundary Queries Using Split Graphs and Hypergraphs (Extended Abstract) \nAbstract: We consider learnability with membership queries in the presence of incomplete information. In the incomplete boundary query model introduced by Blum et al. [7], it is assumed that membership queries on instances near the boundary of the target concept may receive a \"don't know\" answer. We show that zero-one threshold functions are efficiently learnable in this model. The learning algorithm uses split graphs when the boundary region has radius 1, and their generalization to split hypergraphs (for which we give a split-finding algorithm) when the boundary region has constant radius greater than 1. We use a notion of indistinguishability of concepts that is appropriate for this model.",
"neighbors": [
459,
1364,
1469,
2356
],
"mask": "Test"
},
{
"node_id": 1706,
"label": 0,
"text": "Title: A Performance Model for Knowledge-based Systems \nAbstract: Most techniques for verification and validation are directed at functional properties of programs. However, other properties of programs are also essential. This paper describes a model for the average computing time of a KADS knowledge-based system based on its structure. An example taken from an existing knowledge-based system is used to demonstrate the use of the cost-model in designing the system. ",
"neighbors": [
799,
1385,
1635
],
"mask": "Validation"
},
{
"node_id": 1707,
"label": 0,
"text": "Title: Supporting Combined Human and Machine Planning: The Prodigy 4.0 User Interface Version 2.0* \nAbstract: Realistic and complex planning situations require a mixed-initiative planning framework in which human and automated planners interact to mutually construct a desired plan. Ideally, this joint cooperation has the potential of achieving better plans than either the human or the machine can create alone. Human planners often take a case-based approach to planning, relying on their past experience and planning by retrieving and adapting past planning cases. Planning by analogical reasoning in which generative and case-based planning are combined, as in Prodigy/Analogy, provides a suitable framework to study this mixed-initiative integration. However, having a human user engaged in this planning loop creates a variety of new research questions. The challenges we found creating a mixed-initiative planning system fall into three categories: planning paradigms differ in human and machine planning; visualization of the plan and planning process is a complex, but necessary task; and human users range across a spectrum of experience, both with respect to the planning domain and the underlying planning technology. This paper presents our approach to these three problems when designing an interface to incorporate a human into the process of planning by analogical reasoning with Prodigy/Analogy. The interface allows the user to follow both generative and case-based planning, it supports visualization of both plan and the planning rationale, and it addresses the variance in the experience of the user by allowing the user to control the presentation of information. * This research is sponsored as part of the DARPA/RL Knowledge Based Planning and Scheduling Initiative under grant number F30602-95-1-0018. A short version of this document appeared as Cox, M. T., & Veloso, M. M. (1997). Supporting combined human and machine planning: An interface for planning by analogical reasoning. In D. B. Leake & E. Plaza (Eds.), Case-Based Reasoning Research and Development: Second International Conference on Case-Based Reasoning (pp. 531-540). Berlin: Springer-Verlag. ",
"neighbors": [
824,
825,
1215
],
"mask": "Test"
},
{
"node_id": 1708,
"label": 6,
"text": "Title: A Simpler Look at Consistency \nAbstract: One of the major goals of most early concept learners was to find hypotheses that were perfectly consistent with the training data. It was believed that this goal would indirectly achieve a high degree of predictive accuracy on a set of test data. Later research has partially disproved this belief. However, the issue of consistency has not yet been resolved completely. We examine the issue of consistency from a new perspective. To avoid overfitting the training data, a considerable number of current systems have sacrificed the goal of learning hypotheses that are perfectly consistent with the training instances by setting a goal of hypothesis simplicity (Occam's razor). Instead of using simplicity as a goal, we have developed a novel approach that addresses consistency directly. In other words, our concept learner has the explicit goal of selecting the most appropriate degree of consistency with the training data. We begin this paper by exploring concept learning with less than perfect consistency. Next, we describe a system that can adapt its degree of consistency in response to feedback about predictive accuracy on test data. Finally, we present the results of initial experiments that begin to address the question of how tightly hypotheses should fit the training data for different problems. ",
"neighbors": [
1333
],
"mask": "Train"
},
{
"node_id": 1709,
"label": 2,
"text": "Title: NEURAL NETWORK APPROACH TO BLIND SEPARATION AND ENHANCEMENT OF IMAGES \nAbstract: In this contribution we propose a new solution for the problem of blind separation of sources (for one dimensional signals and images) in the case that not only the waveform of sources is unknown, but also their number. For this purpose multi-layer neural networks with associated adaptive learning algorithms are developed. The primary source signals can have any non-Gaussian distribution, i.e. they can be sub-Gaussian and/or super-Gaussian. Computer experiments are presented which demonstrate the validity and high performance of the proposed approach. ",
"neighbors": [
59,
570,
839,
1520
],
"mask": "Train"
},
{
"node_id": 1710,
"label": 2,
"text": "Title: A Non-linear Information Maximisation Algorithm that Performs Blind Separation. \nAbstract: A new learning algorithm is derived which performs online stochastic gradient ascent in the mutual information between outputs and inputs of a network. In the absence of a priori knowledge about the `signal' and `noise' components of the input, propagation of information depends on calibrating network non-linearities to the detailed higher-order moments of the input density functions. By incidentally minimising mutual information between outputs, as well as maximising their individual entropies, the network `fac-torises' the input into independent components. As an example application, we have achieved near-perfect separation of ten digitally mixed speech signals. Our simulations lead us to believe that our network performs better at blind separation than the Herault-Jutten network, reflecting the fact that it is derived rigorously from the mutual information objective. ",
"neighbors": [
576,
1450,
1656
],
"mask": "Train"
},
{
"node_id": 1711,
"label": 4,
"text": "Title: Environments with Classifier Systems (Experiments on Adding Memory to XCS) \nAbstract: Pier Luca Lanzi Technical Report N. 97.45 October 17 th , 1997 ",
"neighbors": [
1515,
1581
],
"mask": "Validation"
},
{
"node_id": 1712,
"label": 6,
"text": "Title: An Efficient Extension to Mixture Techniques for Prediction and Decision Trees \nAbstract: We present a method for maintaining mixtures of prunings of a prediction or decision tree that extends the \"node-based\" prunings of [Bun90, WST95, HS97] to the larger class of edge-based prunings. The method includes an efficient online weight allocation algorithm that can be used for prediction, compression and classification. Although the set of edge-based prunings of a given tree is much larger than that of node-based prunings, our algorithm has similar space and time complexity to that of previous mixture algorithms for trees. Using the general on-line framework of Freund and Schapire [FS97], we prove that our algorithm maintains correctly the mixture weights for edge-based prunings with any bounded loss function. We also give a similar algorithm for the logarithmic loss function with a corresponding weight allocation algorithm. Finally, we describe experiments comparing node-based and edge-based mixture models for estimating the probability of the next word in English text, which show the ad vantages of edge-based models.",
"neighbors": [
453,
569,
1025,
1290
],
"mask": "Train"
},
{
"node_id": 1713,
"label": 3,
"text": "Title: A simulation approach to convergence rates for Markov chain Monte Carlo algorithms \nAbstract: Markov chain Monte Carlo (MCMC) methods, including the Gibbs sampler and the Metropolis-Hastings algorithm, are very commonly used in Bayesian statistics for sampling from complicated, high-dimensional posterior distributions. A continuing source of uncertainty is how long such a sampler must be run in order to converge approximately to its target stationary distribution. Rosenthal (1995b) presents a method to compute rigorous theoretical upper bounds on the number of iterations required to achieve a specified degree of convergence in total variation distance by verifying drift and minorization conditions. We propose the use of auxiliary simulations to estimate the numerical values needed in Rosenthal's theorem. Our simulation method makes it possible to compute quantitative convergence bounds for models for which the requisite analytical computations would be prohibitively difficult or impossible. On the other hand, although our method appears to perform well in our example problems, it can not provide the guarantees offered by analytical proof. Acknowledgements. We thank Brad Carlin for assistance and encouragement. ",
"neighbors": [
41,
115,
138,
416,
468,
889,
892,
1716,
1870,
1978,
1982,
1991,
1992,
2002,
2153,
2510
],
"mask": "Train"
},
{
"node_id": 1714,
"label": 3,
"text": "Title: Uncertain Inferences and Uncertain Conclusions \nAbstract: Uncertainty may be taken to characterize inferences, their conclusions, their premises or all three. Under some treatments of uncertainty, the inference itself is never characterized by uncertainty. We explore both the significance of uncertainty in the premises and in the conclusion of an argument that involves uncertainty. We argue that for uncertainty to characterize the conclusion of an inference is natural, but that there is an interplay between uncertainty in the premises and uncertainty in the procedure of argument itself. We show that it is possible in principle to incorporate all uncertainty in the premises, rendering uncertainty arguments deductively valid. But we then argue (1) that this does not reflect human argument, (2) that it is computationally costly, and (3) that the gain in simplicity obtained by allowing uncertainty in inference can sometimes outweigh the loss of flexibility it entails. ",
"neighbors": [
1458
],
"mask": "Validation"
},
{
"node_id": 1715,
"label": 1,
"text": "Title: A Sampling-Based Heuristic for Tree Search Applied to Grammar Induction \nAbstract: In the field of Operation Research and Artificial Intelligence, several stochastic search algorithms have been designed based on the theory of global random search (Zhigljavsky 1991). Basically, those techniques iteratively sample the search space with respect to a probability distribution which is updated according to the result of previous samples and some predefined strategy. Genetic Algorithms (GAs) (Goldberg 1989) or Greedy Randomized Adaptive Search Procedures (GRASP) (Feo & Resende 1995) are two particular instances of this paradigm. In this paper, we present SAGE, a search algorithm based on the same fundamental mechanisms as those techniques. However, it addresses a class of problems for which it is difficult to design transformation operators to perform local search because of intrinsic constraints in the definition of the problem itself. For those problems, a procedural approach is the natural way to construct solutions, resulting in a state space represented as a tree or a DAG. The aim of this paper is to describe the underlying heuristics used by SAGE to address problems belonging to that class. The performance of SAGE is analyzed on the problem of grammar induction and its successful application to problems from the recent Abbadingo DFA learning competition is presented. ",
"neighbors": [
163,
793,
1386,
1734
],
"mask": "Train"
},
{
"node_id": 1716,
"label": 3,
"text": "Title: Analysis of the Gibbs sampler for a model related to James-Stein estimators \nAbstract: Summary. We analyze a hierarchical Bayes model which is related to the usual empirical Bayes formulation of James-Stein estimators. We consider running a Gibbs sampler on this model. Using previous results about convergence rates of Markov chains, we provide rigorous, numerical, reasonable bounds on the running time of the Gibbs sampler, for a suitable range of prior distributions. We apply these results to baseball data from Efron and Morris (1975). For a different range of prior distributions, we prove that the Gibbs sampler will fail to converge, and use this information to prove that in this case the associated posterior distribution is non-normalizable. Acknowledgements. I am very grateful to Jun Liu for suggesting this project, and to Neal Madras for suggesting the use of the Submartingale Convergence Theorem herein. I thank Kate Cowles and Richard Tweedie for helpful conversations, and thank the referees for useful comments. ",
"neighbors": [
41,
138,
892,
1713,
1982,
1991,
2153
],
"mask": "Train"
},
{
"node_id": 1717,
"label": 1,
"text": "Title: 3 Representation Issues in Neighborhood Search and Evolutionary Algorithms \nAbstract: Evolutionary Algorithms are often presented as general purpose search methods. Yet, we also know that no search method is better than another over all possible problems and that in fact there is often a good deal of problem specific information involved in the choice of problem representation and search operators. In this paper we explore some very general properties of representations as they relate to neighborhood search methods. In particular, we looked at the expected number of local optima under a neighborhood search operator when averaged overall possible representations. The number of local optima under a neighborhood search operator for standard Binary and standard binary reflected Gray codes is developed and explored as one measure of problem complexity. We also relate number of local optima to another metric, , designed to provide one measure of complexity with respect to a simple genetic algorithm. Choosing a good representation is a vital component of solving any search problem. However, choosing a good representation for a problem is as difficult as choosing a good search algorithm for a problem. Wolpert and Macready's (1995) No Free Lunch (NFL) theorem proves that no search algorithm is better than any other over all possible discrete functions. Radcliffe and Surry (1995) extend these notions to also cover the idea that all representations are equivalent when their behavior is considered on average over all possible functions. To understand these results, we first outline some of the simple assumptions behind this theorem. First, assume the optimization problem is discrete; this describes all combinatorial optimization problems-and really all optimization problems being solved on computers since computers have finite precision. Second, we ignore the fact that we can resample points in the space. The \"No Free Lunch\" result can be stated as follows: ",
"neighbors": [
163,
941,
1380,
1441
],
"mask": "Validation"
},
{
"node_id": 1718,
"label": 2,
"text": "Title: PREDICTING SUNSPOTS AND EXCHANGE RATES WITH CONNECTIONIST NETWORKS \nAbstract: We investigate the effectiveness of connectionist networks for predicting the future continuation of temporal sequences. The problem of overfitting, particularly serious for short records of noisy data, is addressed by the method of weight-elimination: a term penalizing network complexity is added to the usual cost function in back-propagation. The ultimate goal is prediction accuracy. We analyze two time series. On the benchmark sunspot series, the networks outperform traditional statistical approaches. We show that the network performance does not deteriorate when there are more input units than needed. Weight-elimination also manages to extract some part of the dynamics of the notoriously noisy currency exchange rates and makes the network solution interpretable. ",
"neighbors": [
28,
157,
201,
810,
1079,
1373,
1867,
2138,
2413,
2414,
2582
],
"mask": "Test"
},
{
"node_id": 1719,
"label": 1,
"text": "Title: An Analysis of the MAX Problem in Genetic Programming hold only in some cases, in\nAbstract: We present a detailed analysis of the evolution of genetic programming (GP) populations using the problem of finding a program which returns the maximum possible value for a given terminal and function set and a depth limit on the program tree (known as the MAX problem). We confirm the basic message of [ Gathercole and Ross, 1996 ] that crossover together with program size restrictions can be responsible for premature convergence to a suboptimal solution. We show that this can happen even when the population retains a high level of variety and show that in many cases evolution from the sub-optimal solution to the solution is possible if sufficient time is allowed. In both cases theoretical models are presented and compared with actual runs. ",
"neighbors": [
1216,
1257,
1911,
2175,
2261,
2363
],
"mask": "Train"
},
{
"node_id": 1720,
"label": 5,
"text": "Title: Least Generalizations and Greatest Specializations of Sets of Clauses \nAbstract: The main operations in Inductive Logic Programming (ILP) are generalization and specialization, which only make sense in a generality order. In ILP, the three most important generality orders are subsumption, implication and implication relative to background knowledge. The two languages used most often are languages of clauses and languages of only Horn clauses. This gives a total of six different ordered languages. In this paper, we give a systematic treatment of the existence or non-existence of least generalizations and greatest specializations of finite sets of clauses in each of these six ordered sets. We survey results already obtained by others and also contribute some answers of our own. Our main new results are, firstly, the existence of a computable least generalization under implication of every finite set of clauses containing at least one non-tautologous function-free clause (among other, not necessarily function-free clauses). Secondly, we show that such a least generalization need not exist under relative implication, not even if both the set that is to be generalized and the background knowledge are function-free. Thirdly, we give a complete discussion of existence and non-existence of greatest specializations in each of the six ordered languages.",
"neighbors": [
849
],
"mask": "Test"
},
{
"node_id": 1721,
"label": 5,
"text": "Title: Machine learning in blood group determination of Danish Jersey cattle (causal probabilistic network). Dobljene mreze\nAbstract: In the following paper we approach the problem with different machine learning algorithms and show that they can be compared with causal probabilistic networks in terms of performance and comprehensibility. ",
"neighbors": [
1569
],
"mask": "Validation"
},
{
"node_id": 1722,
"label": 3,
"text": "Title: BAYESIAN TIME SERIES: Models and Computations for the Analysis of Time Series in the Physical Sciences \nAbstract: This articles discusses developments in Bayesian time series mod-elling and analysis relevant in studies of time series in the physical and engineering sciences. With illustrations and references, we discuss: Bayesian inference and computation in various state-space models, with examples in analysing quasi-periodic series; isolation and modelling of various components of error in time series; decompositions of time series into significant latent subseries; nonlinear time series models based on mixtures of auto-regressions; problems with errors and uncertainties in the timing of observations; and the development of non-linear models based on stochastic deformations of time scales. ",
"neighbors": [
99,
1619,
1723
],
"mask": "Test"
},
{
"node_id": 1723,
"label": 3,
"text": "Title: Modelling and robustness issues in Bayesian time series analysis \nAbstract: Some areas of recent development and current interest in time series are noted, with some discussion of Bayesian modelling efforts motivated by substantial practical problems. The areas include non-linear auto-regressive time series modelling, measurement error structures in state-space modelling of time series, and issues of timing uncertainties and time deformations. Some discussion of the needs and opportunities for work on non/semi-parametric models and robustness issues is given in each context. ",
"neighbors": [
1619,
1722
],
"mask": "Test"
},
{
"node_id": 1724,
"label": 2,
"text": "Title: Annealed Competition of Experts for a Segmentation and Classification of Switching Dynamics \nAbstract: We present a method for the unsupervised segmentation of data streams originating from different unknown sources which alternate in time. We use an architecture consisting of competing neural networks. Memory is included in order to resolve ambiguities of input-output relations. In order to obtain maximal specialization, the competition is adiabatically increased during training. Our method achieves almost perfect identification and segmentation in the case of switching chaotic dynamics where input manifolds overlap and input-output relations are ambiguous. Only a small dataset is needed for the training proceedure. Applications to time series from complex systems demonstrate the potential relevance of our approach for time series analysis and short-term prediction. ",
"neighbors": [
1079,
1492,
1508,
1538
],
"mask": "Train"
},
{
"node_id": 1725,
"label": 2,
"text": "Title: DIFFERENTIALLY GENERATED NEURAL NETWORK CLASSIFIERS ARE EFFICIENT \nAbstract: Differential learning for statistical pattern classification is described in [5]; it is based on the classification figure-of-merit (CFM) objective function described in [9, 5]. We prove that differential learning is asymptotically efficient, guaranteeing the best generalization allowed by the choice of hypothesis class (see below) as the training sample size grows large, while requiring the least classifier complexity necessary for Bayesian (i.e., minimum probability-of-error) discrimination. Moreover, differential learning almost always guarantees the best generalization allowed by the choice of hypothesis class for small training sample sizes. ",
"neighbors": [
921,
1265
],
"mask": "Train"
},
{
"node_id": 1726,
"label": 5,
"text": "Title: Prognosing the Survival Time of the Patients with the Anaplastic Thyroid Carcinoma with Machine Learning \nAbstract: Anaplastic thyroid carcinoma is a rare but very aggressive tumor. Many factors that might influence the survival of patients have been suggested. The aim of our study was to determine which of the factors, known at the time of admission to the hospital, might predict survival of patients with anaplastic thyroid carcinoma. Our aim was also to assess the relative importance of the factors and to identify potentially useful decision and regression trees generated by machine learning algorithms. Our study included 126 patients (90 females and 36 males; mean age was 66.7 years) with anaplastic thyroid carcinoma treated at the Institute of Oncology Ljubljana from 1972 to 1992. Patients were classified into categories according to 11 attributes: sex, age, history, physical findings, extent of disease on admission, and tumor morphology. In this paper we compare the machine learning approach with the previous statistical evaluations on the problem (uni-variate and multivariate analysis) and show that it can provide more thorough analysis and improve understanding of the data. ",
"neighbors": [
314,
1182,
1569,
1679,
1684
],
"mask": "Train"
},
{
"node_id": 1727,
"label": 6,
"text": "Title: Machine Learning, 22(1/2/3):95-121, 1996. On the Worst-case Analysis of Temporal-difference Learning Algorithms \nAbstract: We study the behavior of a family of learning algorithms based on Sutton's method of temporal differences. In our on-line learning framework, learning takes place in a sequence of trials, and the goal of the learning algorithm is to estimate a discounted sum of all the reinforcements that will be received in the future. In this setting, we are able to prove general upper bounds on the performance of a slightly modified version of Sutton's so-called TD() algorithm. These bounds are stated in terms of the performance of the best linear predictor on the given training sequence, and are proved without making any statistical assumptions of any kind about the process producing the learner's observed training sequence. We also prove lower bounds on the performance of any algorithm for this learning problem, and give a similar analysis of the closely related problem of learning to predict in a model in which the learner must produce predictions for a whole batch of observations before receiving reinforcement. ",
"neighbors": [
565,
738,
1376
],
"mask": "Validation"
},
{
"node_id": 1728,
"label": 1,
"text": "Title: Dynamic Parameter Encoding for Genetic Algorithms \nAbstract: The common use of static binary place-value codes for real-valued parameters of the phenotype in Holland's genetic algorithm (GA) forces either the sacrifice of representational precision for efficiency of search or vice versa. Dynamic Parameter Encoding (DPE) is a mechanism that avoids this dilemma by using convergence statistics derived from the GA population to adaptively control the mapping from fixed-length binary genes to real values. DPE is shown to be empirically effective and amenable to analysis; we explore the problem of premature convergence in GAs through two convergence models.",
"neighbors": [
129,
163,
168,
1110,
1249,
1474,
1536,
1775
],
"mask": "Train"
},
{
"node_id": 1729,
"label": 1,
"text": "Title: Adapting Crossover in a Genetic Algorithm \nAbstract: Traditionally, genetic algorithms have relied upon 1 and 2-point crossover operators. Many recent empirical studies, however, have shown the benefits of higher numbers of crossover points. Some of the most intriguing recent work has focused on uniform crossover, which involves on the average L/2 crossover points for strings of length L. Despite theoretical analysis, however, it appears difficult to predict when a particular crossover form will be optimal for a given problem. This paper describes an adaptive genetic algorithm that decides, as it runs, which form is optimal.",
"neighbors": [
163,
1016,
1110
],
"mask": "Train"
},
{
"node_id": 1730,
"label": 1,
"text": "Title: Evolving Edge Detectors with Genetic Programming edge detectors for 1-D signals and image profiles. The\nAbstract: images. We apply genetic programming techniques to the production of high",
"neighbors": [
1034,
1533
],
"mask": "Train"
},
{
"node_id": 1731,
"label": 3,
"text": "Title: A Gibbs Sampling Approach to Cointegration \nAbstract: This paper reviews the application of Gibbs sampling to a cointegrated VAR system. Aggregate imports and import prices for Belgium are modelled using two cointegrating relations. Gibbs sampling techniques are used to estimate from a Bayesian perspective the cointegrating relations and their weights in the VAR system. Extensive use of spectral analysis is made to get insight into convergence issues. ",
"neighbors": [
888
],
"mask": "Test"
},
{
"node_id": 1732,
"label": 2,
"text": "Title: Improving the Performance of Radial Basis Function Networks by Learning Center Locations \nAbstract: This paper reviews the application of Gibbs sampling to a cointegrated VAR system. Aggregate imports and import prices for Belgium are modelled using two cointegrating relations. Gibbs sampling techniques are used to estimate from a Bayesian perspective the cointegrating relations and their weights in the VAR system. Extensive use of spectral analysis is made to get insight into convergence issues. ",
"neighbors": [
611,
853,
1493,
1644,
2225,
2423
],
"mask": "Train"
},
{
"node_id": 1733,
"label": 3,
"text": "Title: Decision Analysis by Augmented Probability Simulation \nAbstract: We provide a generic Monte Carlo method to find the alternative of maximum expected utility in a decision analysis. We define an artificial distribution on the product space of alternatives and states, and show that the optimal alternative is the mode of the implied marginal distribution on the alternatives. After drawing a sample from the artificial distribution, we may use exploratory data analysis tools to approximately identify the optimal alternative. We illustrate our method for some important types of influence diagrams. (Decision Analysis, Influence Diagrams, Markov chain Monte Carlo, Simulation) ",
"neighbors": [
41,
904
],
"mask": "Train"
},
{
"node_id": 1734,
"label": 1,
"text": "Title: A Stochastic Search Approach to Grammar Induction \nAbstract: This paper describes a new sampling-based heuristic for tree search named SAGE and presents an analysis of its performance on the problem of grammar induction. This last work has been inspired by the Abbadingo DFA learning competition [14] which took place between Mars and November 1997. SAGE ended up as one of the two winners in that competition. The second winning algorithm, first proposed by Rod-ney Price, implements a new evidence-driven heuristic for state merging. Our own version of this heuristic is also described in this paper and compared to SAGE.",
"neighbors": [
163,
793,
1249,
1592,
1715
],
"mask": "Test"
},
{
"node_id": 1735,
"label": 0,
"text": "Title: Supporting Conversational Case-Based Reasoning in an Integrated Reasoning Framework Conversational Case-Based Reasoning \nAbstract: Conversational case-based reasoning (CCBR) has been successfully used to assist in case retrieval tasks. However, behavioral limitations of CCBR motivate the search for integrations with other reasoning approaches. This paper briefly describes our group's ongoing efforts towards enhancing the inferencing behaviors of a conversational case-based reasoning development tool named NaCoDAE. In particular, we focus on integrating NaCoDAE with machine learning, model-based reasoning, and generative planning modules. This paper defines CCBR, briefly summarizes the integrations, and explains how they enhance the overall system. Our research focuses on enhancing the performance of conversational case-based reasoning (CCBR) systems (Aha & Breslow, 1997). CCBR is a form of case-based reasoning where users initiate problem solving conversations by entering an initial problem description in natural language text. This text is assumed to be a partial rather than a complete problem description. The CCBR system then assists in eliciting refinements of this description and in suggesting solutions. Its primary purpose is to provide a focus of attention for the user so as to quickly provide a solution(s) for their problem. Figure 1 summarizes the CCBR problem solving cycle. Cases in a CCBR library have three components: ",
"neighbors": [
983,
1002,
1626
],
"mask": "Train"
},
{
"node_id": 1736,
"label": 1,
"text": "Title: Evolving Cooperation Strategies \nAbstract: The identification, design, and implementation of strategies for cooperation is a central research issue in the field of Distributed Artificial Intelligence (DAI). We propose a novel approach to the construction of cooperation strategies for a group of problem solvers based on the Genetic Programming (GP) paradigm. GPs are a class of adaptive algorithms used to evolve solution structures that optimize a given evaluation criterion. Our approach is based on designing a representation for cooperation strategies that can be manipulated by GPs. We present results from experiments in the predator-prey domain, which has been extensively studied as a easy-to-describe but difficult-to-solve cooperation problem domain. The key aspect of our approach is the minimal reliance on domain knowledge and human intervention in the construction of good cooperation strategies. Promising comparison results with prior systems lend credence to the viability of this ap proach.",
"neighbors": [
1178,
1690,
1737
],
"mask": "Train"
},
{
"node_id": 1737,
"label": 1,
"text": "Title: A Simulation of Adaptive Agents in a Hostile Environment \nAbstract: In this paper we use the genetic programming technique to evolve programs to control an autonomous agent capable of learning how to survive in a hostile environment. In order to facilitate this goal, agents are run through random environment configurations. Randomly generated programs, which control the interaction of the agent with its environment, are recombined to form better programs. Each generation of the population of agents is placed into the Simulator with the ultimate goal of producing an agent capable of surviving any environment. The environment that an agent is presented consists of other agents, mines, and energy. The goal of this research is to construct a program which when executed will allow an agent (or agents) to correctly sense, and mark, the presence of items (energy and mines) in any environment. The Simulator determines the raw fitness of each agent by interpreting the associated program. General programs are evolved to solve this problem. Different environmental setups are presented to show the generality of the solution. These environments include one agent in a fixed environment, one agent in a fluctuating environment, and multiple agents in a fluctuating environment cooperating together. The genetic programming technique was extremely successful. The average fitness per generation in all three environments tested showed steady improvement. Programs were successfully generated that enabled an agent to handle any possible environment. ",
"neighbors": [
380,
415,
1178,
1736
],
"mask": "Train"
},
{
"node_id": 1738,
"label": 1,
"text": "Title: Evolving nonTrivial Behaviors on Real Robots: an Autonomous Robot that Picks up Objects \nAbstract: Recently, a new approach that involves a form of simulated evolution has been proposed for the building of autonomous robots. However, it is still not clear if this approach may be adequate to face real life problems. In this paper we show how control systems that perform a nontrivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates. In the experiment described in the paper, a mobile robot is trained to locate, recognize, and grasp a target object. The controller of the robot has been evolved in simulation and then downloaded and tested on the real robot. ",
"neighbors": [
219,
538,
1264,
1689
],
"mask": "Train"
},
{
"node_id": 1739,
"label": 6,
"text": "Title: Model Selection based on Minimum Description Length \nAbstract: Recently, a new approach that involves a form of simulated evolution has been proposed for the building of autonomous robots. However, it is still not clear if this approach may be adequate to face real life problems. In this paper we show how control systems that perform a nontrivial sequence of behaviors can be obtained with this methodology by carefully designing the conditions in which the evolutionary process operates. In the experiment described in the paper, a mobile robot is trained to locate, recognize, and grasp a target object. The controller of the robot has been evolved in simulation and then downloaded and tested on the real robot. ",
"neighbors": [
641,
848
],
"mask": "Train"
},
{
"node_id": 1740,
"label": 1,
"text": "Title: As mentioned in the introduction, an encod-ing/crossover pair makes a spectrum of geographical linkages. A\nAbstract: It is open as to which chromosomal dimension performs best. Although higher-dimensional encodings (whether real or imaginary) can preserve more geographical gene linkages, we suspect that too high a dimension would not perform desirably. We are studying the question of which dimension of encoding is best for a given instance. It is likely that the optimal dimension is somehow dependent on the chromosome size and the input graph topology; interactions with the flexibility of crossover are yet unknown. The interaction of these considerations with the number of cuts used in the crossover is also an open issue. * In relocating genes onto a multi-dimensional chromosome, the simplest way is via a sequential assignment such as row-major order. Section 4 showed that performance improves when a DFS-row-major reembedding is used for two- and three-dimensional encodings. We suspect that this phenomenon will be consistent for higher-dimensional cases, and hope to perform more detailed investigations in the future. Although DFS reordering proved to be helpful for both linear encodings [19] and multi-dimensional encodings, we do not believe DFS-row-major reembedding is a good approach for the multi-dimensional cases since the row-major embedding is so simplistic. We are considering alternative 2-dimensional and 3-dimensional reembeddings which will hopefully provide further improvement. [4] T. N. Bui and B. R. Moon. Hyperplane synthesis for genetic algorithms. In Fifth International Conference on Genetic Algorithms, pages 102-109, July 1993. [5] T. N. Bui and B. R. Moon. Analyzing hyperplane synthesis in genetic algorithms using clustered schemata. In International Conference on Evolutionary Computation, Oct. 1994. Lecture Notes in Computer Science, 866:108-118, Springer-Verlag. ",
"neighbors": [
1136,
1305
],
"mask": "Test"
},
{
"node_id": 1741,
"label": 4,
"text": "Title: Reinforcement Learning Algorithm for Partially Observable Markov Decision Problems \nAbstract: Increasing attention has been paid to reinforcement learning algorithms in recent years, partly due to successes in the theoretical analysis of their behavior in Markov environments. If the Markov assumption is removed, however, neither generally the algorithms nor the analyses continue to be usable. We propose and analyze a new learning algorithm to solve a certain class of non-Markov decision problems. Our algorithm applies to problems in which the environment is Markov, but the learner has restricted access to state information. The algorithm involves a Monte-Carlo policy evaluation combined with a policy improvement method that is similar to that of Markov decision problems and is guaranteed to converge to a local maximum. The algorithm operates in the space of stochastic policies, a space which can yield a policy that performs considerably better than any deterministic policy. Although the space of stochastic policies is continuous|even for a discrete action space|our algorithm is computationally tractable. ",
"neighbors": [
492,
565,
601,
738,
1841
],
"mask": "Validation"
},
{
"node_id": 1742,
"label": 3,
"text": "Title: On MCMC Sampling in Hierarchical Longitudinal Models SUMMARY \nAbstract: Markov chain Monte Carlo (MCMC) algorithms have revolutionized Bayesian practice. In their simplest form (i.e., when parameters are updated one at a time) they are, however, often slow to converge when applied to high-dimensional statistical models. A remedy for this problem is to block the parameters into groups, which are then updated simultaneously using either a Gibbs or Metropolis-Hastings step. In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples. ",
"neighbors": [
2456
],
"mask": "Validation"
},
{
"node_id": 1743,
"label": 1,
"text": "Title: Feature Selection Methods: Genetic Algorithms vs. Greedy-like Search \nAbstract: This paper presents a comparison between two feature selection methods, the Importance Score (IS) which is based on a greedy-like search and a genetic algorithm-based (GA) method, in order to better understand their strengths and limitations and their area of application. The results of our experiments show a very strong relation between the nature of the data and the behavior of both systems. The Importance Score method is more efficient when dealing with little noise and small number of interacting features, while the genetic algorithms can provide a more robust solution at the expense of increased computational effort. Keywords. feature selection, machine learning, genetic algorithms, search. ",
"neighbors": [
2192
],
"mask": "Validation"
},
{
"node_id": 1744,
"label": 2,
"text": "Title: Simple Synchrony Networks: Learning Generalisations across Syntactic Constituents \nAbstract: This paper describes a training algorithm for Simple Synchrony Networks (SSNs), and reports on experiments in language learning using a recursive grammar. The SSN is a new connectionist architecture combining a technique for learning about patterns across time, Simple Recurrent Networks (SRNs), with Temporal Synchrony Variable Binding (TSVB). The use of TSVB means the SSN can learn about entities in the training set, and generalise this information to entities in the test set. In the experiments, the network is trained on sentences with up to one embedded clause, and with some words restricted to certain classes of constituent. During testing, the network generalises information learned to sentences with up to three embedded clauses, and with words appearing in any constituent. These results demonstrate that SSNs learn generalisations across syntactic constituents. ",
"neighbors": [
2041
],
"mask": "Test"
},
{
"node_id": 1745,
"label": 1,
"text": "Title: Learning Recursive Sequences via Evolution of Machine-Language Programs \nAbstract: We use directed search techniques in the space of computer programs to learn recursive sequences of positive integers. Specifically, the integer sequences of squares, x 2 ; cubes, x 3 ; factorial, x!; and Fibonacci numbers are studied. Given a small finite prefix of a sequence, we show that three directed searches|machine-language genetic programming with crossover, exhaustive iterative hill climbing, and a hybrid (crossover and hill climbing)|can automatically discover programs that exactly reproduce the finite target prefix and, moreover, that correctly produce the remaining sequence up to the underlying machine's precision. Our machine-language representation is generic|it contains instructions for arithmetic, register manipulation and comparison, and control flow. We also introduce an output instruction that allows variable-length sequences as result values. Importantly, this representation does not contain recursive operators; recursion, when needed, is automatically synthesized from primitive instructions. For a fixed set of search parameters (e.g., instruction set, program size, fitness criteria), we compare the efficiencies of the three directed search techniques on the four sequence problems. For this parameter set, an evolutionary-based search always outperforms exhaustive hill climbing as well as undirected random search. Since only the prefix of the target sequence is variable in our experiments, we posit that this approach to sequence induction is potentially quite general. ",
"neighbors": [
2175,
2641
],
"mask": "Validation"
},
{
"node_id": 1746,
"label": 2,
"text": "Title: DISCRETE-TIME TRANSITIVITY AND ACCESSIBILITY: ANALYTIC SYSTEMS 1 \nAbstract: This paper studies the problem, and establishes the desired implication for analytic systems in several cases: (i) compact state space, (ii) under a Poisson stability condition, and (iii) in a generic sense. In addition, the paper studies accessibility properties of the \"control sets\" recently introduced in the context of dynamical systems studies. Finally, various examples and counterexamples are provided relating the various Lie algebras introduced in past work. ",
"neighbors": [
1948
],
"mask": "Train"
},
{
"node_id": 1747,
"label": 3,
"text": "Title: FROM BAYESIAN NETWORKS TO CAUSAL NETWORKS \nAbstract: This paper demonstrates the use of graphs as a mathematical tool for expressing independencies, and as a formal language for communicating and processing causal information for decision analysis. We show how complex information about external interventions can be organized and represented graphically and, conversely, how the graphical representation can be used to facilitate quantitative predictions of the effects of interventions. We first review the theory of Bayesian networks and show that directed acyclic graphs (DAGs) offer an economical scheme for representing conditional independence assumptions and for deducing and displaying all the logical consequences of such assumptions. We then introduce the manipulative account of causation and show that any DAG defines a simple transformation which tells us how the probability distribution will change as a result of external interventions in the system. Using this transformation it is possible to quantify, from non-experimental data, the effects of external interventions and to specify conditions under which randomized experiments are not necessary. As an example, we show how the effect of smoking on lung cancer can be quantified from non-experimental data, using a minimal set of qualitative assumptions. Finally, the paper offers a graphical interpretation for Rubin's model of causal effects, and demonstrates its equivalence to the manipulative account of causation. We exemplify the tradeoffs between the two approaches by deriving nonparametric bounds on treatment effects under conditions of imperfect compliance. fl Portions of this paper were presented at the 49th Session of the International Statistical Institute, Florence, Italy, August 25 - September 3, 1993. ",
"neighbors": [
419,
1324,
1326,
1527,
2160,
2434,
2559
],
"mask": "Test"
},
{
"node_id": 1748,
"label": 6,
"text": "Title: A Quantum Computational Learning Algorithm \nAbstract: An interesting classical result due to Jackson allows polynomial-time learning of the function class DNF using membership queries. Since in most practical learning situations access to a membership oracle is unrealistic, this paper explores the possibility that quantum computation might allow a learning algorithm for DNF that relies only on example queries. A natural extension of Fourier-based learning into the quantum domain is presented. The algorithm requires only an example oracle, and it runs in O( 2 n ) time, a result that appears to be classically impossible. The algorithm is unique among quantum algorithms in that it does not assume a priori knowledge of a function and does not operate on a superposition that includes all possible basis states. ",
"neighbors": [
456,
2182
],
"mask": "Train"
},
{
"node_id": 1749,
"label": 3,
"text": "Title: Algebraic Techniques for Efficient Inference in Bayesian Networks \nAbstract: A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. These algorithms use graph-theoretic techniques to analyze and exploit network topology. In this paper, we examine the problem of efficient probabilistic inference in a belief network as a combinatorial optimization problem, that of finding an optimal factoring given an algebraic expression over a set of probability distributions. We define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and present simple, easily implemented algorithms with excellent performance. We also show how use of an algebraic perspective permits significant extension to the belief net representation. ",
"neighbors": [
2164
],
"mask": "Validation"
},
{
"node_id": 1750,
"label": 5,
"text": "Title: Modeling Superscalar Processors via Statistical Simulation \nAbstract: A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. These algorithms use graph-theoretic techniques to analyze and exploit network topology. In this paper, we examine the problem of efficient probabilistic inference in a belief network as a combinatorial optimization problem, that of finding an optimal factoring given an algebraic expression over a set of probability distributions. We define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and present simple, easily implemented algorithms with excellent performance. We also show how use of an algebraic perspective permits significant extension to the belief net representation. ",
"neighbors": [
2106
],
"mask": "Test"
},
{
"node_id": 1751,
"label": 2,
"text": "Title: Global self organization of all known protein sequences reveals inherent biological signatures self organization method\nAbstract: A global classification of all currently known protein sequences is performed. Every protein sequence is partitioned into segments of 50 amino acids and a dynamic-programming distance is calculated between each pair of segments. This space of segments is first embedded into Euclidean space with small metric distortion. A novel self-organized cross-validated clustering algorithm is then applied to the embedded space with Euclidean distances. The resulting hierarchical tree of clusters offers a new representation of protein sequences and families, which compares favorably with the most updated classifications based on functional and structural protein data. Motifs and domains such as the Zinc Finger, EF hand, Homeobox, EGF-like and others are automatically correctly identified. A novel representation of protein families is introduced, from which functional biological kinship of protein families can be deduced, as demonstrated for the transporters family. ",
"neighbors": [
2691
],
"mask": "Train"
},
{
"node_id": 1752,
"label": 0,
"text": "Title: EXPLANATORY INTERFACE IN INTERACTIVE DESIGN ENVIRONMENTS \nAbstract: Explanation is an important issue in building computer-based interactive design environments in which a human designer and a knowledge system may cooperatively solve a design problem. We consider the two related problems of explaining the system's reasoning and the design generated by the system. In particular, we analyze the content of explanations of design reasoning and design solutions in the domain of physical devices. We describe two complementary languages: task-method-knowledge models for explaining design reasoning, and structure-behavior-function models for explaining device designs. INTERACTIVE KRITIK is a computer program that uses these representations to visually illustrate the system's reasoning and the result of a design episode. The explanation of design reasoning in INTERACTIVE KRITIK is in the context of the evolving design solution, and, similarly, the explanation of the design solution is in the context of the design reasoning. ",
"neighbors": [
2706
],
"mask": "Validation"
},
{
"node_id": 1753,
"label": 2,
"text": "Title: Object Oriented Design of a BP Neural Network Simulator and Implementation on the Connection Machine (CM-5) \nAbstract: In this paper we describe the implementation of the backpropagation algorithm by means of an object oriented library (ARCH). The use of this library relieve the user from the details of a specific parallel programming paradigm and at the same time allows a greater portability of the generated code. To provide a comparision with existing solutions, we survey the most relevant implementations of the algorithm proposed so far in the literature, both on dedicated and general purpose computers. Extensive experimental results show that the use of the library does not hurt the performance of our simulator, on the contrary our implementation on a Connection Machine (CM-5) is comparable with the fastest in its category. ",
"neighbors": [
2268
],
"mask": "Train"
},
{
"node_id": 1754,
"label": 4,
"text": "Title: The Neural Network House: An Overview \nAbstract: Typical home comfort systems utilize only rudimentary forms of energy management and conservation. The most sophisticated technology in common use today is an automatic setback thermostat. Tremendous potential remains for improving the efficiency of electric and gas usage. However, home residents who are ignorant of the physics of energy utilization cannot design environmental control strategies, but neither can energy management experts who are ignorant of the behavior patterns of the inhabitants. Adaptive control seems the only alternative. We have begun building an adaptive control system that can infer appropriate rules of operation for home comfort systems based on the lifestyle of the inhabitants and energy conservation goals. Recent research has demonstrated the potential of neural networks for intelligent control. We are constructing a prototype control system in an actual residence using neural network reinforcement learning and prediction techniques. The residence is equipped with sensors to provide information about environmental conditions (e.g., temperatures, ambient lighting level, sound and motion in each room) and actuators to control the gas furnace, electric space heaters, gas hot water heater, lighting, motorized blinds, ceiling fans, and dampers in the heating ducts. This paper presents an overview of the project as it now stands.",
"neighbors": [
1867,
1869
],
"mask": "Validation"
},
{
"node_id": 1755,
"label": 2,
"text": "Title: Learning Controllers from Examples a motivation for searching alternative, empirical techniques for generating controllers. \nAbstract: Today there is a great interest in discovering methods that allow a faster design and development of real-time control software. Control theory helps when linear controllers have to be developed but it does not support the generation In this paper, it is discussed how Machine Learning has been applied to the Function, and Locally Receptive Field Function Approximators. Three integrated learning algorithms, two of which are original, are described and then tried on two experimental test cases. The first test case is provided by an industrial robot KUKA IR-361 engaged into the \"peg-into-hole\" task, while the second is a classical prediction task on the Mackey-Glass chaotic series. From the experimental comparison, it appears that both Fuzzy Controllers and RBFNs synthesised from examples are excellent approximators, and that they can be even more accurate than MLPs. of non-linear controllers, which in many cases (such as in compliant motion control)",
"neighbors": [
611,
2432
],
"mask": "Validation"
},
{
"node_id": 1756,
"label": 1,
"text": "Title: Soft Computing: the Convergence of Emerging Reasoning Technologies \nAbstract: The term Soft Computing (SC) represents the combination of emerging problem-solving technologies such as Fuzzy Logic (FL), Probabilistic Reasoning (PR), Neural Networks (NNs), and Genetic Algorithms (GAs). Each of these technologies provide us with complementary reasoning and searching methods to solve complex, real-world problems. After a brief description of each of these technologies, we will analyze some of their most useful combinations, such as the use of FL to control GAs and NNs parameters; the application of GAs to evolve NNs (topologies or weights) or to tune FL controllers; and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms.",
"neighbors": [
168,
745,
1663,
2603,
2613
],
"mask": "Train"
},
{
"node_id": 1757,
"label": 3,
"text": "Title: A case study in dynamic belief networks: monitoring walking, fall prediction and detection \nAbstract: The term Soft Computing (SC) represents the combination of emerging problem-solving technologies such as Fuzzy Logic (FL), Probabilistic Reasoning (PR), Neural Networks (NNs), and Genetic Algorithms (GAs). Each of these technologies provide us with complementary reasoning and searching methods to solve complex, real-world problems. After a brief description of each of these technologies, we will analyze some of their most useful combinations, such as the use of FL to control GAs and NNs parameters; the application of GAs to evolve NNs (topologies or weights) or to tune FL controllers; and the implementation of FL controllers as NNs tuned by backpropagation-type algorithms.",
"neighbors": [
1268,
1842,
2221,
2341
],
"mask": "Train"
},
{
"node_id": 1758,
"label": 4,
"text": "Title: Simultaneous Learning of Control Laws and Local Environment Representations for Intelligent Navigation Robots \nAbstract: Two issues of an intelligent navigation robot have been addressed in this work. First is the robot's ability to learn a representation of the local environment and use this representation to identify which local environment it is in. This is done by first extracting features from the sensors which are more informative than just distances of obstacles in various directions. Using these features a reduced ring representation (RRR) of the local environment is derived. As the robot navigates, it learns the RRR signatures of all the new environment types it encounters. For purpose of identification, a ring matching criteria is proposed where the robot tries to match the RRR from the sensory input to one of the RRRs in its library. The second issue addressed is that of learning hill climbing control laws in the local environments. Unlike conventional neuro-controllers, a reinforcement learning framework, where the robot first learns a model of the environment and then learns the control law in terms of a neural network is proposed here. The reinforcement function is generated from the sensory inputs of the robot before and after a control action is taken. Three key results shown in this work are that (1) The robot is able to build its library of RRR signatures perfectly even with significant sensor noise for eight different local environ-mets, (2) It was able to identify its local environment with an accuracy of more than 96%, once the library is build, and (3) the robot was able to learn adequate hill climbing control laws which take it to the distinctive state of the local environment for five different environment types.",
"neighbors": [
2412
],
"mask": "Train"
},
{
"node_id": 1759,
"label": 3,
"text": "Title: Belief Maintenance in Bayesian Networks \nAbstract: Two issues of an intelligent navigation robot have been addressed in this work. First is the robot's ability to learn a representation of the local environment and use this representation to identify which local environment it is in. This is done by first extracting features from the sensors which are more informative than just distances of obstacles in various directions. Using these features a reduced ring representation (RRR) of the local environment is derived. As the robot navigates, it learns the RRR signatures of all the new environment types it encounters. For purpose of identification, a ring matching criteria is proposed where the robot tries to match the RRR from the sensory input to one of the RRRs in its library. The second issue addressed is that of learning hill climbing control laws in the local environments. Unlike conventional neuro-controllers, a reinforcement learning framework, where the robot first learns a model of the environment and then learns the control law in terms of a neural network is proposed here. The reinforcement function is generated from the sensory inputs of the robot before and after a control action is taken. Three key results shown in this work are that (1) The robot is able to build its library of RRR signatures perfectly even with significant sensor noise for eight different local environ-mets, (2) It was able to identify its local environment with an accuracy of more than 96%, once the library is build, and (3) the robot was able to learn adequate hill climbing control laws which take it to the distinctive state of the local environment for five different environment types.",
"neighbors": [
2288,
2697,
2698
],
"mask": "Train"
},
{
"node_id": 1760,
"label": 2,
"text": "Title: Parallel Environments for Implementing Neural Networks \nAbstract: As artificial neural networks (ANNs) gain popularity in a variety of application domains, it is critical that these models run fast and generate results in real time. Although a number of implementations of neural networks are available on sequential machines, most of these implementations require an inordinate amount of time to train or run ANNs, especially when the ANN models are large. One approach for speeding up the implementation of ANNs is to implement them on parallel machines. This paper surveys the area of parallel environments for the implementations of ANNs, and prescribes desired characteristics to look for in such implementations. ",
"neighbors": [
427,
1879
],
"mask": "Train"
},
{
"node_id": 1761,
"label": 3,
"text": "Title: An extension of Fill's exact sampling algorithm to non-monotone chains* \nAbstract: We provide an extension of Fill's (1998) exact sampler algorithm. Our algorithm is similar to Fill's, however it makes no assumptions regarding stochastic monotonicity, discreteness of the state space, the existence of densities, etc. We illustrate our algorithm on a simple example. ",
"neighbors": [
2208
],
"mask": "Train"
},
{
"node_id": 1762,
"label": 6,
"text": "Title: Improved Uniform Test Error Bounds \nAbstract: We derive distribution-free uniform test error bounds that improve on VC-type bounds for validation. We show how to use knowledge of test inputs to improve the bounds. The bounds are sharp, but they require intense computation. We introduce a method to trade sharpness for speed of computation. Also, we compute the bounds for several test cases. ",
"neighbors": [
571,
2495,
2694
],
"mask": "Train"
},
{
"node_id": 1763,
"label": 2,
"text": "Title: A Brief History of Connectionism \nAbstract: Connectionist research is firmly established within the scientific community, especially within the multi-disciplinary field of cognitive science. This diversity, however, has created an environment which makes it difficult for connectionist researchers to remain aware of recent advances in the field, let alone understand how the field has developed. This paper attempts to address this problem by providing a brief guide to connectionist research. The paper begins by defining the basic tenets of connectionism. Next, the development of connectionist research is traced, commencing with connectionism's philosophical predecessors, moving to early psychological and neuropsychological influences, followed by the mathematical and computing contributions to connectionist research. Current research is then reviewed, focusing specifically on the different types of network architectures and learning rules in use. The paper concludes by suggesting that neural network research|at least in cognitive science|should move towards models that incorporate the relevant functional principles inherent in neurobiological systems. ",
"neighbors": [
407,
611,
639,
745,
2611
],
"mask": "Train"
},
{
"node_id": 1764,
"label": 3,
"text": "Title: Some remarks on Scheiblechner's treatment of ISOP models. \nAbstract: Scheiblechner (1995) proposes a probabilistic axiomatization of measurement called ISOP (isotonic ordinal probabilistic models) that replaces Rasch's (1980) specific objectivity assumptions with two interesting ordinal assumptions. Special cases of Scheiblechner's model include standard unidimensional factor analysis models in which the loadings are held constant, and the Rasch model for binary item responses. Closely related are the doubly-monotone item response models of Mokken (1971; see also Mokken and Lewis, 1982; Si-jtsma, 1988; Molenaar, 1991; Sijtsma and Junker, 1996; and Sijtsma and Hemker, 1996). More generally, strictly unidimensional latent variable models have been considered in some detail by Holland and Rosenbaum (1986), Ellis and van den Wollenberg (1993), and Junker (1990, 1993). The purpose of this note is to provide connections with current research in foundations and nonparametric latent variable and item response modeling that are missing from Scheiblechner's (1995) paper, and to point out important related work by Hemker et al. (1996a,b), Ellis and Junker (1996) and Junker and Ellis (1996). We also discuss counterexamples to three major theorems in the paper. By carrying out these three tasks, we hope to provide researchers interested in the foundations of measurement and item response modeling the opportunity to give the ISOP approach the careful attention it deserves. ",
"neighbors": [
1770,
1938
],
"mask": "Train"
},
{
"node_id": 1765,
"label": 3,
"text": "Title: A Characterization of Monotone Unidimensional Latent Variable Models \nAbstract: Scheiblechner (1995) proposes a probabilistic axiomatization of measurement called ISOP (isotonic ordinal probabilistic models) that replaces Rasch's (1980) specific objectivity assumptions with two interesting ordinal assumptions. Special cases of Scheiblechner's model include standard unidimensional factor analysis models in which the loadings are held constant, and the Rasch model for binary item responses. Closely related are the doubly-monotone item response models of Mokken (1971; see also Mokken and Lewis, 1982; Si-jtsma, 1988; Molenaar, 1991; Sijtsma and Junker, 1996; and Sijtsma and Hemker, 1996). More generally, strictly unidimensional latent variable models have been considered in some detail by Holland and Rosenbaum (1986), Ellis and van den Wollenberg (1993), and Junker (1990, 1993). The purpose of this note is to provide connections with current research in foundations and nonparametric latent variable and item response modeling that are missing from Scheiblechner's (1995) paper, and to point out important related work by Hemker et al. (1996a,b), Ellis and Junker (1996) and Junker and Ellis (1996). We also discuss counterexamples to three major theorems in the paper. By carrying out these three tasks, we hope to provide researchers interested in the foundations of measurement and item response modeling the opportunity to give the ISOP approach the careful attention it deserves. ",
"neighbors": [
1770,
1938
],
"mask": "Train"
},
{
"node_id": 1766,
"label": 2,
"text": "Title: Computational Models of Sensorimotor Integration Computational Maps and Motor Control. \nAbstract: The sensorimotor integration system can be viewed as an observer attempting to estimate its own state and the state of the environment by integrating multiple sources of information. We describe a computational framework capturing this notion, and some specific models of integration and adaptation that result from it. Psychophysical results from two sensorimotor systems, subserving the integration and adaptation of visuo-auditory maps, and estimation of the state of the hand during arm movements, are presented and analyzed within this framework. These results suggest that: (1) Spatial information from visual and auditory systems is integrated so as to reduce the variance in localization. (2) The effects of a remapping in the relation between visual and auditory space can be predicted from a simple learning rule. (3) The temporal propagation of errors in estimating the hand's state is captured by a linear dynamic observer, providing evidence for the existence of an internal model which simulates the dynamic behavior of the arm. ",
"neighbors": [
427,
477,
1810
],
"mask": "Train"
},
{
"node_id": 1767,
"label": 4,
"text": "Title: Incremental Evolution of Complex General Behavior \nAbstract: Several researchers have demonstrated how complex action sequences can be learned through neuro-evolution (i.e. evolving neural networks with genetic algorithms). However, complex general behavior such as evading predators or avoiding obstacles, which is not tied to specific environments, turns out to be very difficult to evolve. Often the system discovers mechanical strategies (such as moving back and forth) that help the agent cope, but are not very effective, do not appear believable and would not generalize to new environments. The problem is that a general strategy is too difficult for the evolution system to discover directly. This paper proposes an approach where such complex general behavior is learned incrementally, by starting with simpler behavior and gradually making the task more challenging and general. The task transitions are implemented through successive stages of delta-coding (i.e. evolving modifications), which allows even converged populations to adapt to the new task. The method is tested in the stochastic, dynamic task of prey capture, and compared with direct evolution. The incremental approach evolves more effective and more general behavior, and should also scale up to harder tasks. ",
"neighbors": [
500,
2257
],
"mask": "Train"
},
{
"node_id": 1768,
"label": 4,
"text": "Title: Evolving Neural Networks to Play Go \nAbstract: Go is a difficult game for computers to master, and the best go programs are still weaker than the average human player. Since the traditional game playing techniques have proven inadequate, new approaches to computer go need to be studied. This paper presents a new approach to learning to play go. The SANE (Symbiotic, Adaptive Neuro-Evolution) method was used to evolve networks capable of playing go on small boards with no pre-programmed go knowledge. On a 9 fi 9 go board, networks that were able to defeat a simple computer opponent were evolved within a few hundred generations. Most significantly, the networks exhibited several aspects of general go playing, which suggests the approach could scale up well.",
"neighbors": [
2257
],
"mask": "Train"
},
{
"node_id": 1769,
"label": 1,
"text": "Title: Testing the Robustness of the Genetic Algorithm on the Floating Building Block Representation. \nAbstract: Recent studies on a floating building block representation for the genetic algorithm (GA) suggest that there are many advantages to using the floating representation. This paper investigates the behavior of the GA on floating representation problems in response to three different types of pressures: (1) a reduction in the amount of genetic material available to the GA during the problem solving process, (2) functions which have negative-valued building blocks, and (3) randomizing non-coding segments. Results indicate that the GA's performance on floating representation problems is very robust. Significant reductions in genetic material (genome length) may be made with relatively small decrease in performance. The GA can effectively solve problems with negative building blocks. Randomizing non-coding segments appears to improve rather than harm GA performance. ",
"neighbors": [
1696,
2330
],
"mask": "Test"
},
{
"node_id": 1770,
"label": 2,
"text": "Title: A Survey of Theory and Methods of Invariant Item Ordering \nAbstract: This work was initiated while Junker was visiting the University of Utrecht with the support of a Carnegie Mellon University Faculty Development Grant, and the generous hospitality of the Social Sciences Faculty, University of Utrecht. Additional support was provided by the Office of Naval Research, Cognitive Sciences Division, Grant N00014-87-K-0277 and the National Institute of Mental Health, Training Grant MH15758. ",
"neighbors": [
1764,
1765,
1938
],
"mask": "Train"
},
{
"node_id": 1771,
"label": 1,
"text": "Title: When Will a Genetic Algorithm Outperform Hill Climbing? \nAbstract: We analyze a simple hill-climbing algorithm (RMHC) that was previously shown to outperform a genetic algorithm (GA) on a simple \"Royal Road\" function. We then analyze an \"idealized\" genetic algorithm (IGA) that is significantly faster than RMHC and that gives a lower bound for GA speed. We identify the features of the IGA that give rise to this speedup, and discuss how these features can be incorporated into a real GA. ",
"neighbors": [
1696,
1775,
1872
],
"mask": "Train"
},
{
"node_id": 1772,
"label": 2,
"text": "Title: New Inexact Parallel Variable Distribution Algorithms Editor: \nAbstract: We consider the recently proposed parallel variable distribution (PVD) algorithm of Ferris and Mangasarian [4] for solving optimization problems in which the variables are distributed among p processors. Each processor has the primary responsibility for updating its block of variables while allowing the remaining \"secondary\" variables to change in a restricted fashion along some easily computable directions. We propose useful generalizations that consist, for the general unconstrained case, of replacing exact global solution of the subproblems by a certain natural sufficient descent condition, and, for the convex case, of inexact subproblem solution in the PVD algorithm. These modifications are the key features of the algorithm that has not been analyzed before. The proposed modified algorithms are more practical and make it easier to achieve good load balancing among the parallel processors. We present a general framework for the analysis of this class of algorithms and derive some new and improved linear convergence results for problems with weak sharp minima of order 2 and strongly convex problems. We also show that nonmonotone synchronization schemes are admissible, which further improves flexibility of PVD approach. ",
"neighbors": [
2307,
2351
],
"mask": "Train"
},
{
"node_id": 1773,
"label": 2,
"text": "Title: Canonical Momenta Indicators of Financial Markets and Neocortical EEG \nAbstract: A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. This approach to complex systems emphasizes the utility of blending an intuitive and powerful mathematical-physics formalism to generate indicators which are used by AI-type rule-based models of management. ",
"neighbors": [
2545
],
"mask": "Train"
},
{
"node_id": 1774,
"label": 2,
"text": "Title: Networks of Spiking Neurons: The Third Generation of Neural Network Models \nAbstract: The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch Pitts neurons (i.e. threshold gates) respectively sigmoidal gates. In particular it is shown that networks of spiking neurons are computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neuro biology.",
"neighbors": [
990,
1891,
1968,
2619
],
"mask": "Train"
},
{
"node_id": 1775,
"label": 1,
"text": "Title: GENETIC ALGORITHMS AND VERY FAST SIMULATED REANNEALING: A COMPARISON \nAbstract: We compare Genetic Algorithms (GA) with a functional search method, Very Fast Simulated Reannealing (VFSR), that not only is efficient in its search strategy, but also is statistically guaranteed to find the function optima. GA previously has been demonstrated to be competitive with other standard Boltzmann-type simulated annealing techniques. Presenting a suite of six standard test functions to GA and VFSR codes from previous studies, without any additional fine tuning, strongly suggests that VFSR can be expected to be orders of magnitude more efficient than GA. ",
"neighbors": [
163,
1728,
1771,
1793,
1795,
2082,
2116
],
"mask": "Train"
},
{
"node_id": 1776,
"label": 5,
"text": "Title: Extending Theory Refinement to M-of-N Rules \nAbstract: In recent years, machine learning research has started addressing a problem known as theory refinement. The goal of a theory refinement learner is to modify an incomplete or incorrect rule base, representing a domain theory, to make it consistent with a set of input training examples. This paper presents a major revision of the Either propositional theory refinement system. Two issues are discussed. First, we show how run time efficiency can be greatly improved by changing from a exhaustive scheme for computing repairs to an iterative greedy method. Second, we show how to extend Either to refine M-of-N rules. The resulting algorithm, Neither (New Either), is more than an order of magnitude faster and produces significantly more accurate results with theories that fit the M-of-N format. To demonstrate the advantages of Neither, we present experimental results from two real-world domains. ",
"neighbors": [
136,
1595,
2543
],
"mask": "Train"
},
{
"node_id": 1777,
"label": 3,
"text": "Title: Representing Aggregate Belief through the Competitive Equilibrium of a Securities Market \nAbstract: We consider the problem of belief aggregation: given a group of individual agents with probabilistic beliefs over a set of of uncertain events, formulate a sensible consensus or aggregate probability distribution over these events. Researchers have proposed many aggregation methods, although on the question of which is best the general consensus is that there is no consensus. We develop a market-based approach to this problem, where agents bet on uncertain events by buying or selling securities contingent on their outcomes. Each agent acts in the market so as to maximize expected utility at given securities prices, limited in its activity only by its own risk aversion. The equilibrium prices of goods in this market represent aggregate beliefs. For agents with constant risk aversion, we demonstrate that the aggregate probability exhibits several desirable properties, and is related to independently motivated techniques. We argue that the market-based approach provides a plausible mechanism for belief aggregation in multiagent systems, as it directly addresses self-motivated agent incentives for participation and for truthfulness, and can provide a decision-theoretic foundation for the \"expert weights\" often employed in centralized pooling techniques.",
"neighbors": [
1802,
2064
],
"mask": "Test"
},
{
"node_id": 1778,
"label": 2,
"text": "Title: SEMILINEAR PREDICTABILITY MINIMIZATION PRODUCES WELL-KNOWN FEATURE DETECTORS Neural Computation, 1996 (accepted) \nAbstract: Predictability minimization (PM | Schmidhuber, 1992) exhibits various intuitive and theoretical advantages over many other methods for unsupervised redundancy reduction. So far, however, there were only toy applications of PM. In this paper, we apply semilinear PM to static real world images and find: without a teacher and without any significant preprocessing, the system automatically learns to generate distributed representations based on well-known feature detectors, such as orientation sensitive edge detectors and off-center-on-surround-like structures, thus extracting simple features related to those considered useful for image pre-processing and compression.",
"neighbors": [
731,
2024
],
"mask": "Test"
},
{
"node_id": 1779,
"label": 4,
"text": "Title: REINFORCEMENT LEARNING WITH SELF-MODIFYING POLICIES \nAbstract: A learner's modifiable components are called its policy. An algorithm that modifies the policy is a learning algorithm. If the learning algorithm has modifiable components represented as part of the policy, then we speak of a self-modifying policy (SMP). SMPs can modify the way they modify themselves etc. They are of interest in situations where the initial learning algorithm itself can be improved by experience | this is what we call \"learning to learn\". How can we force some (stochastic) SMP to trigger better and better self-modifications? The success-story algorithm (SSA) addresses this question in a lifelong reinforcement learning context. During the learner's life-time, SSA is occasionally called at times computed according to SMP itself. SSA uses backtracking to undo those SMP-generated SMP-modifications that have not been empirically observed to trigger lifelong reward accelerations (measured up until the current SSA call | this evaluates the long-term effects of SMP-modifications setting the stage for later SMP-modifications). SMP-modifications that survive SSA represent a lifelong success history. Until the next SSA call, they build the basis for additional SMP-modifications. Solely by self-modifications our SMP/SSA-based learners solve a complex task in a partially observable environment (POE) whose state space is far bigger than most reported in the POE literature. ",
"neighbors": [
2007
],
"mask": "Train"
},
{
"node_id": 1780,
"label": 4,
"text": "Title: Machine Learning, Shifting Inductive Bias with Success-Story Algorithm, Adaptive Levin Search, and Incremental Self-Improvement \nAbstract: We study task sequences that allow for speeding up the learner's average reward intake through appropriate shifts of inductive bias (changes of the learner's policy). To evaluate long-term effects of bias shifts setting the stage for later bias shifts we use the \"success-story algorithm\" (SSA). SSA is occasionally called at times that may depend on the policy itself. It uses backtracking to undo those bias shifts that have not been empirically observed to trigger long-term reward accelerations (measured up until the current SSA call). Bias shifts that survive SSA represent a lifelong success history. Until the next SSA call, they are considered useful and build the basis for additional bias shifts. SSA allows for plugging in a wide variety of learning algorithms. We plug in (1) a novel, adaptive extension of Levin search and (2) a method for embedding the learner's policy modification strategy within the policy itself (incremental self-improvement). Our inductive transfer case studies involve complex, partially observable environments where traditional reinforcement learning fails. ",
"neighbors": [
2007
],
"mask": "Validation"
},
{
"node_id": 1781,
"label": 5,
"text": "Title: Learning Singly-Recursive Relations from Small Datasets \nAbstract: The inductive logic programming system LOPSTER was created to demonstrate the advantage of basing induction on logical implication rather than -subsumption. LOPSTER's sub-unification procedures allow it to induce recursive relations using a minimum number of examples, whereas inductive logic programming algorithms based on -subsumption require many more examples to solve induction tasks. However, LOPSTER's input examples must be carefully chosen; they must be along the same inverse resolution path. We hypothesize that an extension of LOPSTER can efficiently induce recursive relations without this requirement. We introduce a generalization of LOPSTER named CRUSTACEAN that has this capability and empirically evaluate its ability to induce recursive relations. ",
"neighbors": [
1819,
2663
],
"mask": "Validation"
},
{
"node_id": 1782,
"label": 4,
"text": "Title: Least-Squares Temporal Difference Learning \nAbstract: Submitted to NIPS-98 TD() is a popular family of algorithms for approximate policy evaluation in large MDPs. TD() works by incrementally updating the value function after each observed transition. It has two major drawbacks: it makes inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and = 0, the Least-Squares TD (LSTD) algorithm of Bradtke and Barto [5] eliminates all stepsize parameters and improves data efficiency. This paper extends Bradtke and Barto's work in three significant ways. First, it presents a simpler derivation of the LSTD algorithm. Second, it generalizes from = 0 to arbitrary values of ; at the extreme of = 1, the resulting algorithm is shown to be a practical formulation of supervised linear regression. Third, it presents a novel, intuitive interpretation of LSTD as a model-based reinforcement learning technique.",
"neighbors": [
134,
295,
565,
566,
2328
],
"mask": "Validation"
},
{
"node_id": 1783,
"label": 3,
"text": "Title: Importance Sampling \nAbstract: Technical Report No. 9805, Department of Statistics, University of Toronto Abstract. Simulated annealing | moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions | has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers. ",
"neighbors": [
48,
2348
],
"mask": "Train"
},
{
"node_id": 1784,
"label": 1,
"text": "Title: Genetic Programming and Redundancy \nAbstract: The Genetic Programming optimization method (GP) elaborated by John Koza [ Koza, 1992 ] is a variant of Genetic Algorithms. The search space of the problem domain consists of computer programs represented as parse trees, and the crossover operator is realized by an exchange of subtrees. Empirical analyses show that large parts of those trees are never used or evaluated which means that these parts of the trees are irrelevant for the solution or redundant. This paper is concerned with the identification of the redundancy occuring in GP. It starts with a mathematical description of the behavior of GP and the conclusions drawn from that description among others explain the \"size problem\" which denotes the phenomenon that the average size of trees in the population grows with time.",
"neighbors": [
55,
163,
380,
844,
1184,
2199,
2688,
2705
],
"mask": "Train"
},
{
"node_id": 1785,
"label": 1,
"text": "Title: A DISCUSSION ON SOME DESIGN PRINCIPLES FOR EFFICIENT CROSSOVER OPERATORS FOR GRAPH COLORING PROBLEMS \nAbstract: A year ago, a new metaheuristic for graph coloring problems was introduced by Costa, Hertz and Dubuis. They have shown, with computer experiments, some clear indication of the benefits of this approach. Graph coloring has many applications specially in the areas of scheduling, assignments and timetabling. The metaheuristic can be classified as a memetic algorithm since it is based on a population search in which periods of local optimization are interspersed with phases in which new configurations are created from earlier well-developed configurations or local minima of the previous iterative improvement process. The new population is created using crossover operators as in genetic algorithms. In this paper we discuss how a methodology inspired in Competitive Analysis may be relevant to the problem of designing better crossover operators. RESUMO: No ultimo ano uma nova metaheurstica para o problema de colora~c~ao em grafos foi apre-sentada por Costa, Hertz e Dubuis. Eles mostraram, com experimentos computacionais, algumas indica~c~oes claras dos benefcios desta nova tecnica. Colora~c~ao em grafos tem muitas aplica~c~oes, especialmente na area de programa~c~ao de tarefas, localiza~c~ao e horario . A metaheurstica pode ser classificada como algoritmo memetico desde que seja baseada em uma busca de popula~c~ao cujos perodos de otimiza~c~ao local s~ao intercalados com fases onde novas configura~c~oes s~ao criadas a partir de boas configura~c~oes ou mnimos locais de itera~c~oes anteriores. A nova popula~c~ao e criada usando opera~c~oes de crossover como em algoritmos geneticos. Neste artigo apresen-tamos como uma metodologia baseada em Competitive Analysis pode ser relevante para construir opera~c~oes de crossover. ",
"neighbors": [
2564
],
"mask": "Validation"
},
{
"node_id": 1786,
"label": 6,
"text": "Title: Decision Trees: Equivalence and Propositional Operations \nAbstract: For the well-known concept of decision trees as it is used for inductive inference we study the natural concept of equivalence: two decision trees are equivalent if and only if they represent the same hypothesis. We present a simple efficient algorithm to establish whether two decision trees are equivalent or not. The complexity of this algorithm is bounded by the product of the sizes of both decision trees. The hypothesis represented by a decision tree is essentially a boolean function, just like a proposition. Although every boolean function can be represented in this way, we show that disjunctions and conjunctions of decision trees can not efficiently be represented as decision trees, and simply shaped propositions may require exponential size for representation as de cision trees.",
"neighbors": [
2207
],
"mask": "Validation"
},
{
"node_id": 1787,
"label": 2,
"text": "Title: An integrated approach to the study of object features in visual recognition \nAbstract: We propose to assess the relevance of theories of synaptic modification as models of feature extraction in human vision, by using masks derived from synaptic weight patterns to occlude parts of the stimulus images in psychophysical experiments. In the experiment reported here, we found that a mask derived from principal component analysis of object images was more effective in reducing the generalization performance of human subjects than a mask derived from another method of feature extraction (BCM), based on higher-order statistics of the images. ",
"neighbors": [
359,
2499,
2676
],
"mask": "Train"
},
{
"node_id": 1788,
"label": 2,
"text": "Title: Path-integral evolution of chaos embedded in noise: Duffing neocortical analog \nAbstract: A two dimensional time-dependent Duffing oscillator model of macroscopic neocortex exhibits chaos for some ranges of parameters. We embed this model in moderate noise, typical of the context presented in real neocortex, using PATHINT, a non-Monte-Carlo path-integral algorithm that is particularly adept in handling nonlinear Fokker-Planck systems. This approach shows promise to investigate whether chaos in neocortex, as predicted by such models, can survive in noisy contexts. ",
"neighbors": [
1795,
2178,
2181
],
"mask": "Train"
},
{
"node_id": 1789,
"label": 2,
"text": "Title: Pruning with generalization based weight saliencies: flOBD, flOBS \nAbstract: The purpose of most architecture optimization schemes is to improve generalization. In this presentation we suggest to estimate the weight saliency as the associated change in generalization error if the weight is pruned. We detail the implementation of both an O(N )-storage scheme extending OBD, as well as an O(N 2 ) scheme extending OBS. We illustrate the viability of the approach on pre diction of a chaotic time series.",
"neighbors": [
2469
],
"mask": "Train"
},
{
"node_id": 1790,
"label": 1,
"text": "Title: Using Genetic Programming to Evolve Board Evaluation Functions \nAbstract: The purpose of most architecture optimization schemes is to improve generalization. In this presentation we suggest to estimate the weight saliency as the associated change in generalization error if the weight is pruned. We detail the implementation of both an O(N )-storage scheme extending OBD, as well as an O(N 2 ) scheme extending OBS. We illustrate the viability of the approach on pre diction of a chaotic time series.",
"neighbors": [
22,
415,
523,
565,
2334
],
"mask": "Validation"
},
{
"node_id": 1791,
"label": 4,
"text": "Title: Q2: Memory-based active learning for optimizing noisy continuous functions field expands beyond prediction and function\nAbstract: This paper introduces a new algorithm, Q2, for optimizing the expected output of a multi-input noisy continuous function. Q2 is designed to need only a few experiments, it avoids strong assumptions on the form of the function, and it is autonomous in that it requires little problem-specific tweaking. Four existing approaches to this problem (response surface methods, numerical optimization, supervised learning, and evolutionary methods) all have inadequacies when the requirement of \"black box\" behavior is combined with the need for few experiments. Q2 uses instance-based determination of a convex region of interest for performing experiments. In conventional instance-based approaches to learning, a neighborhood was defined by proximity to a query point. In contrast, Q2 defines the neighborhood by a new geometric procedure that captures the size and ",
"neighbors": [
682,
1559,
1859
],
"mask": "Train"
},
{
"node_id": 1792,
"label": 6,
"text": "Title: 21 Using n 2 classifier in constructive induction \nAbstract: In this paper, we propose a multi-classification approach for constructive induction. The idea of an improvement of classification accuracy is based on iterative modification of input data space. This process is independently repeated for each pair of n classes. Finally, it gives (n 2 n)/2 input data subspaces of attributes dedicated for optimal discrimination of appropriate pairs of classes. We use genetic algorithms as a constructive induction engine. A final classification is obtained by a weighted majority voting rule, according to n 2 - classifier approach. The computational experiment was performed on medical data set. The obtained results point out the advantage of using a multi-classification model (n 2 classifier) in constructive induction in relation to the analogous single-classifier approach.",
"neighbors": [
163,
430,
582,
2067
],
"mask": "Test"
},
{
"node_id": 1793,
"label": 2,
"text": "Title: STATISTICAL MECHANICS OF COMBAT WITH HUMAN FACTORS \nAbstract: This highly interdisciplinary project extends previous work in combat modeling and in control-theoretic descriptions of decision-making human factors in complex activities. A previous paper has established the first theory of the statistical mechanics of combat (SMC), developed using modern methods of statistical mechanics, baselined to empirical data gleaned from the National Training Center (NTC). This previous project has also established a JANUS(T)-NTC computer simulation/wargame of NTC, providing a statistical ``what-if '' capability for NTC scenarios. This mathematical formulation is ripe for control-theoretic extension to include human factors, a methodology previously developed in the context of teleoperated vehicles. Similar NTC scenarios differing at crucial decision points will be used for data to model the inuence of decision making on combat. The results may then be used to improve present human factors and C 2 algorithms in computer simulations/wargames. Our approach is to ``subordinate'' the SMC nonlinear stochastic equations, fitted to NTC scenarios, to establish the zeroth order description of that combat. In practice, an equivalent mathematical-physics representation is used, more suitable for numerical and formal work, i.e., a Lagrangian representation. Theoretically, these equations are nested within a larger set of nonlinear stochastic operator-equations which include C 3 human factors, e.g., supervisory decisions. In this study, we propose to perturb this operator theory about the SMC zeroth order set of equations. Then, subsets of scenarios fit to zeroth order, originally considered to be similarly degenerate, can be further split perturbatively to distinguish C 3 decision-making inuences. New methods of Very Fast Simulated Re-Annealing (VFSR), developed in the previous project, will be used for fitting these models to empirical data. ",
"neighbors": [
1775,
1795,
2082,
2178,
2181
],
"mask": "Train"
},
{
"node_id": 1794,
"label": 2,
"text": "Title: NONLINEAR NONEQUILIBRIUM NONQUANTUM NONCHAOTIC STATISTICAL MECHANICS OF NEOCORTICAL INTERACTIONS \nAbstract: The work in progress reported by Wright & Liley shows great promise, primarily because of their experimental and simulation paradigms. However, their tentative conclusion that macroscopic neocortex may be considered (approximately) a linear near-equilibrium system is premature and does not correspond to tentative conclusions drawn from other studies of neocortex. At this time, there exists an interdisciplinary multidimensional gradation on published studies of neocortex, with one primary dimension of mathematical physics represented by two extremes. At one extreme, there is much scientifically unsupported talk of chaos and quantum physics being responsible for many important macroscopic neocortical processes (involving many thousands to millions of neurons) (Wilczek, 1994). At another extreme, many non-mathematically trained neuroscientists uncritically lump all neocortical mathematical theory into one file, and consider only statistical averages of citations for opinions on the quality of that research (Nunez, 1995). In this context, it is important to appreciate that Wright and Liley (W&L) report on their scientifically sound studies on macroscopic neocortical function, based on simulation and a blend of sound theory and reproducible experiments. However, their pioneering work, given the absence of much knowledge of neocortex at this time, is open to criticism, especially with respect to their present inferences and conclusions. Their conclusion that EEG data exhibit linear near-equilibrium dynamics may very well be true, but only in the sense of focusing only on one local minima, possibly with individual-specific and physiological-state dependent ",
"neighbors": [
1795,
2178,
2545
],
"mask": "Validation"
},
{
"node_id": 1795,
"label": 2,
"text": "Title: Application of statistical mechanics methodol- ogy to term-structure bond-pricing models, Mathl. Comput. Modelling Application of\nAbstract: The work in progress reported by Wright & Liley shows great promise, primarily because of their experimental and simulation paradigms. However, their tentative conclusion that macroscopic neocortex may be considered (approximately) a linear near-equilibrium system is premature and does not correspond to tentative conclusions drawn from other studies of neocortex. At this time, there exists an interdisciplinary multidimensional gradation on published studies of neocortex, with one primary dimension of mathematical physics represented by two extremes. At one extreme, there is much scientifically unsupported talk of chaos and quantum physics being responsible for many important macroscopic neocortical processes (involving many thousands to millions of neurons) (Wilczek, 1994). At another extreme, many non-mathematically trained neuroscientists uncritically lump all neocortical mathematical theory into one file, and consider only statistical averages of citations for opinions on the quality of that research (Nunez, 1995). In this context, it is important to appreciate that Wright and Liley (W&L) report on their scientifically sound studies on macroscopic neocortical function, based on simulation and a blend of sound theory and reproducible experiments. However, their pioneering work, given the absence of much knowledge of neocortex at this time, is open to criticism, especially with respect to their present inferences and conclusions. Their conclusion that EEG data exhibit linear near-equilibrium dynamics may very well be true, but only in the sense of focusing only on one local minima, possibly with individual-specific and physiological-state dependent ",
"neighbors": [
1775,
1788,
1793,
1794,
2082,
2178,
2181,
2545
],
"mask": "Test"
},
{
"node_id": 1796,
"label": 1,
"text": "Title: Evaluating and Improving Steady State Evolutionary Algorithms on Constraint Satisfaction Problems \nAbstract: The work in progress reported by Wright & Liley shows great promise, primarily because of their experimental and simulation paradigms. However, their tentative conclusion that macroscopic neocortex may be considered (approximately) a linear near-equilibrium system is premature and does not correspond to tentative conclusions drawn from other studies of neocortex. At this time, there exists an interdisciplinary multidimensional gradation on published studies of neocortex, with one primary dimension of mathematical physics represented by two extremes. At one extreme, there is much scientifically unsupported talk of chaos and quantum physics being responsible for many important macroscopic neocortical processes (involving many thousands to millions of neurons) (Wilczek, 1994). At another extreme, many non-mathematically trained neuroscientists uncritically lump all neocortical mathematical theory into one file, and consider only statistical averages of citations for opinions on the quality of that research (Nunez, 1995). In this context, it is important to appreciate that Wright and Liley (W&L) report on their scientifically sound studies on macroscopic neocortical function, based on simulation and a blend of sound theory and reproducible experiments. However, their pioneering work, given the absence of much knowledge of neocortex at this time, is open to criticism, especially with respect to their present inferences and conclusions. Their conclusion that EEG data exhibit linear near-equilibrium dynamics may very well be true, but only in the sense of focusing only on one local minima, possibly with individual-specific and physiological-state dependent ",
"neighbors": [
833,
1030,
1917,
2001
],
"mask": "Validation"
},
{
"node_id": 1797,
"label": 1,
"text": "Title: Improving the Performance of Evolutionary Optimization by Dynamically Scaling the Evaluation Function \nAbstract: Traditional evolutionary optimization algorithms assume a static evaluation function, according to which solutions are evolved. Incremental evolution is an approach through which a dynamic evaluation function is scaled over time in order to improve the performance of evolutionary optimization. In this paper, we present empirical results that demonstrate the effectiveness of this approach for genetic programming. Using two domains, a two-agent pursuit-evasion game and the Tracker [6] trail-following task, we demonstrate that incremental evolution is most successful when applied near the beginning of an evolutionary run. We also show that incremental evolution can be successful when the intermediate evaluation functions are more difficult than the target evaluation function, as well as when they are easier than the target function. ",
"neighbors": [
1221,
1409,
2200
],
"mask": "Validation"
},
{
"node_id": 1798,
"label": 2,
"text": "Title: Toward a unified theory of spatiotemporal processing in the retina \nAbstract: Traditional evolutionary optimization algorithms assume a static evaluation function, according to which solutions are evolved. Incremental evolution is an approach through which a dynamic evaluation function is scaled over time in order to improve the performance of evolutionary optimization. In this paper, we present empirical results that demonstrate the effectiveness of this approach for genetic programming. Using two domains, a two-agent pursuit-evasion game and the Tracker [6] trail-following task, we demonstrate that incremental evolution is most successful when applied near the beginning of an evolutionary run. We also show that incremental evolution can be successful when the intermediate evaluation functions are more difficult than the target evaluation function, as well as when they are easier than the target function. ",
"neighbors": [
2120
],
"mask": "Validation"
},
{
"node_id": 1799,
"label": 1,
"text": "Title: On the Effectiveness of Evolutionary Search in High-Dimensional NK-Landscapes \nAbstract: NK-landscapes offer the ability to assess the performance of evolutionary algorithms on problems with different degrees of epistasis. In this paper, we study the performance of six algorithms in NK-landscapes with low and high dimension while keeping the amount of epistatic interactions constant. The results show that compared to genetic local search algorithms, the performance of standard genetic algorithms employing crossover or mutation significantly decreases with increasing problem size. Furthermore, with increasing K, crossover based algorithms are in both cases outperformed by mutation based algorithms. However, the relative performance differences between the algorithms grow significantly with the dimension of the search space, indicating that it is important to consider high-dimensional landscapes for evaluating the performance of evolutionary algorithms. ",
"neighbors": [
163,
727,
1424,
2205
],
"mask": "Validation"
},
{
"node_id": 1800,
"label": 3,
"text": "Title: Rational Belief Revision (Preliminary Report) \nAbstract: Theories of rational belief revision recently proposed by Alchourron, Gardenfors, Makin-son, and Nebel illuminate many important issues but impose unnecessarily strong standards for correct revisions and make strong assumptions about what information is available to guide revisions. We reconstruct these theories according to an economic standard of rationality in which preferences are used to select among alternative possible revisions. By permitting multiple partial specifications of preferences in ways closely related to preference-based nonmonotonic logics, the reconstructed theory employs information closer to that available in practice and offers more flexible ways of selecting revisions. We formally compare this new conception of rational belief revision with the original theories, adapt results about universal default theories to prove that there is unlikely to be any universal method of rational belief revision, and examine formally how different limitations on rationality affect belief revision.",
"neighbors": [
342,
1901,
1907,
1994,
1995,
2016
],
"mask": "Test"
},
{
"node_id": 1801,
"label": 2,
"text": "Title: A FAMILY OF FIXED-POINT ALGORITHMS FOR INDEPENDENT COMPONENT ANALYSIS \nAbstract: Independent Component Analysis (ICA) is a statistical signal processing technique whose main applications are blind source separation, blind deconvolution, and feature extraction. Estimation of ICA is usually performed by optimizing a 'contrast' function based on higher-order cumulants. In this paper, it is shown how almost any error function can be used to construct a contrast function to perform the ICA estimation. In particular, this means that one can use contrast functions that are robust against outliers. As a practical method for tnding the relevant extrema of such contrast functions, a txed-point iteration scheme is then introduced. The resulting algorithms are quite simple and converge fast and reliably. These algorithms also enable estimation of the independent components one-by-one, using a simple deation scheme. ",
"neighbors": [
576,
1067,
1814,
1821
],
"mask": "Train"
},
{
"node_id": 1802,
"label": 3,
"text": "Title: Toward a Market Model for Bayesian Inference \nAbstract: We present a methodology for representing probabilistic relationships in a general-equilibrium economic model. Specifically, we define a precise mapping from a Bayesian network with binary nodes to a market price system where consumers and producers trade in uncertain propositions. We demonstrate the correspondence between the equilibrium prices of goods in this economy and the probabilities represented by the Bayesian network. A computational market model such as this may provide a useful framework for investigations of belief aggregation, distributed probabilistic inference, resource allocation under uncertainty, and other problems of de centralized uncertainty.",
"neighbors": [
1777,
2064
],
"mask": "Train"
},
{
"node_id": 1803,
"label": 3,
"text": "Title: Change Point and Change Curve Modeling in Stochastic Processes and Spatial Statistics \nAbstract: 1 This article will appear in Volume 1, no. 4 (1994) of Journal of Applied Statistical Science. Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, GN-22, University of Washington, Seattle, WA 98195. This research was supported by ONR contract no. N-00014-91-J-1074, by NIH Grant no. 5R01HD26330-02, by the Ministere de la Recherche et de l'Espace, Paris, by the Universite de Paris VI, and by INRIA, Rocquencourt, France. Raftery thanks the latter two institutions, Paul Deheuvels and Gilles Celeux for hearty hospitality during his Paris sabbatical in which this article was written. This article was prepared for presentation at the Conference on Applied Change Point Analysis, University of Maryland-Baltimore, March 17-18, 1993. Parts of this article review collaborative research with others to whom I would like to express my appreciation, namely Volkan Akman, Jeff Banfield, Nhu Le, Steven Lewis, Doug Martin, Fionn Murtagh, Ross Taplin and Simon Tavare. ",
"neighbors": [
84,
99,
1913
],
"mask": "Validation"
},
{
"node_id": 1804,
"label": 0,
"text": "Title: CHARADE: a Platform for Emergencies Management Systems \nAbstract: This paper describe the functional architecture of CHARADE a software platform devoted to the development of a new generation of intelligent environmental decision support systems. The CHARADE platform is based on the a task-oriented approach to system design and on the exploitation of a new architecture for problem solving, that integrates case-based reasoning and constraint reasoning. The platform is developed in an objectoriented environment and upon that a demonstrator will be developed for managing first intervention attack to forest fires.",
"neighbors": [
1805,
2289
],
"mask": "Train"
},
{
"node_id": 1805,
"label": 0,
"text": "Title: Planning in a Complex Real Domain \nAbstract: Dimensions of complexity raised during the definition of a system aimed at supporting the planning of initial attack to forest fires are presented and discussed. The complexity deriving from the highly dynamic and unpredictable domain of forest fire, the one realated to the individuation and integration of planning techniques suitable to this domain, the complexity of addressing the problem of taking into account the role of the user to be supported by the system and finally the complexity of an architecture able to integrate different subsystems. In particular we focus on the severe constraints to the definition of a planning approach posed by the fire fighting domain, constraints which cannot be satisfied completely by any of the current planning paradigms. We propose an approach based on the integratation of skeletal planning and case based reasoning techniques with constraint reasoning. More specifically temporal constraints are used in two steps of the planning process: plan fitting and adaptation, and resource scheduling. Work on the development of the system software architecture with a OOD methodology is in progress. ",
"neighbors": [
1804,
2289
],
"mask": "Test"
},
{
"node_id": 1806,
"label": 2,
"text": "Title: MBP on T0: mixing floating- and fixed-point formats in BP learning \nAbstract: We examine the efficient implementation of back prop type algorithms on T0 [4], a vector processor with a fixed point engine, designed for neural network simulation. A matrix formulation of back prop, Matrix Back Prop [1], has been shown to be very efficient on some RISCs [2]. Using Matrix Back Prop, we achieve an asymptotically optimal performance on T0 (about 0.8 GOPS) for both forward and backward phases, which is not possible with the standard on-line method. Since high efficiency is futile if convergence is poor (due to the use of fixed point arithmetic), we use a mixture of fixed and floating point operations. The key observation is that the precision of fixed point is sufficient for good convergence, if the range is appropriately chosen. Though the most expensive computations are implemented in fixed point, we achieve a rate of convergence that is comparable to the floating point version. The time taken for conversion between fixed and floating point is also shown to be reasonable. ",
"neighbors": [
2279,
2570
],
"mask": "Train"
},
{
"node_id": 1807,
"label": 1,
"text": "Title: A Preliminary Investigation of Evolution as a Form Design Strategy \nAbstract: We describe the preliminary version of our investigative software, GGE Generative Genetic Explorer, in which genetic operations interact with Au-toCAD to generate novel 3D forms for the architect. GGE allows us to asess how evolutionary algorithms should be tailored to suit Architecture CAD tasks. ",
"neighbors": [
163,
2490
],
"mask": "Validation"
},
{
"node_id": 1808,
"label": 6,
"text": "Title: Where Do SE-trees Perform? (Part I) \nAbstract: As a classifier, a Set Enumeration (SE) tree can be viewed as a generalization of decision trees. We empirically characterize domains in which SE-trees are particularly advantageous relative to decision trees. Specifically, we show that: ",
"neighbors": [
638,
2210
],
"mask": "Train"
},
{
"node_id": 1809,
"label": 0,
"text": "Title: Rule Generation and Compaction in the wwtp \nAbstract: In this paper we discuss our approach to learning classification rules from data. We sketch out two modules of our architecture, namely LINNEO + and GAR. LINNEO + , which is a knowledge acquisition tool for ill-structured domains automatically generating classes from examples that incrementally works with an unsupervised strategy. LINNEO + 's output, a representation of the conceptual structure of the domain in terms of classes, is the input to GAR that is used to generate a set of classification rules for the original training set. GAR can generate both conjunctive and disjunctive rules. Herein we present an application of these techniques to data obtained from a real wastewater treatment plant in order to help the construction of a rule base. This rule will be used for a knowledge-based system that aims to supervise the whole process. ",
"neighbors": [
2071,
2585
],
"mask": "Train"
},
{
"node_id": 1810,
"label": 2,
"text": "Title: Computation and Psychophysics of Sensorimotor Integration \nAbstract: In this paper we discuss our approach to learning classification rules from data. We sketch out two modules of our architecture, namely LINNEO + and GAR. LINNEO + , which is a knowledge acquisition tool for ill-structured domains automatically generating classes from examples that incrementally works with an unsupervised strategy. LINNEO + 's output, a representation of the conceptual structure of the domain in terms of classes, is the input to GAR that is used to generate a set of classification rules for the original training set. GAR can generate both conjunctive and disjunctive rules. Herein we present an application of these techniques to data obtained from a real wastewater treatment plant in order to help the construction of a rule base. This rule will be used for a knowledge-based system that aims to supervise the whole process. ",
"neighbors": [
1766
],
"mask": "Validation"
},
{
"node_id": 1811,
"label": 2,
"text": "Title: Disambiguation and Grammar as Emergent Soft Constraints \nAbstract: When reading a sentence such as \"The diplomat threw the ball in the ballpark for the princess\" our interpretation changes from a dance event to baseball and back to dance. Such on-line disambiguation happens automatically and appears to be based on dynamically combining the strengths of association between the keywords and the two senses. Subsymbolic neural networks are very good at modeling such behavior. They learn word meanings as soft constraints on interpretation, and dynamically combine these constraints to form the most likely interpretation. On the other hand, it is very difficult to show how systematic language structures such as relative clauses could be processed in such a system. The network would only learn to associate them to specific contexts and would not be able to process new combinations of them. A closer look at understanding embedded clauses shows that humans are not very systematic in processing grammatical structures either. For example, \"The girl who the boy who the girl who lived next door blamed hit cried\" is very difficult to understand, whereas \"The car that the man who the dog that had rabies bit drives is in the garage\" is not. This difference emerges from the same semantic constraints that are at work in the disambiguation task. In this chapter we will show how the subsymbolic parser can be combined with high-level control that allows the system to process novel combinations of relative clauses systematically, while still being sensitive to the semantic constraints. ",
"neighbors": [
204,
427,
2410
],
"mask": "Train"
},
{
"node_id": 1812,
"label": 2,
"text": "Title: GENERALIZATION PERFORMANCE OF BACKPROPAGATION LEARNING ON A SYLLABIFICATION TASK \nAbstract: We investigated the generalization capabilities of backpropagation learning in feed-forward and recurrent feed-forward connectionist networks on the assignment of syllable boundaries to orthographic representations in Dutch (hyphenation). This is a difficult task because phonological and morphological constraints interact, leading to ambiguity in the input patterns. We compared the results to different symbolic pattern matching approaches, and to an exemplar-based generalization scheme, related to a k-nearest neighbour approach, but using a similarity metric weighed by the relative information entropy of positions in the training patterns. Our results indicate that the generalization performance of backpropagation learning for this task is not better than that of the best symbolic pattern matching approaches, and of exemplar-based generalization. ",
"neighbors": [
783,
785,
862,
1155,
1407,
1513,
2364
],
"mask": "Validation"
},
{
"node_id": 1813,
"label": 2,
"text": "Title: Pruning Strategies for the MTiling Constructive Learning Algorithm \nAbstract: We present a framework for incorporating pruning strategies in the MTiling constructive neural network learning algorithm. Pruning involves elimination of redundant elements (connection weights or neurons) from a network and is of considerable practical interest. We describe three elementary sensitivity based strategies for pruning neurons. Experimental results demonstrate a moderate to significant reduction in the network size without compromising the network's generalization performance. ",
"neighbors": [
503,
1818,
2393
],
"mask": "Train"
},
{
"node_id": 1814,
"label": 2,
"text": "Title: Independent Component Analysis by General Non-linear Hebbian-like Learning Rules \nAbstract: A number of neural learning rules have been recently proposed for Independent Component Analysis (ICA). The rules are usually derived from information-theoretic criteria such as maximum entropy or minimum mutual information. In this paper, we show that in fact, ICA can be performed by very simple Hebbian or anti-Hebbian learning rules, which may have only weak relations to such information-theoretical quantities. Rather suprisingly, practically any non-linear function can be used in the learning rule, provided only that the sign of the Hebbian/anti-Hebbian term is chosen correctly. In addition to the Hebbian-like mechanism, the weight vector is here constrained to have unit norm, and the data is preprocessed by prewhitening, or sphering. These results imply that one can choose the non-linearity so as to optimize desired statistical or numerical criteria.",
"neighbors": [
570,
576,
834,
1067,
1801
],
"mask": "Validation"
},
{
"node_id": 1815,
"label": 2,
"text": "Title: Submitted to Circuits, Systems and Signal Processing Neural Network Constructive Algorithms: Trading Generalization for Learning Efficiency? \nAbstract: There are currently several types of constructive, or growth, algorithms available for training a feed-forward neural network. This paper describes and explains the main ones, using a fundamental approach to the multi-layer perceptron problem-solving mechanisms. The claimed convergence properties of the algorithms are verified using just two mapping theorems, which consequently enables all the algorithms to be unified under a basic mechanism. The algorithms are compared and contrasted and the deficiencies of some highlighted. The fundamental reasons for the actual success of these algorithms are extracted, and used to suggest where they might most fruitfully be applied. A suspicion that they are not a panacea for all current neural network difficulties, and that one must somewhere along the line pay for the learning efficiency they promise, is developed into an argument that their generalization abilities will lie on average below that of back-propagation. ",
"neighbors": [
238,
253,
489,
2670,
2671
],
"mask": "Validation"
},
{
"node_id": 1816,
"label": 4,
"text": "Title: Generalized Prioritized Sweeping \nAbstract: Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for dealing with large state spaces. We apply this method for generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping. ",
"neighbors": [
558,
559,
566,
1934,
2485
],
"mask": "Train"
},
{
"node_id": 1817,
"label": 2,
"text": "Title: Selection of Distance Metrics and Feature Subsets for k-Nearest Neighbor Classifiers \nAbstract: Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for dealing with large state spaces. We apply this method for generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping. ",
"neighbors": [
2464
],
"mask": "Train"
},
{
"node_id": 1818,
"label": 2,
"text": "Title: Constructive Neural Network Learning Algorithms for Multi-Category Real-Valued Pattern Classification \nAbstract: Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for dealing with large state spaces. We apply this method for generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping. ",
"neighbors": [
1813,
2029,
2073
],
"mask": "Validation"
},
{
"node_id": 1819,
"label": 5,
"text": "Title: The Difficulties of Learning Logic Programs with Cut \nAbstract: As real logic programmers normally use cut (!), an effective learning procedure for logic programs should be able to deal with it. Because the cut predicate has only a procedural meaning, clauses containing cut cannot be learned using an extensional evaluation method, as is done in most learning systems. On the other hand, searching a space of possible programs (instead of a space of independent clauses) is unfeasible. An alternative solution is to generate first a candidate base program which covers the positive examples, and then make it consistent by inserting cut where appropriate. The problem of learning programs with cut has not been investigated before and this seems to be a natural and reasonable approach. We generalize this scheme and investigate the difficulties that arise. Some of the major shortcomings are actually caused, in general, by the need for intensional evaluation. As a conclusion, the analysis of this paper suggests, on precise and technical grounds, that learning cut is difficult, and current induction techniques should probably be restricted to purely declarative logic languages.",
"neighbors": [
224,
1781,
2580
],
"mask": "Train"
},
{
"node_id": 1820,
"label": 2,
"text": "Title: The Gamma MLP for Speech Phoneme Recognition \nAbstract: We define a Gamma multi-layer perceptron (MLP) as an MLP with the usual synaptic weights replaced by gamma filters (as proposed by de Vries and Principe (de Vries & Principe 1992)) and associated gain terms throughout all layers. We derive gradient descent update equations and apply the model to the recognition of speech phonemes. We find that both the inclusion of gamma filters in all layers, and the inclusion of synaptic gains, improves the performance of the Gamma MLP. We compare the Gamma MLP with TDNN, Back-Tsoi FIR MLP, and Back-Tsoi IIR MLP architectures, and a local approximation scheme. We find that the Gamma MLP results in a substantial reduction in error rates. ",
"neighbors": [
2383,
2569
],
"mask": "Train"
},
{
"node_id": 1821,
"label": 2,
"text": "Title: One-unit Learning Rules for Independent Component Analysis \nAbstract: Neural one-unit learning rules for the problem of Independent Component Analysis (ICA) and blind source separation are introduced. In these new algorithms, every ICA neuron develops into a separator that tnds one of the independent components. The learning rules use very simple constrained Hebbian/anti-Hebbian learning in which decorrelating feedback may be added. To speed up the convergence of these stochastic gradient descent rules, a novel com putationally ecient txed-point algorithm is introduced.",
"neighbors": [
834,
1801
],
"mask": "Train"
},
{
"node_id": 1822,
"label": 2,
"text": "Title: Book Review New Kids on the Block way in the field of connectionist modeling. The\nAbstract: Connectionist Models is a collection of forty papers representing a wide variety of research topics in connectionism. The book is distinguished by a single feature: the papers are almost exclusively contributions of graduate students active in the field. The students were selected by a rigorous review process and participated in a two week long summer school devoted to connectionism 2 . As the ambitious editors state in the foreword: These are bold claims and, if true, the reader is presented with an exciting opportunity to sample the frontiers of connectionism. Their words imply two ways to approach the book. The book must be read not just as a random collection of scientific papers, but also as a challenge to evaluate a controversial field. 2 This summer school is actually the third in a series, previous ones being held in 1986 and 1988. The proceedings of the 1988 summer school (which I had the priviledge of participating in) are reviewed by Nigel Goddard in [4]. Continuing the pattern, a fourth school is scheduled to be held in 1993 in Boulder, CO. ",
"neighbors": [
1656,
2662
],
"mask": "Test"
},
{
"node_id": 1823,
"label": 6,
"text": "Title: The Complexity of Theory Revision \nAbstract: A knowledge-based system uses its database (a.k.a. its \"theory\") to produce answers to the queries it receives. Unfortunately, these answers may be incorrect if the underlying theory is faulty. Standard \"theory revision\" systems use a given set of \"labeled queries\" (each a query paired with its correct answer) to transform the given theory, by adding and/or deleting either rules and/or antecedents, into a related theory that is as accurate as possible. After formally defining the theory revision task and bounding its sample complexity, this paper addresses the task's computational complexity. It first proves that, unless P = N P , no polynomial time algorithm can identify the optimal theory, even given the exact distribution of queries, except in the most trivial of situations. It also shows that, except in such trivial situations, no polynomial-time algorithm can produce a theory whose inaccuracy is even close (i.e., within a particular polynomial factor) to optimal. These results justify the standard practice of hill-climbing to a locally-optimal theory, based on a given set of labeled sam ples.",
"neighbors": [
2487,
2580
],
"mask": "Test"
},
{
"node_id": 1824,
"label": 6,
"text": "Title: Constructing Conjunctions using Systematic Search on Decision Trees \nAbstract: This paper investigates a dynamic path-based method for constructing conjunctions as new attributes for decision tree learning. It searches for conditions (attribute-value pairs) from paths to form new attributes. Compared with other hypothesis-driven new attribute construction methods, the new idea of this method is that it carries out systematic search with pruning over each path of a tree to select conditions for generating a conjunction. Therefore, conditions for constructing new attributes are dynamically decided during search. Empirically evaluation in a set of artificial and real-world domains shows that the dynamic path-based method can improve the performance of selective decision tree learning in terms of both higher prediction accuracy and lower theory complexity. In addition, it shows some performance advantages over a fixed path-based method and a fixed rule-based method for learning decision trees. ",
"neighbors": [
102,
1595,
2006,
2675
],
"mask": "Train"
},
{
"node_id": 1825,
"label": 2,
"text": "Title: GUESSING CAN OUTPERFORM MANY LONG TIME LAG ALGORITHMS \nAbstract: Numerous recent papers focus on standard recurrent nets' problems with long time lags between relevant signals. Some propose rather sophisticated, alternative methods. We show: many problems used to test previous methods can be solved more quickly by random weight guessing. ",
"neighbors": [
68,
121,
978,
979,
1966
],
"mask": "Train"
},
{
"node_id": 1826,
"label": 1,
"text": "Title: A Computational View of Population Genetics (preliminary version) \nAbstract: This paper contributes to the study of nonlinear dynamical systems from a computational perspective. These systems are inherently more powerful than their linear counterparts (such as Markov chains), which have had a wide impact in Computer Science, and they seem likely to play an increasing role in future. However, there are as yet no general techniques available for handling the computational aspects of discrete nonlinear systems, and even the simplest examples seem very hard to analyze. We focus in this paper on a class of quadratic systems that are widely used as a model in population genetics and also in genetic algorithms. These systems describe a process where random matings occur between parental chromosomes via a mechanism known as \"crossover\": i.e., children inherit pieces of genetic material from different parents according to some random rule. Our results concern two fundamental quantitative properties of crossover systems: 1. We develop a general technique for computing the ",
"neighbors": [
689,
2630
],
"mask": "Train"
},
{
"node_id": 1827,
"label": 6,
"text": "Title: Efficient Algorithms for Inverting Evolution \nAbstract: Evolution is a stochastic process which operates on the DNA of species. The evolutionary process leaves tell-tale signs in the DNA which can be used to construct phylogenies, or evolutionary trees, for a set of species. Maximum Likelihood Estimations (MLE) methods seek the evolutionary tree which is most likely to have produced the DNA under consideration. While these methods are widely accepted and intellectually satisfying, they have been computationally intractable. In this paper, we address the intractability of MLE methods as follows. We introduce a metric on stochastic process models of evolution. We show that this metric is meaningful by proving that in order for any algorithm to distinguish between two stochatic models that are close according to this metric, it needs to be given a lot of observations. We complement this result with a simple and efficient algorithm for inverting the stochastic process of evolution, that is, for building the tree from observations on the DNA of the species. Our result can be viewed as a result on the PAC-learnability of the class of distributions produced by tree-like processes. Though there have been many heuristics suggested for this problem, our algorithm is the first one with a guaranteed convergence rate, and further, this rate is within a polynomial of the lower-bound rate we establish. Ours is also the the first polynomial-time algorithm which is guaranteed to converge at all to the correct tree. ",
"neighbors": [
299,
574,
1962,
2083,
2110,
2224
],
"mask": "Train"
},
{
"node_id": 1828,
"label": 4,
"text": "Title: Learning in Continuous Domains with Delayed Rewards \nAbstract: Much has been done to develop learning techniques for delayed reward problems in worlds where the actions and/or states are approximated by discrete representations. Although this is acceptable in some applications there are many more situations where such an approximation is difficult and unnatural. For instance, in applications such as robotic,s where real machines interact with the real world, learning techniques that use real valued continuous quantities are required. Presented in this paper is an extension to Q-learning that uses both real valued states and actions. This is achieved by introducing activation strengths to each actuator system of the robot. This allow all actuators to be active to some continuous amount simultaneously. Learning occurs by incrementally adapting both the expected future reward to goal evaluation function and the gradients of that function with respect to each actuator system. ",
"neighbors": [
567,
2014,
2018
],
"mask": "Test"
},
{
"node_id": 1829,
"label": 3,
"text": "Title: On the Connection Between Stochastic Smoothing, Filtering and Estimation with Incomplete Data \nAbstract: Connections between stochastic smoothing/filtering and estimation with incomplete data are investigated. It is shown, under the right censoring scheme, that the Kaplan-Meier estimator can be characterized as a moment estimate based on a stochastic filter/smoother(a pseudo-filter/smoother). Motivated by this result, a potentially useful martingale approach for estimation and convergence with incomplete data is proposed: estimators are characterized as pseudo-stochastic smoothers (which sometimes reduce to filters), which are described by a (system of) stochastic integral equation(s); recent results in convergence of stochastic integrals and stochastic differential equations are then applied to address convergence issues. As an illustration, the double censoring problem is revisited under this framework, a closed form estimator is proposed and convergence properties studied. Martingale theory plays a vital role in the entire analysis. This approach is in essence a self-consistency method. 1 ",
"neighbors": [
2421
],
"mask": "Validation"
},
{
"node_id": 1830,
"label": 0,
"text": "Title: Inductive CBR for Customer Support \nAbstract: Over the past few years, the telecommunications paradigm has been shifting rapidly from hardware to middleware. In particular, the traditional issues of service characteristics and network control are being replaced by the modern, customer-driven issues of network and service management (e.g., electronic commerce, one-stop shops). An area of service management which has extremely high visibility and negative impact when managed badly is that of problem handling. Problem handling is a very knowledge intensive activity, particularly nowadays with the increase in number and complexity of services becoming available. Trials at several BT support centres have already demonstrated the potential of case-based reasoning technology in improving current practice for problem detection and diagnosis. A major cost involved in implementing a case-based system is in the manual building of the initial case base and then in the subsequent maintenance of that case base over time. This paper shows how inductive machine learning can be combined with case-based reasoning to produce an intelligent system capable of both extracting knowledge from raw data automatically and reasoning from that knowledge. In addition to discovering knowledge in existing data repositories, the integrated system may be used to acquire and revise knowledge continually. Experiments with the suggested integrated approach demonstrate promise and justify the next step. ",
"neighbors": [
2466,
2585
],
"mask": "Train"
},
{
"node_id": 1831,
"label": 1,
"text": "Title: Some Training Subset Selection Methods for Supervised Learning in Genetic Programming \nAbstract: When using the Genetic Programming (GP) Algorithm on a difficult problem with a large set of training cases, a large population size is needed and a very large number of function-tree evaluations must be carried out. This paper describes how to reduce the number of such evaluations by selecting a small subset of the training data set on which to actually carry out the GP algorithm. Three subset selection methods described in the paper are: Dynamic Subset Selection (DSS), using the current GP run to select `difficult' and/or disused cases, Historical Subset Selection (HSS), using previous GP runs, Random Subset Selection (RSS). GP, GP+DSS, GP+HSS, GP+RSS are compared on a large classification problem. GP+DSS can produce better results in less than 20% of the time taken by GP. GP+HSS can nearly match the results of GP, and, perhaps surprisingly, GP+RSS can occasionally approach the results of GP. GP and GP+DSS are then compared on a smaller problem, and a hybrid Dynamic Fitness Function (DFF), based on DSS, is proposed.",
"neighbors": [
163,
1832,
1836
],
"mask": "Train"
},
{
"node_id": 1832,
"label": 1,
"text": "Title: Tackling the Boolean Even N Parity Problem with Genetic Programming and Limited-Error Fitness standard GP\nAbstract: This paper presents Limited Error Fitness (LEF), a modification to the standard supervised learning approach in Genetic Programming (GP), in which an individual's fitness score is based on how many cases remain uncovered in the ordered training set after the individual exceeds an error limit. The training set order and the error limit are both altered dynamically in response to the performance of the fittest individual in the previous generation. ",
"neighbors": [
55,
415,
1831,
1836,
2334
],
"mask": "Train"
},
{
"node_id": 1833,
"label": 6,
"text": "Title: Pruning Decision Trees with Misclassification Costs \nAbstract: We describe an experimental study of pruning methods for decision tree classifiers when the goal is minimizing loss rather than error. In addition to two common methods for error minimization, CART's cost-complexity pruning and C4.5's error-based pruning, we study the extension of cost-complexity pruning to loss and one pruning variant based on the Laplace correction. We perform an empirical comparison of these methods and evaluate them with respect to loss. We found that applying the Laplace correction to estimate the probability distributions at the leaves was beneficial to all pruning methods. Unlike in error minimization, and somewhat surprisingly, performing no pruning led to results that were on par with other methods in terms of the evaluation criteria. The main advantage of pruning was in the reduction of the decision tree size, sometimes by a factor of ten. While no method dominated others on all datasets, even for the same domain different pruning mechanisms are better for different loss matrices.",
"neighbors": [
2367
],
"mask": "Train"
},
{
"node_id": 1834,
"label": 1,
"text": "Title: Genetic Algorithms, Tournament Selection, and the Effects of Noise \nAbstract: IlliGAL Report No. 95006 July 1995 ",
"neighbors": [
1905
],
"mask": "Test"
},
{
"node_id": 1835,
"label": 6,
"text": "Title: Implementation Issues in the Fourier Transform Algorithm \nAbstract: The Fourier transform of boolean functions has come to play an important role in proving many important learnability results. We aim to demonstrate that the Fourier transform techniques are also a useful and practical algorithm in addition to being a powerful theoretical tool. We describe the more prominent changes we have introduced to the algorithm, ones that were crucial and without which the performance of the algorithm would severely deteriorate. One of the benefits we present is the confidence level for each prediction which measures the likelihood the prediction is correct.",
"neighbors": [
2011,
2182
],
"mask": "Validation"
},
{
"node_id": 1836,
"label": 1,
"text": "Title: Small Populations over Many Generations can beat Large Populations over Few Generations in Genetic Programming \nAbstract: This paper looks at the use of small populations in Genetic Programming (GP), where the trend in the literature appears to be towards using as large a population as possible, which requires more memory resources and CPU-usage is less efficient. Dynamic Subset Selection (DSS) and Limited Error Fitness (LEF) are two different, adaptive variations of the standard supervised learning method used in GP. This paper compares the performance of GP, GP+DSS, and GP+LEF, on a 958 case classification problem, using a small population size of 50. A similar comparison between GP and GP+DSS is done on a larger and messier 3772 case classification problem. For both problems, GP+DSS with the small population size consistently produces a better answer using fewer tree evaluations than other runs using much larger populations. Even standard GP can be seen to perform well with the much smaller population size, indicating that it is certainly worth an exploratory run or three with a small population size before assuming that a large population size is necessary. It is an interesting notion that smaller can mean faster and better. ",
"neighbors": [
415,
1831,
1832,
2334
],
"mask": "Train"
},
{
"node_id": 1837,
"label": 5,
"text": "Title: Data Mining and Knowledge Discovery, Adaptive Fraud Detection \nAbstract: One method for detecting fraud is to check for suspicious changes in user behavior. This paper describes the automatic design of user profiling methods for the purpose of fraud detection, using a series of data mining techniques. Specifically, we use a rule-learning program to uncover indicators of fraudulent behavior from a large database of customer transactions. Then the indicators are used to create a set of monitors, which profile legitimate customer behavior and indicate anomalies. Finally, the outputs of the monitors are used as features in a system that learns to combine evidence to generate high-confidence alarms. The system has been applied to the problem of detecting cellular cloning fraud based on a database of call records. Experiments indicate that this automatic approach performs better than hand-crafted methods for detecting fraud. Furthermore, this approach can adapt to the changing conditions typical of fraud detection environments. ",
"neighbors": [
382,
2132
],
"mask": "Validation"
},
{
"node_id": 1838,
"label": 3,
"text": "Title: Learning in neural networks with Bayesian prototypes \nAbstract: Given a set of samples of a probability distribution on a set of discrete random variables, we study the problem of constructing a good approximative neural network model of the underlying probability distribution. Our approach is based on an unsupervised learning scheme where the samples are first divided into separate clusters, and each cluster is then coded as a single vector. These Bayesian prototype vectors consist of conditional probabilities representing the attribute-value distribution inside the corresponding cluster. Using these prototype vectors, it is possible to model the underlying joint probability distribution as a simple Bayesian network (a tree), which can be realized as a feedforward neural network capable of probabilistic reasoning. In this framework, learning means choosing the size of the prototype set, partitioning the samples into the corresponding clusters, and constructing the cluster prototypes. We describe how the prototypes can be determined, given a partition of the samples, and present a method for evaluating the likelihood of the corresponding Bayesian tree. We also present a greedy heuristic for searching through the space of different partition schemes with different numbers of clusters, aiming at an optimal approximation of the probability distribution. ",
"neighbors": [
485,
719,
1908,
2380,
2514
],
"mask": "Test"
},
{
"node_id": 1839,
"label": 1,
"text": "Title: Context Preserving Crossover in Genetic Programming. \nAbstract: This paper introduces two new crossover operators for Genetic Programming (GP). Contrary to the regular GP crossover, the operators presented attempt to preserve the context in which subtrees appeared in the parent trees. A simple coordinate scheme for nodes in an S-expression tree is proposed, and crossovers are only allowed between nodes with exactly or partially matching coordinates. ",
"neighbors": [
290,
2688,
2705
],
"mask": "Validation"
},
{
"node_id": 1840,
"label": 1,
"text": "Title: Hierarchical Genetic Programming (HGP) extensions discover, modify, and exploit subroutines to accelerate the evolution of\nAbstract: A fundamental problem in learning from observation and interaction with an environment is defining a good representation, that is a representation which captures the underlying structure and functionality of the domain. This chapter discusses an extension of the genetic programming (GP) paradigm based on the idea that subroutines obtained from blocks of good representations act as building blocks and may enable a faster evolution of even better representations. This GP extension algorithm is called adaptive representation through learning (ARL). It has built-in mechanisms for (1) creation of new subroutines through discovery and generalization of blocks of code; (2) deletion of subroutines. The set of evolved subroutines extracts common knowledge emerging during the evolutionary process and acquires the necessary structure for solving the problem. ARL was successfully tested on the problem of controlling an agent in a dynamic and non-deterministic environment. Results with the automatic discovery of subroutines show the potential to better scale up the GP technique to complex problems. While HGP approaches improve the efficiency and scalability of genetic programming (GP) for many applications [Koza, 1994b], several issues remain unresolved. The scalability of HGP techniques could be further improved by solving two such issues. One is the characterization of the value of subroutines. Current methods for HGP do not attempt to decide what is relevant, i.e. which blocks of code or subroutines may be worth giving special attention, but employ genetic operations on subroutines at random points. The other issue is the time-course of the generation of new subroutines. Current HGP techniques do not make informed choices to automatically decide when creation or modification of subroutines is advantageous or necessary. The Adaptive Representation through Learning (ARL) algorithm copes with both of these problems. The what issue is addressed by relying on local measures such as parent-offspring differential fitness and block activation in order to discover useful subroutines and by learning which subroutines are useful. The when issue is addressed by relying on global population measures such as population entropy in order to predict when search reaches local optima and escape them. ARL co-evolves a set of subroutines which extends the set of problem primitives. ",
"neighbors": [
2688,
2705
],
"mask": "Train"
},
{
"node_id": 1841,
"label": 4,
"text": "Title: Learning Without State-Estimation in Partially Observable Markovian Decision Processes \nAbstract: Reinforcement learning (RL) algorithms provide a sound theoretical basis for building learning control architectures for embedded agents. Unfortunately all of the theory and much of the practice (see Barto et al., 1983, for an exception) of RL is limited to Marko-vian decision processes (MDPs). Many real-world decision tasks, however, are inherently non-Markovian, i.e., the state of the environment is only incompletely known to the learning agent. In this paper we consider only partially observable MDPs (POMDPs), a useful class of non-Markovian decision processes. Most previous approaches to such problems have combined computationally expensive state-estimation techniques with learning control. This paper investigates learning in POMDPs without resorting to any form of state estimation. We present results about what TD(0) and Q-learning will do when applied to POMDPs. It is shown that the conventional discounted RL framework is inadequate to deal with POMDPs. Finally we develop a new framework for learning without state-estimation in POMDPs by including stochastic policies in the search space, and by defining the value or utility of a dis tribution over states.",
"neighbors": [
564,
1741
],
"mask": "Train"
},
{
"node_id": 1842,
"label": 3,
"text": "Title: Fall Diagnosis using Dynamic Belief Networks \nAbstract: The task is to monitor walking patterns and give early warning of falls using foot switch and mercury trigger sensors. We describe a dynamic belief network model for fall diagnosis which, given evidence from sensor observations, outputs beliefs about the current walking status and makes predictions regarding future falls. The model represents possible sensor error and is parametrised to allow customisation to the individual being monitored.",
"neighbors": [
1268,
1757,
2341
],
"mask": "Train"
},
{
"node_id": 1843,
"label": 0,
"text": "Title: Goal-based Explanation Evaluation 1 \nAbstract: 1 I would like to thank my dissertation advisor, Roger Schank, for his very valuable guidance on this research, and to thank the Cognitive Science reviewers for their helpful comments on a draft of this paper. The research described here was conducted primarily at Yale University, supported in part by the Defense Advanced Research Projects Agency, monitored by the Office of Naval Research under contract N0014-85-K-0108 and by the Air Force Office of Scientific Research under contract F49620-88-C-0058. ",
"neighbors": [
2626
],
"mask": "Validation"
},
{
"node_id": 1844,
"label": 4,
"text": "Title: Two Methods for Hierarchy Learning in Reinforcement Environments \nAbstract: This paper describes two methods for hierarchically organizing temporal behaviors. The first is more intuitive: grouping together common sequences of events into single units so that they may be treated as individual behaviors. This system immediately encounters problems, however, because the units are binary, meaning the behaviors must execute completely or not at all, and this hinders the construction of good training algorithms. The system also runs into difficulty when more than one unit is (or should be) active at the same time. The second system is a hierarchy of transition values. This hierarchy dynamically modifies the values that specify the degree to which one unit should follow another. These values are continuous, allowing the use of gradient descent during learning. Furthermore, many units are active at the same time as part of the system's normal functionings.",
"neighbors": [
1845,
1979
],
"mask": "Test"
},
{
"node_id": 1845,
"label": 4,
"text": "Title: ON LEARNING HOW TO LEARN LEARNING STRATEGIES \nAbstract: This paper introduces the \"incremental self-improvement paradigm\". Unlike previous methods, incremental self-improvement encourages a reinforcement learning system to improve the way it learns, and to improve the way it improves the way it learns ..., without significant theoretical limitations | the system is able to \"shift its inductive bias\" in a universal way. Its major features are: (1) There is no explicit difference between \"learning\", \"meta-learning\", and other kinds of information processing. Using a Turing machine equivalent programming language, the system itself occasionally executes self-delimiting, initially highly random \"self-modification programs\" which modify the context-dependent probabilities of future action sequences (including future self-modification programs). (2) The system keeps only those probability modifications computed by \"useful\" self-modification programs: those which bring about more payoff (reward, reinforcement) per time than all previous self-modification programs. (3) The computation of payoff per time takes into account all the computation time required for learning | the entire system life is considered: boundaries between learning trials are ignored (if there are any). A particular implementation based on the novel paradigm is presented. It is designed to exploit what conventional digital machines are good at: fast storage addressing, arithmetic operations etc. Experiments illustrate the system's mode of operation. Keywords: Self-improvement, self-reference, introspection, machine-learning, reinforcement learning. Note: This is the revised and extended version of an earlier report from November 24, 1994. ",
"neighbors": [
68,
979,
1844,
1979
],
"mask": "Train"
},
{
"node_id": 1846,
"label": 2,
"text": "Title: A Neural Network Architecture for High-Speed Database Query Processing \nAbstract: Artificial neural networks (ANN), due to their inherent parallelism and potential fault tolerance, offer an attractive paradigm for robust and efficient implementations of large modern database and knowledge base systems. This paper explores a neural network model for efficient implementation of a database query system. The application of the proposed model to a high-speed library query system for retrieval of multiple items is based on partial match of the specified query criteria with the stored records. The performance of the ANN realization of the database query module is analyzed and compared with other techniques commonly in current computer systems. The results of this analysis suggest that the proposed ANN design offers an attractive approach for the realization of query modules in large database and knowledge base systems, especially for retrieval based on partial matches. ",
"neighbors": [
1847,
1927,
2537
],
"mask": "Train"
},
{
"node_id": 1847,
"label": 2,
"text": "Title: A Neural Network Architecture for Syntax Analysis \nAbstract: Artificial neural networks (ANN), due to their inherent parallelism and potential fault tolerance, offer an attractive paradigm for robust and efficient implementations of large modern database and knowledge base systems. This paper explores a neural network model for efficient implementation of a database query system. The application of the proposed model to a high-speed library query system for retrieval of multiple items is based on partial match of the specified query criteria with the stored records. The performance of the ANN realization of the database query module is analyzed and compared with other techniques commonly in current computer systems. The results of this analysis suggest that the proposed ANN design offers an attractive approach for the realization of query modules in large database and knowledge base systems, especially for retrieval based on partial matches. ",
"neighbors": [
1846,
1927
],
"mask": "Test"
},
{
"node_id": 1848,
"label": 6,
"text": "Title: On the Power of Equivalence Queries \nAbstract: In 1990, Angluin showed that no class exhibiting a combinatorial property called \"approximate fingerprints\" can be identified exactly using polynomially many Equivalence queries (of polynomial size). Here we show that this is a necessary condition: every class without approximate fingerprints has an identification strategy that makes a polynomial number of Equivalence queries. Furthermore, if the class is \"honest\" in a technical sense, the computational power required by the strategy is within the polynomial-time hierarchy, so proving non learnability is at least as hard as showing P 6= NP.",
"neighbors": [
2483
],
"mask": "Train"
},
{
"node_id": 1849,
"label": 5,
"text": "Title: Profile-Driven Instruction Level Parallel Scheduling with Application to Super Blocks \nAbstract: Code scheduling to exploit instruction level parallelism (ILP) is a critical problem in compiler optimization research, in light of the increased use of long-instruction-word machines. Unfortunately, optimum scheduling is com-putationally intractable, and one must resort to carefully crafted heuristics in practice. If the scope of application of a scheduling heuristic is limited to basic blocks, considerable performance loss may be incurred at block boundaries. To overcome this obstacle, basic blocks can be coalesced across branches to form larger regions such as super blocks. In the literature, these regions are typically scheduled using algorithms that are either oblivious to profile information (under the assumption that the process of forming the region has fully utilized the profile information), or use the profile information as an addendum to classical scheduling techniques. We believe that even for the simple case of linear code regions such as super blocks, additional performance improvement can be gained by utilizing the profile information in scheduling as well. We propose a general paradigm for converting any profile-insensitive list sched-uler to a profile-sensitive scheduler. Our technique is developed via a theoretical analysis of a simplified abstract model of the general problem of profile-driven scheduling over any acyclic code region, yielding a scoring measure for ranking branch instructions. The ranking digests the profile information and has the useful property that scheduling with respect to rank is provably good for minimizing the expected completion time of the region, within the limits of the abstraction. While the ranking scheme is computation-ally intractable in the most general case, it is practicable for super blocks and suggests the heuristic that we present in this paper for profile-driven scheduling of super blocks. Experiments show that our heuristic offers substantial performance improvement over prior methods on a range of integer benchmarks and several machine models. ",
"neighbors": [
2163
],
"mask": "Test"
},
{
"node_id": 1850,
"label": 1,
"text": "Title: Genetic Programming for Pedestrians \nAbstract: We propose an extension to the Genetic Programming paradigm which allows users of traditional Genetic Algorithms to evolve computer programs. To this end, we have to introduce mechanisms like transscription, editing and repairing into Genetic Programming. We demonstrate the feasibility of the approach by using it to develop programs for the prediction of sequences of integer numbers.",
"neighbors": [
163,
2554
],
"mask": "Validation"
},
{
"node_id": 1851,
"label": 2,
"text": "Title: Faster Learning in Multi-Layer Networks by Handling \nAbstract: Generalized delta rule, popularly known as back-propagation (BP) [9, 5] is probably one of the most widely used procedures for training multi-layer feed-forward networks of sigmoid units. Despite reports of success on a number of interesting problems, BP can be excruciatingly slow in converging on a set of weights that meet the desired error criterion. Several modifications for improving the learning speed have been proposed in the literature [2, 4, 8, 1, 6]. BP is known to suffer from the phenomenon of flat spots [2]. The slowness of BP is a direct consequence of these flat-spots together with the formulation of the BP Learning rule. This paper proposes a new approach to minimizing the error that is suggested by the mathematical properties of the conventional error function and that effectively handles flat-spots occurring in the output layer. The robustness of the proposed technique is demonstrated on a number of data-sets widely studied in the machine learning community. ",
"neighbors": [
503,
1896
],
"mask": "Validation"
},
{
"node_id": 1852,
"label": 3,
"text": "Title: On Sequential Simulation-Based Methods for Bayesian Filtering \nAbstract: In this report, we present an overview of sequential simulation-based methods for Bayesian filtering of nonlinear and non-Gaussian dynamic models. It includes in a general framework numerous methods proposed independently in various areas of science and proposes some original developments. ",
"neighbors": [
99,
2592
],
"mask": "Train"
},
{
"node_id": 1853,
"label": 6,
"text": "Title: 99-113. Construction of Phylogenetic Trees, Science, Fitting the Gene Lineage Into Its Species Lineage. A\nAbstract: 6] Farach, M. and Thorup, M. 1993. Fast Comparison of Evolutionary Trees, Technical Report 93-46, DIMACS, Rutgers University, Piscataway, NJ. ",
"neighbors": [
1861,
2320
],
"mask": "Train"
},
{
"node_id": 1854,
"label": 0,
"text": "Title: Case Retrieval Nets: Basic Ideas and Extensions \nAbstract: An efficient retrieval of a relatively small number of relevant cases from a huge case base is a crucial subtask of Case-Based Reasoning. In this article, we present Case Retrieval Nets (CRNs), a memory model that has recently been developed for this task. The main idea is to apply a spreading activation process to a net-like case memory in order to retrieve cases being similar to a posed query case. We summarize the basic ideas of CRNs, suggest some useful extensions, and present some initial experimental results which suggest that CRNs can successfully handle case bases larger than considered usually in the CBR community. ",
"neighbors": [
75,
1855,
1864,
1976,
2048,
2122,
2299,
2482,
2645
],
"mask": "Train"
},
{
"node_id": 1855,
"label": 0,
"text": "Title: Applying Case Retrieval Nets to Diagnostic Tasks in Technical Domains \nAbstract: This paper presents Objectdirected Case Retrieval Nets, a memory model developed for an application of Case-Based Reasoning to the task of technical diagnosis. The key idea is to store cases, i.e. observed symptoms and diagnoses, in a network and to enhance this network with an object model encoding knowledge about the devices in the application domain. ",
"neighbors": [
75,
1854,
1864,
2075,
2122,
2299,
2482
],
"mask": "Train"
},
{
"node_id": 1856,
"label": 3,
"text": "Title: Identifiability, Improper Priors and Gibbs Sampling for Generalized Linear Models \nAbstract: Alan E. Gelfand is a Professor in the Department of Statistics at the University of Connecti-cut, Storrs, CT 06269. Sujit K. Sahu is a Lecturer at the School of Mathematics, University of Wales, Cardiff, CF2 4YH, UK. The research of the first author was supported in part by NSF grant DMS 9301316 while the second author was supported in part by an EPSRC grant from UK. The authors thank Brad Carlin, Kate Cowles, Gareth Roberts and an anonymous referee for valuable comments. ",
"neighbors": [
2421
],
"mask": "Train"
},
{
"node_id": 1857,
"label": 3,
"text": "Title: Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification \nAbstract: Technical Report No. 9702, Department of Statistics, University of Toronto Abstract. Gaussian processes are a natural way of defining prior distributions over functions of one or more input variables. In a simple nonparametric regression problem, where such a function gives the mean of a Gaussian distribution for an observed response, a Gaussian process model can easily be implemented using matrix computations that are feasible for datasets of up to about a thousand cases. Hyperparameters that define the covariance function of the Gaussian process can be sampled using Markov chain methods. Regression models where the noise has a t distribution and logistic or probit models for classification applications can be implemented by sampling as well for latent values underlying the observations. Software is now available that implements these methods using covariance functions with hierarchical parameterizations. Models defined in this way can discover high-level properties of the data, such as which inputs are relevant to predicting the response. ",
"neighbors": [
125,
160,
2020,
2540,
2681
],
"mask": "Train"
},
{
"node_id": 1858,
"label": 6,
"text": "Title: NP-Completeness of Minimum Rule Sets \nAbstract: Rule induction systems seek to generate rule sets which are optimal in the complexity of the rule set. This paper develops a formal proof of the NP-Completeness of the problem of generating the simplest rule set (MIN RS) which accurately predicts examples in the training set for a particular type of generalization algorithm algorithm and complexity measure. The proof is then informally extended to cover a broader spectrum of complexity measures and learning algorithms. ",
"neighbors": [
2481,
2528
],
"mask": "Train"
},
{
"node_id": 1859,
"label": 4,
"text": "Title: Self-Improving Factory Simulation using Continuous-time Average-Reward Reinforcement Learning \nAbstract: Many factory optimization problems, from inventory control to scheduling and reliability, can be formulated as continuous-time Markov decision processes. A primary goal in such problems is to find a gain-optimal policy that minimizes the long-run average cost. This paper describes a new average-reward algorithm called SMART for finding gain-optimal policies in continuous time semi-Markov decision processes. The paper presents a detailed experimental study of SMART on a large unreliable production inventory problem. SMART outperforms two well-known reliability heuristics from industrial engineering. A key feature of this study is the integration of the reinforcement learning algorithm directly into two commercial discrete-event simulation packages, ARENA and CSIM, paving the way for this approach to be applied to many other factory optimization problems for which there already exist simulation models.",
"neighbors": [
471,
548,
554,
565,
621,
1791
],
"mask": "Train"
},
{
"node_id": 1860,
"label": 0,
"text": "Title: Efficient Locally Weighted Polynomial Regression Predictions \nAbstract: Locally weighted polynomial regression (LWPR) is a popular instance-based algorithm for learning continuous non-linear mappings. For more than two or three inputs and for more than a few thousand dat-apoints the computational expense of predictions is daunting. We discuss drawbacks with previous approaches to dealing with this problem, and present a new algorithm based on a multiresolution search of a quickly-constructible augmented kd-tree. Without needing to rebuild the tree, we can make fast predictions with arbitrary local weighting functions, arbitrary kernel widths and arbitrary queries. The paper begins with a new, faster, algorithm for exact LWPR predictions. Next we introduce an approximation that achieves up to a two-orders-of-magnitude speedup with negligible accuracy losses. Increasing a certain approximation parameter achieves greater speedups still, but with a correspondingly larger accuracy degradation. This is nevertheless useful during operations such as the early stages of model selection and locating optima of a fitted surface. We also show how the approximations can permit real-time query-specific optimization of the kernel width. We conclude with a brief discussion of potential extensions for tractable instance-based learning on datasets that are too large to fit in a com puter's main memory. ",
"neighbors": [
548,
683,
906,
2428,
2430,
2658
],
"mask": "Train"
},
{
"node_id": 1861,
"label": 6,
"text": "Title: A Six-Point Condition for Ordinal Matrices keywords: additive, algorithm, evolution, ordinal, phylogeny \nAbstract: Ordinal assertions in an evolutionary context are of the form \"species s is more similar to species x than to species y\" and can be deduced from a distance matrix M of interspecies dissimilarities (M [s; x] < M [s; y]). Given species x and y, the ordinal binary character c xy of M is defined by c xy (s) = 1 if and only if M [s; x] < M[s; y], for all species s. In this paper we present several results concerning the inference of evolutionary trees or phylogenies from ordinal assertions. In particular, we present A six-point condition that characterizes those distance matrices whose ordinal binary characters are pairwise compatible. This characterization is analogous to the four-point condition for additive matrices. An optimal O(n 2 ) algorithm, where n is the number of species, for recovering a phylogeny that realizes the ordinal binary characters of a distance matrix that satisfies the six-point condition. An NP-completeness result on determining if there is a phylogeny that realizes k or more of the ordinal binary characters of a given distance matrix.",
"neighbors": [
1853
],
"mask": "Train"
},
{
"node_id": 1862,
"label": 6,
"text": "Title: Continuous-valued Xof-N Attributes Versus Nominal Xof-N Attributes for Constructive Induction: A Case Study \nAbstract: An Xof-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. In this paper, we explore the characteristics and performance of continuous-valued Xof-N attributes versus nominal Xof-N attributes for constructive induction. Nominal Xof-Ns are more representationally powerful than continuous-valued Xof-Ns, but the former suffer the \"fragmentation\" problem, although some mechanisms such as subsetting can help to solve the problem. Two approaches to constructive induction using continuous-valued Xof-Ns are described. Continuous-valued Xof-Ns perform better than nominal ones on domains that need Xof-Ns with only one cut point. On domains that need Xof-N representations with more than one cut point, nominal Xof-Ns perform better than continuous-valued ones. Experimental results on a set of artificial and real-world domains support these statements. ",
"neighbors": [
1595,
1644,
1863,
1964,
2675
],
"mask": "Test"
},
{
"node_id": 1863,
"label": 6,
"text": "Title: Effects of Different Types of New Attribute on Constructive Induction \nAbstract: This paper studies the effects on decision tree learning of constructing four types of attribute (conjunctive, disjunctive, Mof-N, and Xof-N representations). To reduce effects of other factors such as tree learning methods, new attribute search strategies, evaluation functions, and stopping criteria, a single tree learning algorithm is developed. With different option settings, it can construct four different types of new attribute, but all other factors are fixed. The study reveals that conjunctive and disjunctive representations have very similar performance in terms of prediction accuracy and theory complexity on a variety of concepts. Moreover, the study demonstrates that the stronger representation power of Mof-N than conjunction and disjunction and the stronger representation power of Xof-N than these three types of new attribute can be reflected in the performance of decision tree learning. ",
"neighbors": [
1595,
1644,
1862,
1964
],
"mask": "Test"
},
{
"node_id": 1864,
"label": 0,
"text": "Title: An Investigation of Marker-Passing Algorithms for Analogue Retrieval \nAbstract: If analogy and case-based reasoning systems are to scale up to very large case bases, it is important to analyze the various methods used for retrieving analogues to identify the features of the problem for which they are appropriate. This paper reports on one such analysis, a comparison of retrieval by marker passing or spreading activation in a semantic network with Knowledge-Directed Spreading Activation, a method developed to be well-suited for retrieving semantically distant analogues from a large knowledge base. The analysis has two complementary components: (1) a theoretical model of the retrieval time based on a number of problem characteristics, and (2) experiments showing how the retrieval time of the approaches varies with the knowledge base size. These two components, taken together, suggest that KDSA is more likely than SA to be able to scale up to retrieval in large knowledge bases.",
"neighbors": [
1854,
1855,
2122,
2276
],
"mask": "Train"
},
{
"node_id": 1865,
"label": 2,
"text": "Title: DNA Sequence Classification Using Compression-Based Induction \nAbstract: DIMACS Technical Report 95-04 April 1995 ",
"neighbors": [
2107
],
"mask": "Train"
},
{
"node_id": 1866,
"label": 2,
"text": "Title: A Model of Rapid Memory Formation in the Hippocampal System \nAbstract: Our ability to remember events and situations in our daily life demonstrates our ability to rapidly acquire new memories. There is a broad consensus that the hippocampal system (HS) plays a critical role in the formation and retrieval of such memories. A computational model is described that demonstrates how the HS may rapidly transform a transient pattern of activity representing an event or a situation into a persistent structural encoding via long-term potentiation and long-term depression. ",
"neighbors": [
2272
],
"mask": "Train"
},
{
"node_id": 1867,
"label": 2,
"text": "Title: A comparison of neural net and conventional techniques for lighting control \nAbstract: We compare two techniques for lighting control in an actual room equipped with seven banks of lights and photoresistors to detect the lighting level at four sensing points. Each bank of lights can be independently set to one of sixteen intensity levels. The task is to determine the device intensity levels that achieve a particular configuration of sensor readings. One technique we explored uses a neural network to approximate the mapping between sensor readings and device intensity levels. The other technique we examined uses a conventional feedback control loop. The neural network approach appears superior both in that it does not require experimentation on the fly (and hence fluctuating light intensity levels during settling, and lengthy settling times) and in that it can deal with complex interactions that conventional control techniques do not handle well. This comparison was performed as part of the \"Adaptive House\" project, which is described briefly. Further directions for control in the ",
"neighbors": [
1718,
1754
],
"mask": "Train"
},
{
"node_id": 1868,
"label": 3,
"text": "Title: Convergence in Norm for Alternating Expectation-Maximization (EM) Type Algorithms 1 \nAbstract: We provide a sufficient condition for convergence of a general class of alternating estimation-maximization (EM) type continuous-parameter estimation algorithms with respect to a given norm. This class includes EM, penalized EM, Green's OSL-EM, and other approximate EM algorithms. The convergence analysis can be extended to include alternating coordinate-maximization EM algorithms such as Meng and Rubin's ECM and Fessler and Hero's SAGE. The condition for monotone convergence can be used to establish norms under which the distance between successive iterates and the limit point of the EM-type algorithm approaches zero monotonically. For illustration, we apply our results to estimation of Poisson rate parameters in emission tomography and establish that in the final iterations the logarithm of the EM iterates converge monotonically in a weighted Euclidean norm. ",
"neighbors": [
2421
],
"mask": "Test"
},
{
"node_id": 1869,
"label": 2,
"text": "Title: Refining PID Controllers using Neural Networks \nAbstract: The Kbann approach uses neural networks to refine knowledge that can be written in the form of simple propositional rules. We extend this idea further by presenting the Manncon algorithm by which the mathematical equations governing a PID controller determine the topology and initial weights of a network, which is further trained using backpropagation. We apply this method to the task of controlling the outflow and temperature of a water tank, producing statistically-significant gains in accuracy over both a standard neural network approach and a non-learning PID controller. Furthermore, using the PID knowledge to initialize the weights of the network produces statistically less variation in testset accuracy when compared to networks initialized with small random numbers.",
"neighbors": [
1754,
2409
],
"mask": "Train"
},
{
"node_id": 1870,
"label": 3,
"text": "Title: Von Mises type statistics for single site updated local interaction random fields \nAbstract: Random field models in image analysis and spatial statistics usually have local interactions. They can be simulated by Markov chains which update a single site at a time. The updating rules typically condition on only a few neighboring sites. If we want to approximate the expectation of a bounded function, can we make better use of the simulations than through the empirical estimator? We describe symmetrizations of the empirical estimator which are computationally feasible and can lead to considerable variance reduction. The method is reminiscent of the idea behind generalized von Mises statistics. To simplify the exposition, we consider mainly nearest neighbor random fields and the Gibbs sampler. ",
"neighbors": [
1713,
2362
],
"mask": "Validation"
},
{
"node_id": 1871,
"label": 2,
"text": "Title: 3D Object Recognition Using Unsupervised Feature Extraction \nAbstract: Intrator (1990) proposed a feature extraction method that is related to recent statistical theory (Huber, 1985; Friedman, 1987), and is based on a biologically motivated model of neuronal plasticity (Bienenstock et al., 1982). This method has been recently applied to feature extraction in the context of recognizing 3D objects from single 2D views (Intrator and Gold, 1991). Here we describe experiments designed to analyze the nature of the extracted features, and their relevance to the theory and psychophysics of object recognition.",
"neighbors": [
359,
2499
],
"mask": "Test"
},
{
"node_id": 1872,
"label": 1,
"text": "Title: Modeling Building-Block Interdependency Dynamical and Evolutionary Machine Organization Group \nAbstract: The Building-Block Hypothesis appeals to the notion of problem decomposition and the assembly of solutions from sub-solutions. Accordingly, there have been many varieties of GA test problems with a structure based on building-blocks. Many of these problems use deceptive fitness functions to model interdependency between the bits within a block. However, very few have any model of interdependency between building-blocks; those that do are not consistent in the type of interaction used intra-block and inter-block. This paper discusses the inadequacies of the various test problems in the literature and clarifies the concept of building-block interdependency. We formulate a principled model of hierarchical interdependency that can be applied through many levels in a consistent manner and introduce Hierarchical If-and-only-if (H-IFF) as a canonical example. We present some empirical results of GAs on H-IFF showing that if population diversity is maintained and linkage is tight then the GA is able to identify and manipulate building-blocks over many levels of assembly, as the Building-Block Hypothesis suggests. ",
"neighbors": [
163,
1257,
1696,
1771
],
"mask": "Train"
},
{
"node_id": 1873,
"label": 2,
"text": "Title: Achieving Super Computer Performance with a DSP Array Processor \nAbstract: The MUSIC system (MUlti Signal processor system with Intelligent Communication) is a parallel distributed memory architecture based on digital signal processors (DSP). A system with 60 processor elements is operational. It has a peak performance of 3.8 GFlops, an electrical power consumption of less than 800 W (including forced air cooling) and fits into a 19\" rack. Two applications (the back-propagation algorithm for neural net learning and molecular dynamics simulations) run about 6 times faster than on a CRAY Y-MP and 2 times faster than on a NEC SX-3. A sustained performance of more than 1 GFlops is reached. The selling price of such a system would be in the range of about 300'000 US$. ",
"neighbors": [
1998
],
"mask": "Validation"
},
{
"node_id": 1874,
"label": 2,
"text": "Title: DYNAMICAL BEHAVIOR OF ARTIFICIAL NEURAL NETWORKS WITH RANDOM WEIGHTS \nAbstract: In this paper we report a Monte Carlo study of the dynamics of large untrained, feedforward, neural networks with randomly chosen weights and feedback. The analysis consists of looking at the percent of the systems that exhibit chaos, the distrubution of largest Lyapunov exponents, and the distrubution of correlation dimensions. As the systems become more complex (increasing inputs and neurons), the probability of chaos approaches unity. The correlation dimension is typically much smaller than the system dimension. ",
"neighbors": [
1920
],
"mask": "Validation"
},
{
"node_id": 1875,
"label": 6,
"text": "Title: On the Effect of Analog Noise in Discrete-Time Analog Computations \nAbstract: We introduce a model for analog computation with discrete time in the presence of analog noise that is flexible enough to cover the most important concrete cases, such as noisy analog neural nets and networks of spiking neurons. This model subsumes the classical model for digital computation in the presence of noise. We show that the presence of arbitrarily small amounts of analog noise reduces the power of analog computational models to that of finite automata, and we also prove a new type of upper bound for the ",
"neighbors": [
407,
1891,
2439,
2553
],
"mask": "Validation"
},
{
"node_id": 1876,
"label": 3,
"text": "Title: An Improved Model for Spatially Correlated Binary Responses \nAbstract: In this paper we extend the basic autologistic model to include covariates and an indication of sampling effort. The model is applied to sampled data instead of the traditional use for image analysis where complete data are available. We adopt a Bayesian set-up and develop a hybrid Gibbs sampling estimation procedure. Using simulated examples, we show that the autologistic model with covariates for sample data improves predictions as compared to the simple logistic regression model and the standard autologistic model (without covariates). ",
"neighbors": [
2634
],
"mask": "Train"
},
{
"node_id": 1877,
"label": 0,
"text": "Title: Learning High Utility Rules by Incorporating Search Control Guidance Committee \nAbstract: In this paper we extend the basic autologistic model to include covariates and an indication of sampling effort. The model is applied to sampled data instead of the traditional use for image analysis where complete data are available. We adopt a Bayesian set-up and develop a hybrid Gibbs sampling estimation procedure. Using simulated examples, we show that the autologistic model with covariates for sample data improves predictions as compared to the simple logistic regression model and the standard autologistic model (without covariates). ",
"neighbors": [
251,
414,
578,
717,
2215
],
"mask": "Test"
},
{
"node_id": 1878,
"label": 1,
"text": "Title: Evolving Deterministic Finite Automata Using Cellular Encoding programming and cellular encoding. Programs are evolved that\nAbstract: This paper presents a method for the initial singlestate zygote. The results evolution of deterministic finite",
"neighbors": [
2571,
2624
],
"mask": "Train"
},
{
"node_id": 1879,
"label": 2,
"text": "Title: Rochester Connectionist Simulator \nAbstract: Specifying, constructing and simulating structured connectionist networks requires significant programming effort. System tools can greatly reduce the effort required, and by providing a conceptual structure within which to work, make large and complex network simulations possible. The Rochester Connectionist Simulator is a system tool designed to aid specification, construction and simulation of connectionist networks. This report describes this tool in detail: the facilities provided and how to use them, as well as details of the implementation. Through this we hope not only to make designing and verifying connectionist networks easier, but also to encourage the development and refinement of connectionist research tools themselves. ",
"neighbors": [
763,
1760,
2355
],
"mask": "Test"
},
{
"node_id": 1880,
"label": 1,
"text": "Title: On the relationship between distributed group-behaviour and the behavioural complexity of individuals [Wilson Sober 1994])\nAbstract: ",
"neighbors": [
2302
],
"mask": "Train"
},
{
"node_id": 1881,
"label": 5,
"text": "Title: Integrity Constraints in ILP using a Monte Carlo approach \nAbstract: Many state-of-the-art ILP systems require large numbers of negative examples to avoid overgeneralization. This is a considerable disadvantage for many ILP applications, namely indu ctive program synthesis where relativelly small and sparse example sets are a more realistic scenario. Integrity constraints are first order clauses that can play the role of negative examples in an inductive process. One integrity constraint can replace a long list of ground negative examples. However, checking the consistency of a program with a set of integrity constraints usually involves heavy the orem-proving. We propose an efficient constraint satisfaction algorithm that applies to a wide variety of useful integrity constraints and uses a Monte Carlo strategy. It looks for inconsistencies by ra ndom generation of queries to the program. This method allows the use of integrity constraints instead of (or together with) negative examples. As a consequence programs to induce can be specified more rapidly by the user and the ILP system tends to obtain more accurate definitions. Average running times are not greatly affected by the use of integrity constraints compared to ground negative examples. ",
"neighbors": [
344,
2449,
2450
],
"mask": "Train"
},
{
"node_id": 1882,
"label": 0,
"text": "Title: Symposium Title: Tutorial Discourse What Makes Human Explanations Effective? \nAbstract: Many state-of-the-art ILP systems require large numbers of negative examples to avoid overgeneralization. This is a considerable disadvantage for many ILP applications, namely indu ctive program synthesis where relativelly small and sparse example sets are a more realistic scenario. Integrity constraints are first order clauses that can play the role of negative examples in an inductive process. One integrity constraint can replace a long list of ground negative examples. However, checking the consistency of a program with a set of integrity constraints usually involves heavy the orem-proving. We propose an efficient constraint satisfaction algorithm that applies to a wide variety of useful integrity constraints and uses a Monte Carlo strategy. It looks for inconsistencies by ra ndom generation of queries to the program. This method allows the use of integrity constraints instead of (or together with) negative examples. As a consequence programs to induce can be specified more rapidly by the user and the ILP system tends to obtain more accurate definitions. Average running times are not greatly affected by the use of integrity constraints compared to ground negative examples. ",
"neighbors": [
1989
],
"mask": "Train"
},
{
"node_id": 1883,
"label": 1,
"text": "Title: A Trade Network Game With Endogenous Partner Selection 1 \nAbstract: This paper develops an evolutionary trade network game (TNG) that combines evolutionary game play with endogenous partner selection. Successive generations of resource-constrained buyers and sellers choose and refuse trade partners on the basis of continually updated expected payoffs. Trade partner selection takes place in accordance with a modified Gale-Shapley matching mechanism, and trades are implemented using trade strategies evolved via a standardly specified genetic algorithm. The trade partnerships resulting from the matching mechanism are shown to be core stable and Pareto optimal in each successive trade cycle. Nevertheless, computer experiments suggest that these static optimality properties may be inadequate measures of optimality from an evolutionary perspective. ",
"neighbors": [
2177
],
"mask": "Test"
},
{
"node_id": 1884,
"label": 2,
"text": "Title: Using Neural Networks to Identify Jets \nAbstract: A neural network method for identifying the ancestor of a hadron jet is presented. The idea is to find an efficient mapping between certain observed hadronic kinematical variables and the quark/gluon identity. This is done with a neuronic expansion in terms of a network of sigmoidal functions using a gradient descent procedure, where the errors are back-propagated through the network. With this method we are able to separate gluon from quark jets originating from Monte Carlo generated e + e events with ~ 85% accuracy. The result is independent on the MC model used. This approach for isolating the gluon jet is then used to study the so-called string effect. In addition, heavy quarks (b and c) in e + e reactions can be identified on the 50% level by just observing the hadrons. In particular we are able to separate b-quarks with an efficiency and purity, which is comparable with what is expected from vertex detectors. We also speculate on how the neural network method can be used to disentangle different hadronization schemes by compressing the dimensionality of the state space of hadrons. ",
"neighbors": [
745,
1885,
1886,
1902
],
"mask": "Train"
},
{
"node_id": 1885,
"label": 2,
"text": "Title: LU TP 90-3 Finding Gluon Jets with a Neural Trigger \nAbstract: Using a neural network classifier we are able to separate gluon from quark jets originating from Monte Carlo generated e + e events with 85 90% accuracy. PACS numbers: 13.65.+i, 12.38Qk, 13.87.Fh ",
"neighbors": [
745,
1884,
1886,
1902
],
"mask": "Train"
},
{
"node_id": 1886,
"label": 2,
"text": "Title: LU TP 91-4 Self-organizing Networks for Extracting Jet Features \nAbstract: Self-organizing neural networks are briefly reviewed and compared with supervised learning algorithms like back-propagation. The power of self-organization networks is in their capability of displaying typical features in a transparent manner. This is successfully demonstrated with two applications from hadronic jet physics; hadronization model discrimination and separation of b,c and light quarks. ",
"neighbors": [
745,
1884,
1885,
1902
],
"mask": "Validation"
},
{
"node_id": 1887,
"label": 2,
"text": "Title: LU TP 93-13 On Langevin Updating in Multilayer Perceptrons \nAbstract: The Langevin updating rule, in which noise is added to the weights during learning, is presented and shown to improve learning on problems with initially ill-conditioned Hessians. This is particularly important for multilayer perceptrons with many hidden layers, that often have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect. ",
"neighbors": [
1902,
2258
],
"mask": "Validation"
},
{
"node_id": 1888,
"label": 6,
"text": "Title: Approximating Hyper-Rectangles: Learning and Pseudo-random Sets \nAbstract: The PAC learning of rectangles has been studied because they have been found experimentally to yield excellent hypotheses for several applied learning problems. Also, pseudorandom sets for rectangles have been actively studied recently because (i) they are a subprob-lem common to the derandomization of depth-2 (DNF) circuits and derandomizing Randomized Logspace, and (ii) they approximate the distribution of n independent multivalued random variables. We present improved upper bounds for a class of such problems of approximating high-dimensional rectangles that arise in PAC learning and pseudorandomness. ",
"neighbors": [
109,
507,
2427
],
"mask": "Train"
},
{
"node_id": 1889,
"label": 2,
"text": "Title: The Functional Transfer of Knowledge for Coronary Artery Disease Diagnosis \nAbstract: A distinction between two forms of task knowledge transfer, representational and functional, is reviewed followed by a discussion of MTL, a modified version of the multiple task learning (MTL) neural network method of functional transfer. The MTL method employs a separate learning rate, k , for each task output node k. k varies as a function of a measure of relatedness, R k , between the kth task and the primary task of interest. An MTL network is applied to a diagnostic domain of four levels of coronary artery disease. Results of experiments demonstrate the ability of MTL to develop a predictive model for one level of disease which has superior diagnostic ability over models produced by either single task learning or standard multiple task learning. ",
"neighbors": [
562,
730,
2586,
2648
],
"mask": "Test"
},
{
"node_id": 1890,
"label": 1,
"text": "Title: Genetic Algorithms for Adaptive Planning of Path and Trajectory of a Mobile Robot in 2D Terrains \nAbstract: This paper proposes genetic algorithms (GAs) for path planning and trajectory planning of an autonomous mobile robot. Our GA-based approach has an advantage of adaptivity such that the GAs work even if an environment is time-varying or unknown. Therefore, it is suitable for both off-line and on-line motion planning. We first presents a GA for path planning in a 2D terrain. Simulation results on the performance and adaptivity of the GA on randomly generated terrains are shown. Then, we discuss extensions of the GA for solving both path planning and trajectory planning simultaneously.",
"neighbors": [
163,
1060,
2039
],
"mask": "Validation"
},
{
"node_id": 1891,
"label": 2,
"text": "Title: Vapnik-Chervonenkis Dimension of Recurrent Neural Networks \nAbstract: Most of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on feedforward networks. However, recurrent networks are also widely used in learning applications, in particular when time is a relevant parameter. This paper provides lower and upper bounds for the VC dimension of such networks. Several types of activation functions are discussed, including threshold, polynomial, piecewise-polynomial and sigmoidal functions. The bounds depend on two independent parameters: the number w of weights in the network, and the length k of the input sequence. In contrast, for feedforward networks, VC dimension bounds can be expressed as a function of w only. An important difference between recurrent and feedforward nets is that a fixed recurrent net can receive inputs of arbitrary length. Therefore we are particularly interested in the case k w. Ignoring multiplicative constants, the main results say roughly the following: * For architectures with activation = any fixed nonlinear polynomial, the VC dimension is wk. * For architectures with activation = any fixed piecewise polynomial, the VC dimension is between wk and w 2 k. * For architectures with activation = H (threshold nets), the VC dimension is between w log(k=w) and minfwk log wk; w 2 + w log wkg. * For the standard sigmoid (x) = 1=(1 + e x ), the VC dimension is between wk and w 4 k 2 . An earlier version of this paper has appeared in Proc. 3rd European Workshop on Computational Learning Theory, LNCS 1208, pages 223-237, Springer, 1997. ",
"neighbors": [
58,
200,
206,
411,
1149,
1774,
1875
],
"mask": "Validation"
},
{
"node_id": 1892,
"label": 0,
"text": "Title: Abstraction and Decomposition in Hillclimbing Design Optimization \nAbstract: The performance of hillclimbing design optimization can be improved by abstraction and decomposition of the design space. Methods for automatically finding and exploiting such abstractions and decompositions are presented in this paper. A technique called \"Operator Importance Analysis\" finds useful abstractions. It does so by determining which of a given set of operators are the most important for a given class of design problems. Hillclimbing search runs faster when performed using this this smaller set of operators. A technique called \"Operator Interaction Analysis\" finds useful decompositions. It does so by measuring the pairwise interaction between operators. It uses such measurements to form an ordered partition of the operator set. This partition can then be used in a \"hierarchic\" hillclimbing algorithm which runs faster than ordinary hillclimbing with an unstructured operator set. We have implemented both techniques and tested them in the domain of racing yacht hull design. Our experimental results show that these two methods can produce substantial speedups with little or no loss in quality of the resulting designs. ",
"neighbors": [
2131
],
"mask": "Validation"
},
{
"node_id": 1893,
"label": 2,
"text": "Title: Learning NonLinearly Separable Boolean Functions With Linear Threshold Unit Trees and Madaline-Style Networks \nAbstract: This paper investigates an algorithm for the construction of decisions trees comprised of linear threshold units and also presents a novel algorithm for the learning of non-linearly separable boolean functions using Madaline-style networks which are isomorphic to decision trees. The construction of such networks is discussed, and their performance in learning is compared with standard BackPropagation on a sample problem in which many irrelevant attributes are introduced. Littlestone's Winnow algorithm is also explored within this architecture as a means of learning in the presence of many irrelevant attributes. The learning ability of this Madaline-style architecture on nonoptimal (larger than necessary) networks is also explored. ",
"neighbors": [
102,
1895,
1908
],
"mask": "Train"
},
{
"node_id": 1894,
"label": 3,
"text": "Title: Causal inference, path analysis, and recursive struc-tural equations models. In C. Clogg, editor, Sociological Methodology,\nAbstract: Lipid Research Clinic Program 84] Lipid Research Clinic Program. The Lipid Research Clinics Coronary Primary Prevention Trial results, parts I and II. Journal of the American Medical Association, 251(3):351-374, January 1984. [Pearl 93] Judea Pearl. Aspects of graphical models connected with causality. Technical Report R-195-LL, Cognitive Systems Laboratory, UCLA, June 1993. Submitted to Biometrika (June 1993). Short version in Proceedings of the 49th Session of the International Statistical Institute: Invited papers, Flo rence, Italy, August 1993, Tome LV, Book 1, pp. 391-401. ",
"neighbors": [
827,
909,
1527,
2144,
2524
],
"mask": "Train"
},
{
"node_id": 1895,
"label": 2,
"text": "Title: Generating Neural Networks Through the Induction of Threshold Logic Unit Trees (Extended Abstract) \nAbstract: We investigate the generation of neural networks through the induction of binary trees of threshold logic units (TLUs). Initially, we describe the framework for our tree construction algorithm and how such trees can be transformed into an isomorphic neural network topology. Several methods for learning the linear discriminant functions at each node of the tree structure are examined and shown to produce accuracy results that are comparable to classical information theoretic methods for constructing decision trees (which use single feature tests at each node). Our TLU trees, however, are smaller and thus easier to understand. Moreover, we show that it is possible to simultaneously learn both the topology and weight settings of a neural network simply using the training data set that we are given. ",
"neighbors": [
102,
638,
1893
],
"mask": "Train"
},
{
"node_id": 1896,
"label": 2,
"text": "Title: Experiments with the Cascade-Correlation Algorithm \nAbstract: Technical Report # 91-16 July 1991; Revised August 1991 ",
"neighbors": [
496,
1851,
2393
],
"mask": "Test"
},
{
"node_id": 1897,
"label": 6,
"text": "Title: On Learning Visual Concepts and DNF Formulae \nAbstract: We consider the problem of learning DNF formulae in the mistake-bound and the PAC models. We develop a new approach, which is called polynomial explainability, that is shown to be useful for learning some new subclasses of DNF (and CNF) formulae that were not known to be learnable before. Unlike previous learnability results for DNF (and CNF) formulae, these subclasses are not limited in the number of terms or in the number of variables per term; yet, they contain the subclasses of k-DNF and k-term-DNF (and the corresponding classes of CNF) as special cases. We apply our DNF results to the problem of learning visual concepts and obtain learning algorithms for several natural subclasses of visual concepts that appear to have no natural boolean counterpart. On the other hand, we show that learning some other natural subclasses of visual concepts is as hard as learning the class of all DNF formulae. We also consider the robustness of these results under various types of noise. ",
"neighbors": [
25,
640,
732,
1003,
2146,
2182
],
"mask": "Train"
},
{
"node_id": 1898,
"label": 3,
"text": "Title: Accounting for Context in Plan Recognition, with Application to Traffic Monitoring \nAbstract: Typical approaches to plan recognition start from a representation of an agent's possible plans, and reason evidentially from observations of the agent's actions to assess the plausibility of the various candidates. A more expansive view of the task (consistent with some prior work) accounts for the context in which the plan was generated, the mental state and planning process of the agent, and consequences of the agent's actions in the world. We present a general Bayesian framework encompassing this view, and focus on how context can be exploited in plan recognition. We demonstrate the approach on a problem in traffic monitoring, where the objective is to induce the plan of the driver from observation of vehicle movements. Starting from a model of how the driver generates plans, we show how the highway context can appropriately influence the recognizer's interpretation of observed driver behavior. ",
"neighbors": [
278,
1268,
2108,
2140
],
"mask": "Train"
},
{
"node_id": 1899,
"label": 3,
"text": "Title: Logarithmic Time Parallel Bayesian Inference \nAbstract: I present a parallel algorithm for exact probabilistic inference in Bayesian networks. For polytree networks with n variables, the worst-case time complexity is O(log n) on a CREW PRAM (concurrent-read, exclusive-write parallel random-access machine) with n processors, for any constant number of evidence variables. For arbitrary networks, the time complexity is O(r 3w log n) for n processors, or O(w log n) for r 3w n processors, where r is the maximum range of any variable, and w is the induced width (the maximum clique size), after moralizing and trian gulating the network.",
"neighbors": [
327,
2292
],
"mask": "Test"
},
{
"node_id": 1900,
"label": 3,
"text": "Title: Learning Convex Sets of Probability from Data \nAbstract: Several theories of inference and decision employ sets of probability distributions as the fundamental representation of (subjective) belief. This paper investigates a frequentist connection between empirical data and convex sets of probability distributions. Building on earlier work by Walley and Fine, a framework is advanced in which a sequence of random outcomes can be described as being drawn from a convex set of distributions, rather than just from a single distribution. The extra generality can be detected from observable characteristics of the outcome sequence. The paper presents new asymptotic convergence results paralleling the laws of large numbers in probability theory, and concludes with a comparison between this approach and approaches based on prior subjective constraints. c fl1997 Carnegie Mellon University",
"neighbors": [
2492
],
"mask": "Train"
},
{
"node_id": 1901,
"label": 3,
"text": "Title: Learning Convex Sets of Probability from Data \nAbstract: This reproduces a report submitted to Rome Laboratory on October 27, 1994. c flCopyright 1994 by Jon Doyle. All rights reserved. Freely available via http://www.medg.lcs.mit.edu/doyle. Final Report on Rational Distributed Reason Maintenance for Abstract Efficiency dictates that plans for large-scale distributed activities be revised incrementally, with parts of plans being revised only if the expected utility of identifying and revising the sub-plans improve on the expected utility of using the original plan. The problems of identifying and reconsidering the subplans affected by changed circumstances or goals are closely related to the problems of revising beliefs as new or changed information is gained. But traditional techniques of reason maintenance|the standard method for belief revision|choose revisions arbitrarily and enforce global notions of consistency and groundedness which may mean reconsidering all beliefs or plan elements at each step. We develop revision methods aiming to revise only those beliefs and plans worth revising, and to tolerate incoherence and ungroundedness when these are judged less detrimental than a costly revision effort. We use an artificial market economy in planning and revision tasks to arrive at overall judgments of worth, and present a representation for qualitative preferences that permits capture of common forms of dominance information. ",
"neighbors": [
1800,
2301
],
"mask": "Train"
},
{
"node_id": 1902,
"label": 2,
"text": "Title: LU TP 91-25 Mass Reconstruction with a Neural Network \nAbstract: A feed-forward neural network method is developed for reconstructing the invariant mass of hadronic jets appearing in a calorimeter. The approach is illustrated in W ! q q, where W -bosons are produced in pp reactions at SPS collider energies. The neural network method yields results that are superior to conventional methods. This neural network application differs from the classification ones in the sense that an analog number (the mass) is computed by the network, rather than a binary decision being made. As a by-product our application clearly demonstrates the need for using \"intelligent\" variables in instances when the amount of training instances is limited. ",
"neighbors": [
1884,
1885,
1886,
1887
],
"mask": "Train"
},
{
"node_id": 1903,
"label": 2,
"text": "Title: DNA: A New ASOCS Model With Improved Implementation Potential \nAbstract: A new class of highspeed, self-adaptive, massively parallel computing models called ASOCS (Adaptive Self-Organizing Concurrent Systems) has been proposed. Current analysis suggests that there may be problems implementing ASOCS models in VLSI using the hierarchical network structures originally proposed. The problems are not inherent in the models, but rather in the technology used to implement them. This has led to the development of a new ASOCS model called DNA (Discriminant-Node ASOCS) that does not depend on a hierarchical node structure for success. Three areas of the DNA model are briefly discussed in this paper: DNA's flexible nodes, how DNA overcomes problems other models have allocating unused nodes, and how DNA operates during processing and learning. ",
"neighbors": [
2612
],
"mask": "Train"
},
{
"node_id": 1904,
"label": 0,
"text": "Title: Using Case-Based Reasoning for Mobile Robot Navigation \nAbstract: This paper presents an approach to mobile robot path planning using case-based reasoning together with map-based path planning. The map-based path planner is used to seed the case-base with innovative solutions. The casebase stores the paths and the information about their traversability. While planning the route those paths are preferred that according to the former experience are least risky.",
"neighbors": [
643,
2556
],
"mask": "Test"
},
{
"node_id": 1905,
"label": 1,
"text": "Title: Determining Successful Negotiation Strategies: An Evolutionary Approach \nAbstract: To be successful in open, multi-agent environments, autonomous agents must be capable of adapting their negotiation strategies and tactics to their prevailing circumstances. To this end, we present an empirical study showing the relative success of different strategies against different types of opponent in different environments. In particular, we adopt an evolutionary approach in which strategies and tactics correspond to the genetic material in a genetic algorithm. We conduct a series of experiments to determine the most successful strategies and to see how and when these strategies evolve depending on the context and negotiation stance of the agent's opponent. ",
"neighbors": [
55,
163,
1834
],
"mask": "Test"
},
{
"node_id": 1906,
"label": 3,
"text": "Title: Bayesian Estimation and Model Choice in Item Response Models \nAbstract: To be successful in open, multi-agent environments, autonomous agents must be capable of adapting their negotiation strategies and tactics to their prevailing circumstances. To this end, we present an empirical study showing the relative success of different strategies against different types of opponent in different environments. In particular, we adopt an evolutionary approach in which strategies and tactics correspond to the genetic material in a genetic algorithm. We conduct a series of experiments to determine the most successful strategies and to see how and when these strategies evolve depending on the context and negotiation stance of the agent's opponent. ",
"neighbors": [
2421
],
"mask": "Train"
},
{
"node_id": 1907,
"label": 3,
"text": "Title: Toward Rational Planning and Replanning Rational Reason Maintenance, Reasoning Economies, and Qualitative Preferences formal notions\nAbstract: Efficiency dictates that plans for large-scale distributed activities be revised incrementally, with parts of plans being revised only if the expected utility of identifying and revising the subplans improves on the expected utility of using the original plan. The problems of identifying and reconsidering the subplans affected by changed circumstances or goals are closely related to the problems of revising beliefs as new or changed information is gained. But traditional techniques of reason maintenance|the standard method for belief revision|choose revisions arbitrarily and enforce global notions of consistency and groundedness which may mean reconsidering all beliefs or plan elements at each step. To address these problems, we developed (1) revision methods aimed at revising only those beliefs and plans worth revising, and tolerating incoherence and ungroundedness when these are judged less detrimental than a costly revision effort, (2) an artificial market economy in planning and revision tasks for arriving at overall judgments of worth, and (3) a representation for qualitative preferences that permits capture of common forms of dominance information. We view the activities of intelligent agents as stemming from interleaved or simultaneous planning, replanning, execution, and observation subactivities. In this model of the plan construction process, the agents continually evaluate and revise their plans in light of what happens in the world. Planning is necessary for the organization of large-scale activities because decisions about actions to be taken in the future have direct impact on what should be done in the shorter term. But even if well-constructed, the value of a plan decays as changing circumstances, resources, information, or objectives render the original course of action inappropriate. When changes occur before or during execution of the plan, it may be necessary to construct a new plan by starting from scratch or by revising a previous plan. only the portions of the plan actually affected by the changes. Given the information accrued during plan execution, which remaining parts of the original plan should be salvaged and in what ways should other parts be changed? Incremental replanning first involves localizing the potential changes or conflicts by identifying the subset of the extant beliefs and plans in which they occur. It then involves choosing which of the identified beliefs and plans to keep and which to change. For greatest efficiency, the choices of what portion of the plan to revise and how to revise it should be based on coherent expectations about and preferences among the consequences of different alternatives so as to be rational in the sense of decision theory (Savage 1972). Our work toward mechanizing rational planning and replanning has focussed on four main issues: This paper focusses on the latter three issues; for our approach to the first, see (Doyle 1988; 1992). Replanning in an incremental and local manner requires that the planning procedures routinely identify the assumptions made during planning and connect plan elements with these assumptions, so that replan-ning may seek to change only those portions of a plan dependent upon assumptions brought into question by new information. Consequently, the problem of revising plans to account for changed conditions has much ",
"neighbors": [
1800,
1995,
2301
],
"mask": "Train"
},
{
"node_id": 1908,
"label": 3,
"text": "Title: Induction of Selective Bayesian Classifiers \nAbstract: In this paper, we examine previous work on the naive Bayesian classifier and review its limitations, which include a sensitivity to correlated features. We respond to this problem by embedding the naive Bayesian induction scheme within an algorithm that carries out a greedy search through the space of features. We hypothesize that this approach will improve asymptotic accuracy in domains that involve correlated features without reducing the rate of learning in ones that do not. We report experimental results on six natural domains, including comparisons with decision-tree induction, that support these hypotheses. In closing, we discuss other approaches to extending naive Bayesian classifiers and outline some directions for future research. ",
"neighbors": [
442,
1582,
1647,
1838,
1893,
2514,
2561
],
"mask": "Test"
},
{
"node_id": 1909,
"label": 3,
"text": "Title: A Comparison of Induction Algorithms for Selective and non-Selective Bayesian Classifiers \nAbstract: In this paper we present a novel induction algorithm for Bayesian networks. This selective Bayesian network classifier selects a subset of attributes that maximizes predictive accuracy prior to the network learning phase, thereby learning Bayesian networks with a bias for small, high-predictive-accuracy networks. We compare the performance of this classifier with selective and non-selective naive Bayesian classifiers. We show that the selective Bayesian network classifier performs significantly better than both versions of the naive Bayesian classifier on almost all databases analyzed, and hence is an enhancement of the naive Bayesian classifier. Relative to the non-selective Bayesian network classifier, our selective Bayesian network classifier generates networks that are computationally simpler to evaluate and that display predictive accuracy comparable to that of Bayesian networks which model all features.",
"neighbors": [
632,
1545,
1582,
2677
],
"mask": "Validation"
},
{
"node_id": 1910,
"label": 3,
"text": "Title: Minimax Estimation via Wavelet Shrinkage a pleasure to acknowledge friendly conversations with Gerard Kerkyacharian, \nAbstract: We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel- and Besov-type smoothness constraints, and asymptotically minimax over Besov bodies with p q. Linear estimates cannot achieve even the minimax rates over Triebel and Besov classes with p < 2, so our method can significantly outperform every linear method (kernel, smoothing spline, sieve, : : : ) in a minimax sense. Variants of our method based on simple threshold nonlinearities are nearly minimax. Our method possesses the interpretation of spatial adaptivity: it reconstructs using a kernel which may vary in shape and bandwidth from point to point, depending on the data. Least favorable distributions for certain of the Triebel and Besov scales generate objects with sparse wavelet transforms. Many real objects have similarly sparse transforms, which suggests that these minimax results are relevant for practical problems. Sequels to this paper discuss practical implementation, spatial adaptation properties and applications to inverse problems. Acknowledgements. This work was completed while the first author was on leave from U.C. Berkeley, where his research was supported by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from ATT Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12. Supersedes an earlier version, titled \"Wavelets and Optimal Function Estimation\", dated November 10, 1990, and issued as Technical reports by the Departments of Statistics at both Stanford and at U.C. Berkeley. ",
"neighbors": [
705,
1668,
2081,
2159,
2242,
2366,
2375,
2416,
2488,
2506,
2575,
2661
],
"mask": "Test"
},
{
"node_id": 1911,
"label": 1,
"text": "Title: Genetic Programming and Data Structures \nAbstract: We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel- and Besov-type smoothness constraints, and asymptotically minimax over Besov bodies with p q. Linear estimates cannot achieve even the minimax rates over Triebel and Besov classes with p < 2, so our method can significantly outperform every linear method (kernel, smoothing spline, sieve, : : : ) in a minimax sense. Variants of our method based on simple threshold nonlinearities are nearly minimax. Our method possesses the interpretation of spatial adaptivity: it reconstructs using a kernel which may vary in shape and bandwidth from point to point, depending on the data. Least favorable distributions for certain of the Triebel and Besov scales generate objects with sparse wavelet transforms. Many real objects have similarly sparse transforms, which suggests that these minimax results are relevant for practical problems. Sequels to this paper discuss practical implementation, spatial adaptation properties and applications to inverse problems. Acknowledgements. This work was completed while the first author was on leave from U.C. Berkeley, where his research was supported by NSF DMS 88-10192, by NASA Contract NCA2-488, and by a grant from ATT Foundation. The second author was supported in part by NSF grants DMS 84-51750, 86-00235, and NIH PHS grant GM21215-12. Supersedes an earlier version, titled \"Wavelets and Optimal Function Estimation\", dated November 10, 1990, and issued as Technical reports by the Departments of Statistics at both Stanford and at U.C. Berkeley. ",
"neighbors": [
290,
860,
1098,
1719,
2087,
2206
],
"mask": "Validation"
},
{
"node_id": 1912,
"label": 2,
"text": "Title: Theory of Correlations in Stochastic Neural Networks \nAbstract: One of the main experimental tools in probing the interactions between neurons has been the measurement of the correlations in their activity. In general, however the interpretation of the observed correlations is difficult, since the correlation between a pair of neurons is influenced not only by the direct interaction between them but also by the dynamic state of the entire network to which they belong. Thus, a comparison between the observed correlations and the predictions from specific model networks is needed. In this paper we develop the theory of neuronal correlation functions in large networks comprising of several highly connected subpopulations, and obeying stochastic dynamic rules. When the networks are in asynchronous states, the cross-correlations are relatively weak, i.e., their amplitude relative to that of the auto-correlations is of order of 1=N , N being the size of the interacting populations. Using the weakness of the cross-correlations, general equations which express the matrix of cross-correlations in terms of the mean neuronal activities, and the effective interaction matrix are presented. The effective interactions are the synaptic efficacies multiplied by the the gain of the postsynaptic neurons. The time-delayed cross-correlation matrix can be expressed as a sum of exponentially decaying modes that correspond to the (non-orthogonal) eigenvectors of the effective interaction matrix. The theory is extended to networks with random connectivity, such as randomly dilute networks. This allows for the comparison between the contribution from the internal common input and that from the direct ",
"neighbors": [
304,
1932
],
"mask": "Train"
},
{
"node_id": 1913,
"label": 3,
"text": "Title: A Note on the Dirichlet Process Prior in Bayesian Nonparametric Inference with Partial Exchangeability 1 \nAbstract: Technical Report no. 297 Department of Statistics University of Washington 1 Sonia Petrone is Assistant Professor, Universita di Pavia, Dipartimento di Economia Politica e Metodi Quantitativi, I-27100 Pavia, Italy and Adrian E. Raftery is Professor of Statistics and Sociology, Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322. This research was supported by ONR grant no. N-00014-91-J-1074 and by grants from MURST, Rome. ",
"neighbors": [
416,
1803
],
"mask": "Test"
},
{
"node_id": 1914,
"label": 2,
"text": "Title: Local Feedforward Networks \nAbstract: Although feedforward neural networks are well suited to function approximation, in some applications networks experience problems when learning a desired function. One problem is interference which occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To understand these properties, a theoretical framework, consisting of a measure of interference and a measure of network localization, is developed that incorporates not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal multi-layer perceptron (MLP) networks that employ the back-prop learning algorithm, we address a familiar misconception that sigmoidal networks are inherently non-local by demonstrating that given a sufficiently large number of adjustable parameters, sigmoidal MLPs can be made arbitrarily local while retaining the ability to represent any continuous function on a compact domain. ",
"neighbors": [
427,
2176
],
"mask": "Train"
},
{
"node_id": 1915,
"label": 2,
"text": "Title: A Mixture of Experts Model Exhibiting Prosopagnosia \nAbstract: A considerable body of evidence from prosopagnosia, a deficit in face recognition dissociable from nonface object recognition, indicates that the visual system devotes a specialized functional area to mechanisms appropriate for face processing. We present a modular neural network composed of two expert networks and one mediating gate network with the task of learning to recognize the faces of 12 individuals and classifying 36 nonface objects as members of one of three classes. While learning the task, the network tends to divide labor between the two expert modules, with one expert specializing in face processing and the other specializing in nonface object processing. After training, we observe the network's performance on a test set as one of the experts is progressively damaged. The results roughly agree with data reported for prosopagnosic patients: as damage to the face expert increases, the network's face recognition performance decreases dramatically while its object classification performance drops slowly. We conclude that data-driven competitive learning between two unbiased functional units can give rise to localized face processing, and that selective damage in such a system could underlie prosopagnosia. ",
"neighbors": [
1981,
2497
],
"mask": "Test"
},
{
"node_id": 1916,
"label": 2,
"text": "Title: Modeling Cortical Plasticity Based on Adapting Lateral Interaction \nAbstract: A neural network model called LISSOM for the cooperative self-organization of afferent and lateral connections in cortical maps is applied to modeling cortical plasticity. After self-organization, the LISSOM maps are in a dynamic equilibrium with the input, and reorganize like the cortex in response to simulated cortical lesions and intracortical microstimulation. The model predicts that adapting lateral interactions are fundamental to cortical reorganization, and suggests techniques to hasten recovery following sensory cortical surgery. ",
"neighbors": [
2400
],
"mask": "Train"
},
{
"node_id": 1917,
"label": 1,
"text": "Title: Go and Genetic Programming Playing Go with Filter Functions \nAbstract: A neural network model called LISSOM for the cooperative self-organization of afferent and lateral connections in cortical maps is applied to modeling cortical plasticity. After self-organization, the LISSOM maps are in a dynamic equilibrium with the input, and reorganize like the cortex in response to simulated cortical lesions and intracortical microstimulation. The model predicts that adapting lateral interactions are fundamental to cortical reorganization, and suggests techniques to hasten recovery following sensory cortical surgery. ",
"neighbors": [
1796,
2334
],
"mask": "Train"
},
{
"node_id": 1918,
"label": 6,
"text": "Title: Stochastic Logic Programs \nAbstract: One way to represent a machine learning algorithm's bias over the hypothesis and instance space is as a pair of probability distributions. This approach has been taken both within Bayesian learning schemes and the framework of U-learnability. However, it is not obvious how an Inductive Logic Programming (ILP) system should best be provided with a probability distribution. This paper extends the results of a previous paper by the author which introduced stochastic logic programs as a means of providing a structured definition of such a probability distribution. Stochastic logic programs are a generalisation of stochastic grammars. A stochastic logic program consists of a set of labelled clauses p : C where p is from the interval [0; 1] and C is a range-restricted definite clause. A stochastic logic program P has a distributional semantics, that is one which assigns a probability distribution to the atoms of each predicate in the Herbrand base of the clauses in P . These probabilities are assigned to atoms according to an SLD-resolution strategy which employs a stochastic selection rule. It is shown that the probabilities can be computed directly for fail-free logic programs and by normalisation for arbitrary logic programs. The stochastic proof strategy can be used to provide three distinct functions: 1) a method of sampling from the Herbrand base which can be used to provide selected targets or example sets for ILP experiments, 2) a measure of the information content of examples or hypotheses; this can be used to guide the search in an ILP system and 3) a simple method for conditioning a given stochastic logic program on samples of data. Functions 1) and 3) are used to measure the generality of hypotheses in the ILP system Progol4.2. This supports an implementation of a Bayesian technique for learning from positive examples only. fl This paper is an extension of a paper with the same title which appeared in [12] ",
"neighbors": [
1290,
2329
],
"mask": "Train"
},
{
"node_id": 1919,
"label": 5,
"text": "Title: Inductive Constraint Logic and the Mutagenesis Problem \nAbstract: A novel approach to learning first order logic formulae from positive and negative examples is incorporated in a system named ICL (Inductive Constraint Logic). In ICL, examples are viewed as interpretations which are true or false for the target theory, whereas in present inductive logic programming systems, examples are true and false ground facts (or clauses). Furthermore, ICL uses a clausal representation, which corresponds to a conjunctive normal form where each conjunct forms a constraint on positive examples, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form. We present some experiments with this new system on the mutagenesis problem. These experiments illustrate some of the differences with other systems, and indicate that our approach should work at least as well as the more classical approaches.",
"neighbors": [
638,
1007,
2426,
2431
],
"mask": "Train"
},
{
"node_id": 1920,
"label": 2,
"text": "Title: On the Probability of Chaos in Large Dynamical Systems: A Monte Carlo Study \nAbstract: In this paper we report the result of a Monte Carlo study on the probability of chaos in large dynamical systems. We use neural networks as the basis functions for the system dynamics and choose parameter values for the networks randomly. Our results show that as the dimension of the system and the complexity of the network increase, the probability of chaotic dynamics increases to 100%. Since neural networks are dense in the set of dynamical systems, our conclusion is that most large systems are chaotic. ",
"neighbors": [
1874
],
"mask": "Test"
},
{
"node_id": 1921,
"label": 1,
"text": "Title: Computation. Automated Synthesis of Analog Electrical Circuits by Means of Genetic Programming \nAbstract: The design (synthesis) of analog electrical circuits starts with a high-level statement of the circuit's desired behavior and requires creating a circuit that satisfies the specified design goals. Analog circuit synthesis entails the creation of both the topology and the sizing (numerical values) of all of the circuit's components. The difficulty of the problem of analog circuit synthesis is well known and there is no previously known general automated technique for synthesizing an analog circuit from a high-level statement of the circuit's desired behavior. This paper presents a single uniform approach using genetic programming for the automatic synthesis of both the topology and sizing of a suite of eight different prototypical analog circuits, including a lowpass filter, a crossover (woofer and tweeter) filter, a source identification circuit, an amplifier, a computational circuit, a time-optimal controller circuit, a temperaturesensing circuit, and a voltage reference circuit. The problemspecific information required for each of the eight problems is minimal and consists primarily of the number of inputs and outputs of the desired circuit, the types of available components, and a fitness measure that restates the high-level statement of the circuit's desired behavior as a measurable mathematical quantity. The eight genetically evolved circuits constitute an instance of an evolutionary computation technique producing results on a task that is usually thought of as requiring human intelligence. The fact that a single uniform approach yielded a satisfactory design for each of the eight circuits as well as the fact that a satisfactory design was created on the first or second run of each problem are evidence for the general applicability of genetic programming for solving the problem of automatic synthesis of analog electrical circuits. ",
"neighbors": [
523,
1408,
1931,
2402
],
"mask": "Train"
},
{
"node_id": 1922,
"label": 3,
"text": "Title: Maximum Likelihood and Covariant Algorithms for Independent Component Analysis somewhat more biologically plausible, involving no\nAbstract: Bell and Sejnowski (1995) have derived a blind signal processing algorithm for a non-linear feedforward network from an information maximization viewpoint. This paper first shows that the same algorithm can be viewed as a maximum likelihood algorithm for the optimization of a linear generative model. Third, this paper gives a partial proof of the `folk-theorem' that any mixture of sources with high-kurtosis histograms is separable by the classic ICA algorithm. ",
"neighbors": [
570,
576,
2026
],
"mask": "Train"
},
{
"node_id": 1923,
"label": 2,
"text": "Title: EM Algorithms for PCA and SPCA \nAbstract: I present an expectation-maximization (EM) algorithm for principal component analysis (PCA). The algorithm allows a few eigenvectors and eigenvalues to be extracted from large collections of high dimensional data. It is computationally very efficient in space and time. It also naturally accommodates missing information. I also introduce a new variant of PCA called sensible principal component analysis (SPCA) which defines a proper density model in the data space. Learning for SPCA is also done with an EM algorithm. I report results on synthetic and real data showing that these EM algorithms correctly and efficiently find the leading eigenvectors of the covariance of datasets in a few iterations using up to hundreds of thousands of datapoints in thousands of dimensions.",
"neighbors": [
71,
1928,
2114,
2227
],
"mask": "Train"
},
{
"node_id": 1924,
"label": 6,
"text": "Title: Training Algorithms for Hidden Markov Models Using Entropy Based Distance Functions \nAbstract: We present new algorithms for parameter estimation of HMMs. By adapting a framework used for supervised learning, we construct iterative algorithms that maximize the likelihood of the observations while also attempting to stay close to the current estimated parameters. We use a bound on the relative entropy between the two HMMs as a distance measure between them. The result is new iterative training algorithms which are similar to the EM (Baum-Welch) algorithm for training HMMs. The proposed algorithms are composed of a step similar to the expectation step of Baum-Welch and a new update of the parameters which replaces the maximization (re-estimation) step. The algorithm takes only negligibly more time per iteration and an approximated version uses the same expectation step as Baum-Welch. We evaluate experimentally the new algorithms on synthetic and natural speech pronunciation data. For sparse models, i.e. models with relatively small number of non-zero parameters, the proposed algorithms require significantly fewer iterations.",
"neighbors": [
345,
2040,
2327
],
"mask": "Test"
},
{
"node_id": 1925,
"label": 1,
"text": "Title: Boolean Functions Fitness Spaces \nAbstract: We investigate the distribution of performance of the Boolean functions of 3 Boolean inputs (particularly that of the parity functions), the always-on-6 and even-6 parity functions. We us enumeration, uniform Monte-Carlo random sampling and sampling random full trees. As expected XOR dramatically changes the fitness distributions. In all cases once some minimum size threshold has been exceeded, the distribution of performance is approximately independent of program length. However the distribution of the performance of full trees is different from that of asymmetric trees and varies with tree depth. We consider but reject testing the No Free Lunch (NFL) theorems on these functions.",
"neighbors": [
2133,
2206,
2392
],
"mask": "Test"
},
{
"node_id": 1926,
"label": 3,
"text": "Title: Analysis of a Non-Reversible Markov Chain Sampler \nAbstract: Technical Report BU-1385-M, Biometrics Unit, Cornell University Abstract We analyse the convergence to stationarity of a simple non-reversible Markov chain that serves as a model for several non-reversible Markov chain sampling methods that are used in practice. Our theoretical and numerical results show that non-reversibility can indeed lead to improvements over the diffusive behavior of simple Markov chain sampling schemes. The analysis uses both probabilistic techniques and an explicit diagonalisation. We thank David Aldous, Martin Hildebrand, Brad Mann, and Laurent Saloff-Coste for their help. ",
"neighbors": [
748,
1941
],
"mask": "Validation"
},
{
"node_id": 1927,
"label": 2,
"text": "Title: A Neural Architecture for Content as well as Address-Based Storage and Recall: Theory and Applications \nAbstract: Technical Report BU-1385-M, Biometrics Unit, Cornell University Abstract We analyse the convergence to stationarity of a simple non-reversible Markov chain that serves as a model for several non-reversible Markov chain sampling methods that are used in practice. Our theoretical and numerical results show that non-reversibility can indeed lead to improvements over the diffusive behavior of simple Markov chain sampling schemes. The analysis uses both probabilistic techniques and an explicit diagonalisation. We thank David Aldous, Martin Hildebrand, Brad Mann, and Laurent Saloff-Coste for their help. ",
"neighbors": [
1846,
1847,
2537
],
"mask": "Validation"
},
{
"node_id": 1928,
"label": 2,
"text": "Title: Mixtures of Probabilistic Principal Component Analysers \nAbstract: Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition. ",
"neighbors": [
74,
667,
1923,
2114,
2124,
2570
],
"mask": "Train"
},
{
"node_id": 1929,
"label": 2,
"text": "Title: Geometry of Early Stopping in Linear Networks \nAbstract: ",
"neighbors": [
2349
],
"mask": "Train"
},
{
"node_id": 1930,
"label": 1,
"text": "Title: A NEW METHODOLOGY FOR REDUCING BRITTLENESS IN GENETIC PROGRAMMING optimized maneuvers for an extended two-dimensional\nAbstract: programs were independently evolved using fixed and randomly-generated fitness cases. These programs were subsequently tested against a large, representative fixed population of pursuers to determine their relative effectiveness. This paper describes the implementation of both the original and modified systems, and summarizes the results of these tests. ",
"neighbors": [
2512
],
"mask": "Test"
},
{
"node_id": 1931,
"label": 1,
"text": "Title: AUTOMATED TOPOLOGY AND SIZING OF ANALOG CIRCUITS AUTOMATED DESIGN OF BOTH THE TOPOLOGY AND SIZING\nAbstract: This paper describes an automated process for designing analog electrical circuits based on the principles of natural selection, sexual recombination, and developmental biology. The design process starts with the random creation of a large population of program trees composed of circuit-constructing functions. Each program tree specifies the steps by which a fully developed circuit is to be progressively developed from a common embryonic circuit appropriate for the type of circuit that the user wishes to design. Each fully developed circuit is translated into a netlist, simulated using a modified version of SPICE, and evaluated as to how well it satisfies the user's design requirements. The fitness measure is a user-written computer program that may incorporate any calculable characteristic or combination of characteristics of the circuit, including the circuit's behavior in the time domain, its behavior in the frequency domain, its power consumption, the number of components, cost of components, or surface area occupied by its components. The population of program trees is genetically bred over a series of many generations using genetic programming. Genetic programming is driven by a fitness measure and employs genetic operations such as Darwinian reproduction, sexual recombination (crossover), and occasional mutation to create offspring. This automated evolutionary process produces both the topology of the circuit and the numerical values for each component. This paper describes how genetic programming can evolve the circuit for a difficult-to-design low-pass filter. ",
"neighbors": [
523,
1408,
1921,
2277,
2402,
2624
],
"mask": "Train"
},
{
"node_id": 1932,
"label": 2,
"text": "Title: Constrained Optimization for Neural Map Formation: A Unifying Framework for Weight Growth and Normalization \nAbstract: Computational models of neural map formation can be considered on at least three different levels of abstraction: detailed models including neural activity dynamics, weight dynamics that abstract from the neural activity dynamics by an adiabatic approximation, and constrained optimization from which equations governing weight dynamics can be derived. Constrained optimization uses an objective function, from which a weight growth rule can be derived as a gradient flow, and some constraints, from which normalization rules are derived. In this paper we present an example of how an optimization problem can be derived from detailed non-linear neural dynamics. A systematic investigation reveals how different weight dynamics introduced previously can be derived from two types of objective function terms and two types of constraints. This includes dynamic link matching as a special case of neural map formation. We focus in particular on the role of coordinate transformations to derive different weight dynamics from the same optimization problem. Several examples illustrate how the constrained optimization framework can help in understanding, generating, and comparing different models of neural map formation. The techniques used in this analysis may also be useful in investigating other types of neural dynamics.",
"neighbors": [
18,
576,
745,
1912,
2024
],
"mask": "Validation"
},
{
"node_id": 1933,
"label": 3,
"text": "Title: Continuous sigmoidal belief networks trained using slice sampling \nAbstract: Real-valued random hidden variables can be useful for modelling latent structure that explains correlations among observed variables. I propose a simple unit that adds zero-mean Gaussian noise to its input before passing it through a sigmoidal squashing function. Such units can produce a variety of useful behaviors, ranging from deterministic to binary stochastic to continuous stochastic. I show how \"slice sampling\" can be used for inference and learning in top-down networks of these units and demonstrate learning on two simple problems. ",
"neighbors": [
36,
748,
2660
],
"mask": "Test"
},
{
"node_id": 1934,
"label": 3,
"text": "Title: Sequential Update of Bayesian Network Structure \nAbstract: There is an obvious need for improving the performance and accuracy of a Bayesian network as new data is observed. Because of errors in model construction and changes in the dynamics of the domains, we cannot afford to ignore the information in new data. While sequential update of parameters for a fixed structure can be accomplished using standard techniques, sequential update of network structure is still an open problem. In this paper, we investigate sequential update of Bayesian networks were both parameters and structure are expected to change. We introduce a new approach that allows for the flexible manipulation of the tradeoff between the quality of the learned networks and the amount of information that is maintained about past observations. We formally describe our approach including the necessary modifications to the scoring functions for learning Bayesian networks, evaluate its effectiveness through and empirical study, and extend it to the case of missing data.",
"neighbors": [
76,
423,
558,
1816,
2463
],
"mask": "Train"
},
{
"node_id": 1935,
"label": 2,
"text": "Title: Observations on Cortical Mechanisms for Object Recognition and Learning \nAbstract: This paper sketches several aspects of a hypothetical cortical architecture for visual object recognition, based on a recent computational model. The scheme relies on modules for learning from examples, such as Hyperbf-like networks, as its basic components. Such models are not intended to be precise theories of the biological circuitry but rather to capture a class of explanations we call Memory-Based Models (MBM) that contains sparse population coding, memory-based recognition and codebooks of prototypes. Unlike the sigmoidal units of some artificial neural networks, the units of MBMs are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli. We will describe how an example of MBM may be realized in terms of cortical circuitry and biophysical mechanisms, consistent with psychophysical and physiological data. A number of predictions, testable with physiological techniques, are made. This memo describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research is sponsored by grants from the Office of Naval Research under contracts N00014-92-J-1879 and N00014-93-1-0385; and by a grant from the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program). Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ARPA contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at MIT's Whitaker College. ",
"neighbors": [
611,
2340,
2499
],
"mask": "Train"
},
{
"node_id": 1936,
"label": 1,
"text": "Title: COLLECTIVE ADAPTATION: THE SHARING OF BUILDING BLOCKS \nAbstract: This paper sketches several aspects of a hypothetical cortical architecture for visual object recognition, based on a recent computational model. The scheme relies on modules for learning from examples, such as Hyperbf-like networks, as its basic components. Such models are not intended to be precise theories of the biological circuitry but rather to capture a class of explanations we call Memory-Based Models (MBM) that contains sparse population coding, memory-based recognition and codebooks of prototypes. Unlike the sigmoidal units of some artificial neural networks, the units of MBMs are consistent with the usual description of cortical neurons as tuned to multidimensional optimal stimuli. We will describe how an example of MBM may be realized in terms of cortical circuitry and biophysical mechanisms, consistent with psychophysical and physiological data. A number of predictions, testable with physiological techniques, are made. This memo describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. This research is sponsored by grants from the Office of Naval Research under contracts N00014-92-J-1879 and N00014-93-1-0385; and by a grant from the National Science Foundation under contract ASC-9217041 (this award includes funds from ARPA provided under the HPCC program). Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ARPA contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at MIT's Whitaker College. ",
"neighbors": [
1943
],
"mask": "Train"
},
{
"node_id": 1937,
"label": 3,
"text": "Title: Using Qualitative Relationships for Bounding Probability Distributions \nAbstract: We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well. ",
"neighbors": [
107,
389,
623,
1064,
2293
],
"mask": "Validation"
},
{
"node_id": 1938,
"label": 3,
"text": "Title: Latent and manifest monotonicity in item response models \nAbstract: We exploit qualitative probabilistic relationships among variables for computing bounds of conditional probability distributions of interest in Bayesian networks. Using the signs of qualitative relationships, we can implement abstraction operations that are guaranteed to bound the distributions of interest in the desired direction. By evaluating incrementally improved approximate networks, our algorithm obtains monotonically tightening bounds that converge to exact distributions. For supermodular utility functions, the tightening bounds monotonically reduce the set of admissible decision alternatives as well. ",
"neighbors": [
1764,
1765,
1770
],
"mask": "Test"
},
{
"node_id": 1939,
"label": 2,
"text": "Title: Serial and Parallel Multicategory Discrimination \nAbstract: A parallel algorithm is proposed for a fundamental problem of machine learning, that of mul-ticategory discrimination. The algorithm is based on minimizing an error function associated with a set of highly structured linear inequalities. These inequalities characterize piecewise-linear separation of k sets by the maximum of k affine functions. The error function has a Lipschitz continuous gradient that allows the use of fast serial and parallel unconstrained minimization algorithms. A serial quasi-Newton algorithm is considerably faster than previous linear programming formulations. A parallel gradient distribution algorithm is used to parallelize the error-minimization problem. Preliminary computational results are given for both a DECstation ",
"neighbors": [
2307
],
"mask": "Train"
},
{
"node_id": 1940,
"label": 1,
"text": "Title: A Comparison of Crossover and Mutation in Genetic Programming \nAbstract: This paper presents a large and systematic body of data on the relative effectiveness of mutation, crossover, and combinations of mutation and crossover in genetic programming (GP). The literature of traditional genetic algorithms contains related studies, but mutation and crossover in GP differ from their traditional counterparts in significant ways. In this paper we present the results from a very large experimental data set, the equivalent of approximately 12,000 typical runs of a GP system, systematically exploring a range of parameter settings. The resulting data may be useful not only for practitioners seeking to optimize parameters for GP runs, but also for theorists exploring issues such as the role of building blocks in GP.",
"neighbors": [
860,
2175,
2220,
2250
],
"mask": "Test"
},
{
"node_id": 1941,
"label": 3,
"text": "Title: SUPPRESSING RANDOM WALKS IN MARKOV CHAIN MONTE CARLO USING ORDERED OVERRELAXATION \nAbstract: Markov chain Monte Carlo methods such as Gibbs sampling and simple forms of the Metropolis algorithm typically move about the distribution being sampled via a random walk. For the complex, high-dimensional distributions commonly encountered in Bayesian inference and statistical physics, the distance moved in each iteration of these algorithms will usually be small, because it is difficult or impossible to transform the problem to eliminate dependencies between variables. The inefficiency inherent in taking such small steps is greatly exacerbated when the algorithm operates via a random walk, as in such a case moving to a point n steps away will typically take around n 2 iterations. Such random walks can sometimes be suppressed using \"overrelaxed\" variants of Gibbs sampling (a.k.a. the heatbath algorithm), but such methods have hitherto been largely restricted to problems where all the full conditional distributions are Gaussian. I present an overrelaxed Markov chain Monte Carlo algorithm based on order statistics that is more widely applicable. In particular, the algorithm can be applied whenever the full conditional distributions are such that their cumulative distribution functions and inverse cumulative distribution functions can be efficiently computed. The method is demonstrated on an inference problem for a simple hierarchical Bayesian model. ",
"neighbors": [
748,
1926
],
"mask": "Test"
},
{
"node_id": 1942,
"label": 3,
"text": "Title: On Functional Relation between Recognition Error and Class-Selective Reject \nAbstract: This report reviews various optimum decision rules for pattern recognition, namely, Bayes rule, Chow's rule (optimum error-reject tradeoff), and a recently proposed class-selective rejection rule. The latter provides an optimum tradeoff between the error rate and the average number of (selected) classes. A new general relation between the error rate and the average number of classes is presented. The error rate can directly be computed from the class-selective reject function, which in turn can be estimated from unlabelled patterns, by simply counting the rejects. Theoretical as well as practical implications are discussed and some future research directions are proposed. ",
"neighbors": [
2573
],
"mask": "Validation"
},
{
"node_id": 1943,
"label": 1,
"text": "Title: Distributed Collective Adaptation Applied to a Hard Combinatorial Optimization Problem \nAbstract: We utilize collective memory to integrate weak and strong search heuristics to find cliques in FC, a family of graphs. We construct FC such that pruning of partial solutions will be ineffective. Each weak heuristic maintains a local cache of the collective memory. We examine the impact on the distributed search from the various characteristics of the distribution of the collective memory, the search algorithms, and our family of graphs. We find the distributed search performs better than the individuals, even though the space of partial solutions is combinatorially explosive. ",
"neighbors": [
163,
1696,
1936
],
"mask": "Train"
},
{
"node_id": 1944,
"label": 5,
"text": "Title: Knowledge Acquisition with a Knowledge-Intensive Machine Learning System \nAbstract: In this paper, we investigate the integration of knowledge acquisition and machine learning techniques. We argue that existing machine learning techniques can be made more useful as knowledge acquisition tools by allowing the expert to have greater control over and interaction with the learning process. We describe a number of extensions to FOCL (a multistrategy Horn-clause learning program) that have greatly enhanced its power as a knowledge acquisition tool, paying particular attention to the utility of maintaining a connection between a rule and the set of examples explained by the rule. The objective of this research is to make the modification of a domain theory analogous to the use of a spread sheet. A prototype knowledge acquisition tool, FOCL-1-2-3, has been constructed in order to evaluate the strengths and weaknesses of this approach. ",
"neighbors": [
1259,
2091
],
"mask": "Train"
},
{
"node_id": 1945,
"label": 3,
"text": "Title: Defining Relative Likelihood in Partially-Ordered Preferential Structures \nAbstract: Starting with a likelihood or preference order on worlds, we extend it to a likelihood ordering on sets of worlds in a natural way, and examine the resulting logic. Lewis earlier considered such a notion of relative likelihood in the context of studying counterfactuals, but he assumed a total preference order on worlds. Complications arise when examining partial orders that are not present for total orders. There are subtleties involving the exact approach to lifting the order on worlds to an order on sets of worlds. In addition, the axiomatization of the logic of relative likelihood in the case of partial orders gives insight into the connection between relative likelihood and default reasoning.",
"neighbors": [
342,
729,
1993
],
"mask": "Train"
},
{
"node_id": 1946,
"label": 1,
"text": "Title: Plateaus and Plateau Search in Boolean Satisfiability Problems: When to Give Up Searching and Start Again \nAbstract: We empirically investigate the properties of the search space and the behavior of hill-climbing search for solving hard, random Boolean satisfiability problems. In these experiments it was frequently observed that rather than attempting to escape from plateaus by extensive search, it was better to completely restart from a new random initial state. The optimum point to terminate search and restart was determined empirically over a range of problem sizes and complexities. The growth rate of the optimum cutoff is faster than linear with the number of features, although the exact growth rate was not determined. Based on these empirical results, a simple run-time heuristic is proposed to determine when to give up searching a plateau and restart. This heuristic closely approximates the empirically determined optimum values over a range of problem sizes and complexities, and consequently allows the search algorithm to automatically adjust its strategy for each particular problem without prior knowledge of the problem's complexity. ",
"neighbors": [
1030,
2516
],
"mask": "Train"
},
{
"node_id": 1947,
"label": 2,
"text": "Title: Rapid Quality Estimation of Neural Network Input Representations \nAbstract: The choice of an input representation for a neural network can have a profound impact on its accuracy in classifying novel instances. However, neural networks are typically computationally expensive to train, making it difficult to test large numbers of alternative representations. This paper introduces fast quality measures for neural network representations, allowing one to quickly and accurately estimate which of a collection of possible representations for a problem is the best. We show that our measures for ranking representations are more accurate than a previously published measure, based on experiments with three difficult, real-world pattern recognition problems.",
"neighbors": [
2557
],
"mask": "Train"
},
{
"node_id": 1948,
"label": 2,
"text": "Title: Rapid Quality Estimation of Neural Network Input Representations \nAbstract: FURTHER RESULTS ON CONTROLLABILITY PROPERTIES OF DISCRETE-TIME NONLINEAR SYSTEMS fl ABSTRACT Controllability questions for discrete-time nonlinear systems are addressed in this paper. In particular, we continue the search for conditions under which the group-like notion of transitivity implies the stronger and semigroup-like property of forward accessibility. We show that this implication holds, pointwise, for states which have a weak Poisson stability property, and globally, if there exists a global \"attractor\" for the system. ",
"neighbors": [
1746
],
"mask": "Train"
},
{
"node_id": 1949,
"label": 2,
"text": "Title: Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, A Globally Convergent Inexact Newton Method for\nAbstract: We propose an algorithm for solving systems of monotone equations which combines Newton, proximal point, and projection methodologies. An important property of the algorithm is that the whole sequence of iterates is always globally convergent to a solution of the system without any additional regularity assumptions. Moreover, under standard assumptions the local su-perlinear rate of convergence is achieved. As opposed to classical globalization strategies for Newton methods, for computing the stepsize we do not use line-search aimed at decreasing the value of some merit function. Instead, linesearch in the approximate Newton direction is used to construct an appropriate hy-perplane which separates the current iterate from the solution set. This step is followed by projecting the current iterate onto this hyperplane, which ensures global convergence of the algorithm. Computational cost of each iteration of our method is of the same order as that of the classical damped Newton method. The crucial advantage is that our method is truly globally convergent. In particular, it cannot get trapped in a stationary point of a merit function. The presented algorithm is motivated by the hybrid projection-proximal point method proposed in [25]. ",
"neighbors": [
1960
],
"mask": "Train"
},
{
"node_id": 1950,
"label": 1,
"text": "Title: Cultural Transmission of Information in Genetic Programming \nAbstract: This paper shows how the performance of a genetic programming system can be improved through the addition of mechanisms for non-genetic transmission of information between individuals (culture). Teller has previously shown how genetic programming systems can be enhanced through the addition of memory mechanisms for individual programs [Teller 1994]; in this paper we show how Teller's memory mechanism can be changed to allow for communication between individuals within and across generations. We show the effects of indexed memory and culture on the performance of a genetic programming system on a symbolic regression problem, on Koza's Lawnmower problem, and on Wum-pus world agent problems. We show that culture can reduce the computational effort required to solve all of these problems. We conclude with a discussion of possible improvements.",
"neighbors": [
2220,
2226
],
"mask": "Train"
},
{
"node_id": 1951,
"label": 0,
"text": "Title: What Kind of Adaptation do CBR Systems Need?: A Review of Current Practice \nAbstract: This paper reviews a large number of CBR systems to determine when and what sort of adaptation is currently used. Three taxonomies are proposed: an adaptation-relevant taxonomy of CBR systems, a taxonomy of the tasks performed by CBR systems and a taxonomy of adaptation knowledge. To the extent that the set of existing systems reflects constraints on what is feasible, this review shows interesting dependencies between different system-types, the tasks these systems achieve and the adaptation needed to meet system goals. The CBR system designer may find the partition of CBR systems and the division of adaptation knowledge suggested by this paper useful. Moreover, this paper may help focus the initial stages of systems development by suggesting (on the basis of existing work) what types of adaptation knowledge should be supported by a new system. In addition, the paper provides a framework for the preliminary evaluation and comparison of systems. ",
"neighbors": [
2303
],
"mask": "Train"
},
{
"node_id": 1952,
"label": 2,
"text": "Title: Analysis of Decision Boundaries Generated by Constructive Neural Network Learning Algorithms \nAbstract: Constructive learning algorithms offer an approach to incremental construction of near-minimal artificial neural networks for pattern classification. Examples of such algorithms include Tower, Pyramid, Upstart, and Tiling algorithms which construct multilayer networks of threshold logic units (or, multilayer perceptrons). These algorithms differ in terms of the topology of the networks that they construct which in turn biases the search for a decision boundary that correctly classifies the training set. This paper presents an analysis of such algorithms from a geometrical perspective. This analysis helps in a better characterization of the search bias employed by the different algorithms in relation to the geometrical distribution of examples in the training set. Simple experiments with non linearly separable training sets support the results of mathematical analysis of such algorithms. This suggests the possibility of designing more efficient constructive algorithms that dynamically choose among different biases to build near-minimal networks for pattern classification. ",
"neighbors": [
503,
2029,
2393,
2396
],
"mask": "Train"
},
{
"node_id": 1953,
"label": 2,
"text": "Title: that fits the asymptotics of the problem. References \nAbstract: 1] D. Aldous and P. Shields. A diffusion limit for a class of randomly growing binary trees. Probability Theory, 79:509-542, 1988. [2] R. Breathnach, C. Benoist, K. O'Hare, F. Gannon, and P. Chambon. Ovalbumin gene: Evidence for leader sequence in mRNA and DNA sequences at the exon-intron boundaries. Proceedings of the National Academy of Science, 75:4853-4857, 1978. [3] S. Brunak, J. Engelbrecht, and S. Knudsen. Prediction of human mRNA donor and acceptor sites from the DNA sequence. Journal of Molecular Biology, 220:49, 1991. [4] Jack Cophen and Ian Stewart. The information in your hand. The Mathematical Intelligencer, 13(3), 1991. [5] R. G. Gallager. Information Theory and Reliable Communication. John Wiley & Sons, Inc., 1968. [6] Ali Hariri, Bruce Weber, and John Olmstead. On the validity of Shannon-information calculations for molecular biological sequence. Journal of Theoretical Biology, 147:235-254, 1990. [7] W. B. Davenport Jr. and W. L. Root. An Introduction to the Theory of Random Signals and Noise. McGraw-Hill, 1958. [8] Andrzej Knopka and John Owens. Complexity charts can be used to map functional domains in DNA. Gene Anal. Techn., 6, 1989. [9] S.M. Mount. A catalogue of splice-junction sequences. Nucleic Acids Research, 10:459-472, 1982. [10] H.M. Seidel, D.L. Pompliano, and J.R. Knowles. Exons as microgenes? Science, 257, September 1992. [11] C. E. Shannon. A mathematical theory of communication. Bell System Tech. J., 27:379-423, 623-656, 1948. [12] Peter S. Shenkin, Batu Erman, and Lucy D. Mastrandrea. Information-theoretical entropy as a measure of sequence variability. Proteins, 11(4):297, 1991. [13] R. Staden. Measurements of the effects that coding for a protein has on a DNA sequence and their use for finding genes. Nucleic Acids Research, 12:551-567, 1984. [14] J.A. Steitz. Snurps. Scientific American, 258(6), June 1988. [15] H. van Trees. Detection, estimation and modulation theory. Wiley, 1971. [16] J. D. Watson, N. H. Hopkins, J. W. Roberts, J. Ar-getsinger Steitz, and A. M. Weiner. Molecular Biology of the Gene. Benjamin/Cummings, Menlo Park, CA, fourth edition, 1987. [17] A.D. Wyner and A.J. Wyner. An improved version of the Lempel-Ziv algorithm. Transactions of Information Theory. [18] A.J. Wyner. String Matching Theorems and Applications to Data Compression and Statistics. PhD thesis, Stanford University, 1993. [19] J. Ziv and A. Lempel. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, IT-23(3):337-343, 1977. ",
"neighbors": [
2107
],
"mask": "Train"
},
{
"node_id": 1954,
"label": 4,
"text": "Title: TD Models: Modeling the World at a Mixture of Time Scales \nAbstract: Temporal-difference (TD) learning can be used not just to predict rewards, as is commonly done in reinforcement learning, but also to predict states, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at different levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the prediction problem|that of learning a model and value function for the case of fixed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton & Pinette (1985).",
"neighbors": [
98,
321,
978,
2150,
2183,
2222
],
"mask": "Test"
},
{
"node_id": 1955,
"label": 5,
"text": "Title: Abstract \nAbstract: This paper is a scientific comparison of two code generation techniques with identical goals generation of the best possible software pipelined code for computers with instruction level parallelism. Both are variants of modulo scheduling, a framework for generation of software pipelines pioneered by Rau and Glaser [RaGl81], but are otherwise quite dissimilar. One technique was developed at Silicon Graphics and is used in the MIPSpro compiler. This is the production compiler for SGI s systems which are based on the MIPS R8000 processor [Hsu94]. It is essentially a branchandbound enumeration of possible schedules with extensive pruning. This method is heuristic because of the way it prunes and also because of the interaction between register allocation and scheduling. 1 The second technique aims to produce optimal results by formulating the scheduling and register allocation problem as an integrated integer linear programming (ILP 1 ) problem. This idea has received much recent exposure in the literature [AlGoGa95, Feautrier94, GoAlGa94a, GoAlGa94b, Eichenberger95], but to our knowledge all previous implementations have been too preliminary for detailed measurement and evaluation. In particular, we believe this to be the first published measurement of runtime performance for ILP based generation of software pipelines. A particularly valuable result of this study was evaluation of the heuristic pipelining technology in the SGI compiler . One of the motivations behind the McGill research was the hope that optimal software pipelining, while not in itself practical for use in production compilers, would be useful for their evaluation and validation. Our comparison has indeed provided a quantitative validation of the SGI compilers pipeliner, leading us to increased confidence in both techniques. ",
"neighbors": [
2149,
2190,
2194
],
"mask": "Train"
},
{
"node_id": 1956,
"label": 5,
"text": "Title: Instructions \nAbstract: Paper and BibTeX entry are available at http://www.complang.tuwien.ac.at/papers/. This paper was published in: Compiler Construction (CC '94), Springer LNCS 786, 1994, pages 158-171 Delayed Exceptions | Speculative Execution of Abstract. Superscalar processors, which execute basic blocks sequentially, cannot use much instruction level parallelism. Speculative execution has been proposed to execute basic blocks in parallel. A pure software approach suffers from low performance, because exception-generating instructions cannot be executed speculatively. We propose delayed exceptions, a combination of hardware and compiler extensions that can provide high performance and correct exception handling in compiler-based speculative execution. Delayed exceptions exploit the fact that exceptions are rare. The compiler assumes the typical case (no exceptions), schedules the code accordingly, and inserts run-time checks and fix-up code that ensure correct execution when exceptions do happen.",
"neighbors": [
735,
2527,
2649
],
"mask": "Test"
},
{
"node_id": 1957,
"label": 4,
"text": "Title: AVERAGED REWARD REINFORCEMENT LEARNING APPLIED TO FUZZY RULE TUNING \nAbstract: Fuzzy rules for control can be effectively tuned via reinforcement learning. Reinforcement learning is a weak learning method, which only requires information on the success or failure of the control application. The tuning process allows people to generate fuzzy rules which are unable to accurately perform control and have them tuned to be rules which provide smooth control. This paper explores a new simplified method of using reinforcement learning for the tuning of fuzzy control rules. It is shown that the learned fuzzy rules provide smoother control in the pole balancing domain than another approach. ",
"neighbors": [
565,
2536
],
"mask": "Train"
},
{
"node_id": 1958,
"label": 1,
"text": "Title: Automatic Generation of Adaptive Programs Automatic Generation of Adaptive Programs. In From Animals to Animats\nAbstract: Fuzzy rules for control can be effectively tuned via reinforcement learning. Reinforcement learning is a weak learning method, which only requires information on the success or failure of the control application. The tuning process allows people to generate fuzzy rules which are unable to accurately perform control and have them tuned to be rules which provide smooth control. This paper explores a new simplified method of using reinforcement learning for the tuning of fuzzy control rules. It is shown that the learned fuzzy rules provide smoother control in the pole balancing domain than another approach. ",
"neighbors": [
2220,
2226
],
"mask": "Train"
},
{
"node_id": 1959,
"label": 1,
"text": "Title: Evolution-based Discovery of Hierarchical Behaviors \nAbstract: Procedural representations of control policies have two advantages when facing the scale-up problem in learning tasks. First they are implicit, with potential for inductive generalization over a very large set of situations. Second they facilitate modularization. In this paper we compare several randomized algorithms for learning modular procedural representations. The main algorithm, called Adaptive Representation through Learning (ARL) is a genetic programming extension that relies on the discovery of subroutines. ARL is suitable for learning hierarchies of subroutines and for constructing policies to complex tasks. ARL was successfully tested on a typical reinforcement learning problem of controlling an agent in a dynamic and nondeterministic environment where the discovered subroutines correspond to agent behaviors. ",
"neighbors": [
120,
177,
2259
],
"mask": "Train"
},
{
"node_id": 1960,
"label": 2,
"text": "Title: Journal of Convex Analysis (accepted for publication) A HYBRID PROJECTION-PROXIMAL POINT ALGORITHM \nAbstract: We propose a modification of the classical proximal point algorithm for finding zeroes of a maximal monotone operator in a Hilbert space. In particular, an approximate proximal point iteration is used to construct a hyperplane which strictly separates the current iterate from the solution set of the problem. This step is then followed by a projection of the current iterate onto the separating hyperplane. All information required for this projection operation is readily available at the end of the approximate proximal step, and therefore this projection entails no additional computational cost. The new algorithm allows significant relaxation of tolerance requirements imposed on the solution of proximal point subproblems, which yields a more practical framework. Weak global convergence and local linear rate of convergence are established under suitable assumptions. Additionally, presented analysis yields an alternative proof of convergence for the exact proximal point method, which allows a nice geometric interpretation, and is somewhat more intuitive than the classical proof. ",
"neighbors": [
1949
],
"mask": "Validation"
},
{
"node_id": 1961,
"label": 5,
"text": "Title: Resource Spackling: A Framework for Integrating Register Allocation in Local and Global Schedulers \nAbstract: We present Resource Spackling, a framework for integrating register allocation and instruction scheduling that is based on a Measure and Reduce paradigm. The technique measures the resource requirements of a program and uses the measurements to distribute code for better resource allocation. The technique is applicable to the allocation of different types of resources. A program's resource requirements for both register and functional unit resources are first measured using a unified representation. These measurements are used to find areas where resources are either under or over utilized, called resource holes and excessive sets, respectively. Conditions are determined for increasing resource utilization in the resource holes. These conditions are applicable to both local and global code motion. ",
"neighbors": [
2100,
2527
],
"mask": "Train"
},
{
"node_id": 1962,
"label": 6,
"text": "Title: Learning Distributions from Random Walks \nAbstract: We introduce a new model of distributions generated by random walks on graphs. This model suggests a variety of learning problems, using the definitions and models of distribution learning defined in [6]. Our framework is general enough to model previously studied distribution learning problems, as well as to suggest new applications. We describe special cases of the general problem, and investigate their relative difficulty. We present algorithms to solve the learning problem under various conditions.",
"neighbors": [
574,
1827,
2509
],
"mask": "Train"
},
{
"node_id": 1963,
"label": 5,
"text": "Title: Learning Problem-Oriented Decision Structures from Decision Rules: The AQDT-2 System \nAbstract: We introduce a new model of distributions generated by random walks on graphs. This model suggests a variety of learning problems, using the definitions and models of distribution learning defined in [6]. Our framework is general enough to model previously studied distribution learning problems, as well as to suggest new applications. We describe special cases of the general problem, and investigate their relative difficulty. We present algorithms to solve the learning problem under various conditions.",
"neighbors": [
286,
378,
2195
],
"mask": "Test"
},
{
"node_id": 1964,
"label": 6,
"text": "Title: Constructing Nominal Xof-N Attributes \nAbstract: Most constructive induction researchers focus only on new boolean attributes. This paper reports a new constructive induction algorithm, called XofN, that constructs new nominal attributes in the form of Xof-N representations. An Xof-N is a set containing one or more attribute-value pairs. For a given instance, its value corresponds to the number of its attribute-value pairs that are true. The promising preliminary experimental results, on both artificial and real-world domains, show that constructing new nominal attributes in the form of Xof-N representations can significantly improve the performance of selective induction in terms of both higher prediction accuracy and lower theory complexity.",
"neighbors": [
102,
1595,
1644,
1862,
1863,
2675
],
"mask": "Validation"
},
{
"node_id": 1965,
"label": 1,
"text": "Title: Constructing Nominal Xof-N Attributes \nAbstract: Co-evolution of Pursuit and Evasion II: Simulation Methods and Results fl Abstract In a previous SAB paper [10], we presented the scientific rationale for simulating the coevolution of pursuit and evasion strategies. Here, we present an overview of our simulation methods and some results. Our most notable results are as follows. First, co-evolution works to produce good pursuers and good evaders through a pure bootstrapping process, but both types are rather specially adapted to their opponents' current counter-strategies. Second, eyes and brains can also co-evolve within each simulated species for example, pursuers usually evolved eyes on the front of their bodies (like cheetahs), while evaders usually evolved eyes pointing sideways or even backwards (like gazelles). Third, both kinds of coevolution are promoted by allowing spatially distributed populations, gene duplication, and an explicitly spatial morphogenesis program for eyes and brains that allows bilateral symmetry. The paper concludes by discussing some possible applications of simulated pursuit-evasion coevolu tion in biology and entertainment.",
"neighbors": [
712,
757,
2089
],
"mask": "Train"
},
{
"node_id": 1966,
"label": 2,
"text": "Title: LONG SHORT-TERM MEMORY Neural Computation 9(8):1735-1780, 1997 \nAbstract: Learning to store information over extended time intervals via recurrent backpropagation takes a very long time, mostly due to insufficient, decaying error back flow. We briefly review Hochreiter's 1991 analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called \"Long Short-Term Memory\" (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through \"constant error carrousels\" within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with RTRL, BPTT, Recurrent Cascade-Correlation, Elman nets, and Neural Sequence Chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long time lag tasks that have never been solved by previous recurrent network algorithms.",
"neighbors": [
1825
],
"mask": "Test"
},
{
"node_id": 1967,
"label": 6,
"text": "Title: Separability is a Learner's Best Friend \nAbstract: Geometric separability is a generalisation of linear separability, familiar to many from Minsky and Papert's analysis of the Perceptron learning method. The concept forms a novel dimension along which to conceptualise learning methods. The present paper shows how geometric separability can be defined and demonstrates that it accurately predicts the performance of a at least one empirical learning method.",
"neighbors": [
695,
2346
],
"mask": "Test"
},
{
"node_id": 1968,
"label": 2,
"text": "Title: A Delay-Line Based Motion Detection Chip \nAbstract: Inspired by a visual motion detection model for the rabbit retina and by a computational architecture used for early audition in the barn owl, we have designed a chip that employs a correlation model to report the one-dimensional field motion of a scene in real time. Using subthreshold analog VLSI techniques, we have fabricated and successfully tested a 8000 transistor chip using a standard MOSIS process.",
"neighbors": [
527,
1774,
2619
],
"mask": "Train"
},
{
"node_id": 1969,
"label": 4,
"text": "Title: Generalization and scaling in reinforcement learning \nAbstract: In associative reinforcement learning, an environment generates input vectors, a learning system generates possible output vectors, and a reinforcement function computes feedback signals from the input-output pairs. The task is to discover and remember input-output pairs that generate rewards. Especially difficult cases occur when rewards are rare, since the expected time for any algorithm can grow exponentially with the size of the problem. Nonetheless, if a reinforcement function possesses regularities, and a learning algorithm exploits them, learning time can be reduced below that of non-generalizing algorithms. This paper describes a neural network algorithm called complementary reinforcement back-propagation (CRBP), and reports simulation results on problems designed to offer differing opportunities for generalization.",
"neighbors": [
2051,
2200,
2309,
2363
],
"mask": "Train"
},
{
"node_id": 1970,
"label": 2,
"text": "Title: The Canonical Distortion Measure in Feature Space and 1-NN Classification \nAbstract: We prove that the Canonical Distortion Measure (CDM) [2, 3] is the optimal distance measure to use for 1 nearest-neighbour (1-NN) classification, and show that it reduces to squared Euclidean distance in feature space for function classes that can be expressed as linear combinations of a fixed set of features. PAC-like bounds are given on the sample-complexity required to learn the CDM. An experiment is presented in which a neural network CDM was learnt for a Japanese OCR environ ment and then used to do 1-NN classification.",
"neighbors": [
2486
],
"mask": "Test"
},
{
"node_id": 1971,
"label": 1,
"text": "Title: Voting for Schemata \nAbstract: The schema theorem states that implicit parallel search is behind the power of the genetic algorithm. We contend that chromosomes can vote, proportionate to their fitness, for candidate schemata. We maintain a population of binary strings and ternary schemata. The string population not only works on solving its problem domain, but it supplies fitness for the schema population, which indirectly can solve the original problem.",
"neighbors": [
163,
995,
1696,
2211
],
"mask": "Test"
},
{
"node_id": 1972,
"label": 3,
"text": "Title: ON MCMC METHODS IN BAYESIAN REGRESSION ANALYSIS AND MODEL SELECTION \nAbstract: The objective of statistical data analysis is not only to describe the behaviour of a system, but also to propose, construct (and then to check) a model of observed processes. Bayesian methodology offers one of possible approaches to estimation of unknown components of the model (its parameters or functional components) in a framework of a chosen model type. However, in many instances the evaluation of Bayes posterior distribution (which is basal for Bayesian solutions) is difficult and practically intractable (even with the help of numerical approximations). In such cases the Bayesian analysis may be performed with the help of intensive simulation techniques called the `Markov chain Monte Carlo'. The present paper reviews the best known approaches to MCMC generation. It deals with several typical situations of data analysis and model construction where MCMC methods have been successfully applied. Special attention is devoted to the problem of selection of optimal regression model constructed from regression splines or from other functional units. ",
"neighbors": [
2620
],
"mask": "Train"
},
{
"node_id": 1973,
"label": 1,
"text": "Title: ON MCMC METHODS IN BAYESIAN REGRESSION ANALYSIS AND MODEL SELECTION \nAbstract: 1] R.K. Belew, J. McInerney, and N. Schraudolph, Evolving networks: using the genetic algorithm with connectionist learning, in Artificial Life II, SFI Studies in the Science of Complexity, C.G. Langton, C. Taylor, J.D. Farmer, S. Rasmussen Eds., vol. 10, Addison-Wesley, 1991. [2] M. McInerney, and A.P. Dhawan, Use of genetic algorithms with back propagation in training of feed-forward neural networks, in IEEE International Conference on Neural Networks, vol. 1, pp. 203-208, 1993. [3] F.Z. Brill, D.E. Brown, and W.N. Martin, Fast genetic selection of features for neural network classifiers, IEEE Transactions on Neural Networks, vol. 3, no. 2, pp. 324-328, 1992. [4] F. Dellaert, and J. Vandewalle, Automatic design of cellular neural networks by means of genetic algorithms: finding a feature detector, in The Third IEEE International Workshop on Cellular Neural Networks and Their Applications, IEEE, New Jersey, pp. 189-194, 1994. [5] D.E. Moriarty, and R. Miikkulainen, Efficient reinforcement learning through symbiotic evolution, Machine Learning, vol. 22, pp. 11-33, 1996. [6] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991. [7] D. Whitely, The GENITOR algorithm and selective pressure, in Proceedings of the Third Interanational Conference on Genetic Algorithms, J.D. Schaffer Ed., Morgan Kauffman, San Mateo, CA, 1989, pp. 116-121. [8] van Camp, D., T. Plate and G.E. Hinton (1992). The Xerion Neural Network Simulator and Documentation. Department of Computer Science, University of Toronto, Toronto. ",
"neighbors": [
129,
247,
2451
],
"mask": "Test"
},
{
"node_id": 1974,
"label": 2,
"text": "Title: Data Mining for Association Rules with Unsupervised Neural Networks \nAbstract: results for Gaussian mixture models and factor analysis are discussed. ",
"neighbors": [
36,
667,
2227
],
"mask": "Train"
},
{
"node_id": 1975,
"label": 4,
"text": "Title: Associative Reinforcement Learning: Functions in k-DNF \nAbstract: An agent that must learn to act in the world by trial and error faces the reinforcement learning problem, which is quite different from standard concept learning. Although good algorithms exist for this problem in the general case, they are often quite inefficient and do not exhibit generalization. One strategy is to find restricted classes of action policies that can be learned more efficiently. This paper pursues that strategy by developing algorithms that can efficiently learn action maps that are expressible in k-DNF. The algorithms are compared with existing methods in empirical trials and are shown to have very good performance. ",
"neighbors": [
427,
565,
2408,
2655,
2689
],
"mask": "Test"
},
{
"node_id": 1976,
"label": 0,
"text": "Title: Utilizing Connectionist Learning Procedures in Symbolic Case Retrieval Nets \nAbstract: This paper describes a method which, under certain circumstances, allows to automatically learn or adjust similarity measures. For this, ideas of connectionist learning procedures, in particular those related to Hebbian learning, are combined with a Case-Based Reasoning engine. ",
"neighbors": [
1854
],
"mask": "Train"
},
{
"node_id": 1977,
"label": 3,
"text": "Title: A note on acceptance rate criteria for CLTs for Hastings-Metropolis algorithms \nAbstract: This note considers positive recurrent Markov chains where the probability of remaining in the current state is arbitrarily close to 1. Specifically, conditions are given which ensure the non-existence of central limit theorems for ergodic averages of functionals of the chain. The results are motivated by applications for Metropolis-Hastings algorithms which are constructed in terms of a rejection probability, (where a rejection involves remaining at the current state). Two examples for commonly used algorithms are given, for the independence sampler and the Metropolis adjusted Langevin algorithm. The examples are rather specialised, although in both cases, the problems which arise are typical of problems commonly occurring for the particular algorithm being used. 0 I would like to thank Kerrie Mengersen Jeff Rosenthal and Richard Tweedie for useful conversations on the subject of this paper. ",
"neighbors": [
2153,
2219,
2318,
2510
],
"mask": "Train"
},
{
"node_id": 1978,
"label": 3,
"text": "Title: Two convergence properties of hybrid samplers \nAbstract: This note considers positive recurrent Markov chains where the probability of remaining in the current state is arbitrarily close to 1. Specifically, conditions are given which ensure the non-existence of central limit theorems for ergodic averages of functionals of the chain. The results are motivated by applications for Metropolis-Hastings algorithms which are constructed in terms of a rejection probability, (where a rejection involves remaining at the current state). Two examples for commonly used algorithms are given, for the independence sampler and the Metropolis adjusted Langevin algorithm. The examples are rather specialised, although in both cases, the problems which arise are typical of problems commonly occurring for the particular algorithm being used. 0 I would like to thank Kerrie Mengersen Jeff Rosenthal and Richard Tweedie for useful conversations on the subject of this paper. ",
"neighbors": [
1713,
2510
],
"mask": "Test"
},
{
"node_id": 1979,
"label": 4,
"text": "Title: ENVIRONMENT-INDEPENDENT REINFORCEMENT ACCELERATION difference between time and space is that you can't reuse time. \nAbstract: A reinforcement learning system with limited computational resources interacts with an unrestricted, unknown environment. Its goal is to maximize cumulative reward, to be obtained throughout its limited, unknown lifetime. System policy is an arbitrary modifiable algorithm mapping environmental inputs and internal states to outputs and new internal states. The problem is: in realistic, unknown environments, each policy modification process (PMP) occurring during system life may have unpredictable influence on environmental states, rewards and PMPs at any later time. Existing reinforcement learning algorithms cannot properly deal with this. Neither can naive exhaustive search among all policy candidates | not even in case of very small search spaces. In fact, a reasonable way of measuring performance improvements in such general (but typical) situations is missing. I define such a measure based on the novel \"reinforcement acceleration criterion\" (RAC). At a given time, RAC is satisfied if the beginning of each completed PMP that computed a currently valid policy modification has been followed by long-term acceleration of average reinforcement intake (the computation time for later PMPs is taken into account). I present a method called \"environment-independent reinforcement acceleration\" (EIRA) which is guaranteed to achieve RAC. EIRA does neither care whether the system's policy allows for changing itself, nor whether there are multiple, interacting learning systems. Consequences are: (1) a sound theoretical framework for \"meta-learning\" (because the success of a PMP recursively depends on the success of all later PMPs, for which it is setting the stage). (2) A sound theoretical framework for multi-agent learning. The principles have been implemented (1) in a single system using an assembler-like programming language to modify its own policy, and (2) a system consisting of multiple agents, where each agent is in fact just a connection in a fully recurrent reinforcement learning neural net. A by-product of this research is a general reinforcement learning algorithm for such nets. Preliminary experiments illustrate the theory. ",
"neighbors": [
68,
1844,
1845
],
"mask": "Train"
},
{
"node_id": 1980,
"label": 1,
"text": "Title: An Overview of Evolutionary Computation \nAbstract: Evolutionary computation uses computational models of evolution - ary processes as key elements in the design and implementation of computer-based problem solving systems. In this paper we provide an overview of evolutionary computation, and describe several evolutionary algorithms that are currently of interest. Important similarities and di fferences are noted, which lead to a discussion of important issues that need to be resolved, and items for future research.",
"neighbors": [
262,
758,
2202,
2457
],
"mask": "Train"
},
{
"node_id": 1981,
"label": 2,
"text": "Title: Task and Spatial Frequency Effects on Face Specialization \nAbstract: There is strong evidence that face processing is localized in the brain. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent mechanisms in the brain. Is neural specialization innate or learned? We suggest that this specialization could be the result of a competitive learning mechanism that, during development, devotes neural resources to the tasks they are best at performing. Further, we suggest that the specialization arises as an interaction between task requirements and developmental constraints. In this paper, we present a feed-forward computational model of visual processing, in which two modules compete to classify input stimuli. When one module receives low spatial frequency information and the other receives high spatial frequency information, and the task is to identify the faces while simply classifying the objects, the low frequency network shows a strong specialization for faces. No other combination of tasks and inputs shows this strong specialization. We take these results as support for the idea that an innately-specified face processing module is unnecessary.",
"neighbors": [
1915,
2491
],
"mask": "Train"
},
{
"node_id": 1982,
"label": 3,
"text": "Title: Theoretical rates of convergence for Markov chain Monte Carlo \nAbstract: We present a general method for proving rigorous, a priori bounds on the number of iterations required to achieve convergence of Markov chain Monte Carlo. We describe bounds for specific models of the Gibbs sampler, which have been obtained from the general method. We discuss possibilities for obtaining bounds more generally. ",
"neighbors": [
41,
1713,
1716,
2153
],
"mask": "Train"
},
{
"node_id": 1983,
"label": 3,
"text": "Title: Correlated Action Effects in Decision Theoretic Regression \nAbstract: Much recent research in decision theoretic planning has adopted Markov decision processes (MDPs) as the model of choice, and has attempted to make their solution more tractable by exploiting problem structure. One particular algorithm, structured policy construction achieves this by means of a decision theoretic analog of goal regression, using action descriptions based on Bayesian networks with tree-structured conditional probability tables. The algorithm as presented is not able to deal with actions with correlated effects. We describe a new decision theoretic regression operator that corrects this weakness. While conceptually straightforward, this extension requires a somewhat more complicated technical approach.",
"neighbors": [
2078
],
"mask": "Train"
},
{
"node_id": 1984,
"label": 1,
"text": "Title: Better Trained Ants \nAbstract: The problem of programming an artificial ant to follow the Santa Fe trail has been repeatedly used as a benchmark problem. Recently we have shown performance of several techniques is not much better than the best performance obtainable using uniform random search. We suggested that this could be because the program fitness landscape is difficult for hill climbers and the problem is also difficult for Genetic Algorithms as it contains multiple levels of deception. Here we redefine the problem so the ant is obliged to traverse the trail in approximately the correct order. A simple genetic programming system, with no size or depth restriction, is show to perform approximately three times better with the improved training function.",
"neighbors": [
2206,
2379
],
"mask": "Train"
},
{
"node_id": 1985,
"label": 1,
"text": "Title: ABSTRACT \nAbstract: In general, the machine learning process can be accelerated through the use of heuristic knowledge about the problem solution. For example, monomorphic typed Genetic Programming (GP) uses type information to reduce the search space and improve performance. Unfortunately, monomorphic typed GP also loses the generality of untyped GP: the generated programs are only suitable for inputs with the specified type. Polymorphic typed GP improves over mono-morphic and untyped GP by allowing the type information to be expressed in a more generic manner, and yet still imposes constraints on the search space. This paper describes a polymorphic GP system which can generate polymorphic programs: programs which take inputs of more than one type and produces outputs of more than one type. We also demonstrate its operation through the generation of the map polymorphic program.",
"neighbors": [
995,
1231,
2065
],
"mask": "Train"
},
{
"node_id": 1986,
"label": 6,
"text": "Title: BOOSTING AND NAIVE BAYESIAN LEARNING \nAbstract: Although so-called naive Bayesian classification makes the unrealistic assumption that the values of the attributes of an example are independent given the class of the example, this learning method is remarkably successful in practice, and no uniformly better learning method is known. Boosting is a general method of combining multiple classifiers due to Yoav Freund and Rob Schapire. This paper shows that boosting applied to naive Bayesian classifiers yields combination classifiers that are representationally equivalent to standard feedforward multilayer perceptrons. (An ancillary result is that naive Bayesian classification is a nonparametric, nonlinear generalization of logistic regression.) As a training algorithm, boosted naive Bayesian learning is quite different from backpropagation, and has definite advantages. Boosting requires only linear time and constant space, and hidden nodes are learned incrementally, starting with the most important. On the real-world datasets on which the method has been tried so far, generalization performance is as good as or better than the best published result using any other learning method. Unlike all other standard learning algorithms, naive Bayesian learning, with and without boosting, can be done in logarithmic time with a linear number of parallel computing units. Accordingly, these learning methods are highly plausible computationally as models of animal learning. Other arguments suggest that they are plausible behaviorally also. ",
"neighbors": [
70,
569,
1329,
2338,
2462
],
"mask": "Validation"
},
{
"node_id": 1987,
"label": 0,
"text": "Title: Improving Minority Class Prediction Using Case-Specific Feature Weights \nAbstract: This paper addresses the problem of handling skewed class distributions within the case-based learning (CBL) framework. We first present as a baseline an information-gain-weighted CBL algorithm and apply it to three data sets from natural language processing (NLP) with skewed class distributions. Although overall performance of the baseline CBL algorithm is good, we show that the algorithm exhibits poor performance on minority class instances. We then present two CBL algorithms designed to improve the performance of minority class predictions. Each variation creates test-case-specific feature weights by first observing the path taken by the test case in a decision tree created for the learning task, and then using path-specific information gain values to create an appropriate weight vector for use during case retrieval. When applied to the NLP data sets, the algorithms are shown to significantly increase the accuracy of minority class predictions while maintaining or improving overall classification accuracy.",
"neighbors": [
2607
],
"mask": "Test"
},
{
"node_id": 1988,
"label": 3,
"text": "Title: Trellis-Constrained Codes \nAbstract: We introduce a class of iteratively decodable trellis-constrained codes as a generalization of turbocodes, low-density parity-check codes, serially-concatenated convolutional codes, and product codes. In a trellis-constrained code, multiple trellises interact to define the allowed set of codewords. As a result of these interactions, the minimum-complexity single trellis for the code can have a state space that grows exponentially with block length. However, as with turbocodes and low-density parity-check codes, a decoder can approximate bit-wise maximum a posteriori decoding by using the sum-product algorithm on the factor graph that describes the code. We present two new families of codes, homogenous trellis-constrained codes and ring-connected trellis-constrained codes, and give results that show these codes perform in the same regime as do turbo-codes and low-density parity-check codes. ",
"neighbors": [
2401
],
"mask": "Train"
},
{
"node_id": 1989,
"label": 0,
"text": "Title: Participating in Instructional Dialogues: Finding and Exploiting Relevant Prior Explanations \nAbstract: In this paper we present our research on identifying and modeling the strategies that human tutors use for integrating previous explanations into current explanations. We have used this work to develop a computational model that has been partially implemented in an explanation facility for an existing tutoring system known as SHERLOCK. We are implementing a system that uses case-based reasoning to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in When human tutors engage in dialogue, they freely exploit all aspects of the mutually known context, including the previous discourse. Utterances that do not draw on previous discourse seem awkward, unnatural, or even incoherent. Previous discourse must be taken into account in order to relate new information effectively to recently conveyed material, and to avoid repeating old material that would distract the student from what is new. Thus, strategies for using the dialogue history in generating explanations are of great importance to research in natural language generation for tutorial applications. The goal of our work is to produce a computational model of the effects of discourse context on explanations in instructional dialogues, and to implement this model in an intelligent tutoring system that maintains a dialogue history and uses it in planning its explanations. Based on a study of human-human instructional dialogues, we have developed a taxonomy that classifies the types of contextual effects that occur in our data according to the explanatory functions they serve (16). In this paper, we focus on one important category from our taxonomy: situations in which the tutor explicitly refers to a previous explanation in order to point out similarities (differences) between the material currently being explained and material presented in earlier explanation(s). We are implementing a system that uses case-based reasoning to identify previous situations and explanations that could potentially affect the explanation being constructed. We have identified heuristics for constructing explanations that exploit this information in ways similar to what we have observed in instructional dialogues produced by human tutors. By building a computer system that has this capability as an optional facility that can be enabled or disabled, we will be able to systematically evaluate our hypothesis that this is a useful tutoring strategy. In order to test our hypotheses about the effects of previous discourse on explanations, we are building an explanation component for an existing intelligent training system, Sherlock (11). Sherlock is an intelligent coached practice environment for training avionics technicians to troubleshoot complex electronic equipment. Using Sherlock, trainees solve problems with minimal tutor interaction and then review their troubleshooting behavior in a post-problem reflective follow-up session (rfu) where the tutor instructional dialogues produced by human tutors.",
"neighbors": [
1882
],
"mask": "Train"
},
{
"node_id": 1990,
"label": 2,
"text": "Title: A FIXED SIZE STORAGE O(n 3 TIME COMPLEXITY LEARNING ALGORITHM FOR FULLY RECURRENT CONTINUALLY RUNNING\nAbstract: The RTRL algorithm for fully recurrent continually running networks (Robinson and Fallside, 1987)(Williams and Zipser, 1989) requires O(n 4 ) computations per time step, where n is the number of non-input units. I describe a method suited for on-line learning which computes exactly the same gradient and requires fixed-size storage of the same order but has an average time complexity 1 per time step of O(n 3 ).",
"neighbors": [
121,
201,
233,
2093
],
"mask": "Validation"
},
{
"node_id": 1991,
"label": 3,
"text": "Title: APPLICATIONS OF CHEEGER'S CONSTANT TO THE CONVERGENCE RATE OF MARKOV CHAINS ON R n \nAbstract: Quantitative geometric rates of convergence for reversible Markov chains are closely related to the Cheeger's constant, which is hard to calculate for general state spaces. This article describes a geometric argument to bound the Cheeger's constant for chains on bounded subsets of R n . ",
"neighbors": [
1713,
1716,
2510
],
"mask": "Validation"
},
{
"node_id": 1992,
"label": 3,
"text": "Title: Estimating L 1 Error of Kernel Estimator: Monitoring Convergence of Markov Samplers \nAbstract: In many Markov chain Monte Carlo problems, the target density function is known up to a normalization constant. In this paper, we take advantage of this knowledge to facilitate the convergence diagnostic of a Markov sampler by estimating the L 1 error of a kernel estimator. Firstly, we propose an estimator of the normalization constant which is shown to be asymptotically normal under mixing and moment conditions. Secondly, the L 1 error of the kernel estimator is estimated using the normalization constant estimator, and the ratio of the estimated L 1 error to the true L 1 error is shown to converge to 1 in probability under similar conditions. Thirdly, we propose a sequential plot of the estimated L 1 error as a tool to monitor the convergence of the Markov sampler. Finally, a 2-dimensional bimodal example is given to illustrate the proposal, fl Bin Yu is Assistant Professor, Department of Statistics, University of California, Berkeley, CA 94720-3860. Research supported in part by the Junior Faculty Research Grant from University of California at Berkeley, grants DAAL03-91-G-007 and DAAH04-94-G-0232 from the Army Research Office, and grant DMS-9322817 from the National Science Foundation. The author is very grateful to Professors Peter Bickel and Andrew Gelman for many helpful discussions and their comments on the draft. Special thanks are due to Mr. Sam Buttrey for his help on simulation, to Professor Per Mykland and Mr. Karl Broman for commenting on the draft, and to two anonymous and two Markov samplers are compared in the example using the proposed diagnostic plot.",
"neighbors": [
1713,
2153
],
"mask": "Validation"
},
{
"node_id": 1993,
"label": 3,
"text": "Title: Plausibility Measures and Default Reasoning \nAbstract: We introduce a new approach to modeling uncertainty based on plausibility measures. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. We focus on one application of plausibility measures in this paper: default reasoning. In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, *-semantics, possibilistic structures, and -rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties. While this was viewed as a surprise, we show here that it is almost inevitable. In the framework of plausibility measures, we can give a necessary condition for the KLM axioms to be sound, and an additional condition necessary and sufficient to ensure that the KLM axioms are complete. This additional condition is so weak that it is almost always met whenever the axioms are sound. In particular, it is easily seen to hold for all the proposals made in the literature. ",
"neighbors": [
276,
342,
729,
1945,
2546
],
"mask": "Test"
},
{
"node_id": 1994,
"label": 3,
"text": "Title: Constructive Belief and Rational Representation \nAbstract: It is commonplace in artificial intelligence to divide an agent's explicit beliefs into two parts: the beliefs explicitly represented or manifest in memory, and the implicitly represented or constructive beliefs that are repeatedly reconstructed when needed rather than memorized. Many theories of knowledge view the relation between manifest and constructive beliefs as a logical relation, with the manifest beliefs representing the constructive beliefs through a logic of belief. This view, however, limits the ability of a theory to treat incomplete or inconsistent sets of beliefs in useful ways. We argue that a more illuminating view is that belief is the result of rational representation. In this theory, the agent obtains its constructive beliefs by using its manifest beliefs and preferences to rationally (in the sense of decision theory) choose the most useful conclusions indicated by the manifest beliefs.",
"neighbors": [
1800,
1995,
2097,
2241
],
"mask": "Validation"
},
{
"node_id": 1995,
"label": 3,
"text": "Title: Rationality and its Roles in Reasoning \nAbstract: The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems.",
"neighbors": [
1800,
1907,
1994,
2097
],
"mask": "Train"
},
{
"node_id": 1996,
"label": 2,
"text": "Title: A New Algorithm for DNA Sequence Assembly Running Title: A New Algorithm for DNA Sequence Assembly \nAbstract: The economic theory of rationality promises to equal mathematical logic in its importance for the mechanization of reasoning. We survey the growing literature on how the basic notions of probability, utility, and rational choice, coupled with practical limitations on information and resources, influence the design and analysis of reasoning and representation systems.",
"neighbors": [
1997
],
"mask": "Train"
},
{
"node_id": 1997,
"label": 2,
"text": "Title: AMASS: A Structured Pattern Matching Approach to Shotgun Sequence Assembly \nAbstract: In this paper, we propose an efficient, reliable shotgun sequence assembly algorithm based on a fingerprinting scheme that is robust to both noise and repetitive sequences in the data. Our algorithm uses exact matches of short patterns randomly selected from fragment data to identify fragment overlaps, construct an overlap map, and finally deliver a consensus sequence. We show how statistical clues made explicit in our approach can easily be exploited to correctly assemble results even in the presence of extensive repetitive sequences. Our approach is exceptionally fast in practice: e.g., we have successfully assembled a whole Mycoplasma genitalium genome (approximately 580 kbps) in roughly 8 minutes of 64MB 200MHz Pentium Pro CPU time from real shotgun data, where most existing algorithms can be expected to run for several hours to a day on the same data. Moreover, experiments with shotgun data synthetically prepared from real DNA sequences from a wide range of organisms (including human DNA) and containing extensive repeating regions demonstrate our algorithm's robustness to noise and the presence of repetitive sequences. For example, we have correctly assembled a 238kbp Human DNA sequence in less than 3 minutes of 64MB 200MHz Pentium Pro CPU time. fl Support for this research was provided in part by the Office of Naval Research through grant N0014-94-1-1178.",
"neighbors": [
1996
],
"mask": "Train"
},
{
"node_id": 1998,
"label": 2,
"text": "Title: Programming Environment for a High Performance Parallel Supercomputer with Intelligent Communication \nAbstract: At the Electronics Lab of the Swiss Federal Institute of Techology (ETH) in Zurich, the high performance Parallel Supercomputer MUSIC (MUlti processor System with Intelligent Communication) has beed developed. As applications in neural network simulation and molecular dynamics show, the Electronics Lab Supercomputer is absolutely on a par with those of conventional supercomputers, but electric power requirements are reduced by a factor of 1000, wight is reduced by a factor of 400 and price is reduced by a factor of 100. Software development is a key using such a parallel system. This report focus on the programming environment of the MUSIC system and on it's applications.",
"neighbors": [
1873
],
"mask": "Train"
},
{
"node_id": 1999,
"label": 1,
"text": "Title: Evolutionary Training of CLP-Constrained Neural Networks \nAbstract: The paper is concerned with the integration of constraint logic programming systems (CLP) with systems based on genetic algorithms (GA). The resulting framework is tailored for applications that require a first phase in which a number of constraints need to be generated, and a second phase in which an optimal solution satisfying these constraints is produced. The first phase is carried by the CLP and the second one by the GA. We present a specific framework where ECL i PS e (ECRC Common Logic Programming System) and GENOCOP (GEnetic algorithm for Numerical Optimization for COnstrained Problems) are integrated in a framework called CoCo (COmputational intelligence plus COnstraint logic programming). The CoCo system is applied to the training problem for neural networks. We consider constrained networks, e.g. neural networks with shared weights, constraints on the weights for example domain constraints for hardware implementation etc. Then ECL i PS e is used to generate the chromosome representation together with other constraints which ensure, in most cases, that each network is specified by exactly one chromosome. Thus the problem becomes a constrained optimization problem, where the optimization criterion is to optimize the error of the network, and GENOCOP is used to find an optimal solution. Note: The work of the second author was partially supported by SION, a department of the NWO, the National Foundation for Scientific Research. This work has been carried out while the third author was visiting CWI, Amsterdam, and the fourth author was visiting Leiden University. ",
"neighbors": [
427,
2003,
2515
],
"mask": "Train"
},
{
"node_id": 2000,
"label": 3,
"text": "Title: A LOGICAL APPROACH TO REASONING ABOUT UNCERTAINTY: A TUTORIAL \nAbstract: fl This paper will appear in Discourse, Interaction, and Communication, X. Arrazola, K. Korta, and F. J. Pelletier, eds., Kluwer, 1997. Much of this work was performed while the author was at IBM Almaden Research Center. IBM's support is gratefully acknowledged. ",
"neighbors": [
467,
2115
],
"mask": "Train"
},
{
"node_id": 2001,
"label": 1,
"text": "Title: Comparison of the SAW-ing Evolutionary Algorithm and the Grouping Genetic Algorithm for Graph Coloring 1 \nAbstract: 1 This report is also available through http://www.wi.leidenuniv.nl/~gusz/ sawvsgga.ps.gz ",
"neighbors": [
833,
1796
],
"mask": "Train"
},
{
"node_id": 2002,
"label": 3,
"text": "Title: Geometric Ergodicity of Gibbs and Block Gibbs Samplers for a Hierarchical Random Effects Model \nAbstract: 1 This report is also available through http://www.wi.leidenuniv.nl/~gusz/ sawvsgga.ps.gz ",
"neighbors": [
1713,
2153,
2510
],
"mask": "Validation"
},
{
"node_id": 2003,
"label": 1,
"text": "Title: Constraining of Weights using Regularities \nAbstract: In this paper we study how global optimization methods (like genetic algorithms) can be used to train neural networks. We introduce the notion of regularity, for studying properties of the error function that expand the search space in an artificial way. Regularities are used to generate constraints on the weights of the network. In order to find a satisfiable set of constraints we use a constraint logic programming system. Then the training of the network becomes a constrained optimization problem. We also relate the notion of regularity to so-called network transformations.",
"neighbors": [
1999,
2515
],
"mask": "Test"
},
{
"node_id": 2004,
"label": 6,
"text": "Title: On Learning Bounded-Width Branching Programs \nAbstract: In this paper, we study PAC-learning algorithms for specialized classes of deterministic finite automata (DFA). In particular, we study branching programs, and we investigate the influence of the width of the branching program on the difficulty of the learning problem. We first present a distribution-free algorithm for learning width-2 branching programs. We also give an algorithm for the proper learning of width-2 branching programs under uniform distribution on labeled samples. We then show that the existence of an efficient algorithm for learning width-3 branching programs would imply the existence of an efficient algorithm for learning DNF, which is not known to be the case. Finally, we show that the existence of an algorithm for learning width-3 branching programs would also yield an algorithm for learning a very restricted version of parity with noise.",
"neighbors": [
672,
2040,
2360
],
"mask": "Train"
},
{
"node_id": 2005,
"label": 6,
"text": "Title: The Parameterized Complexity of Sequence Alignment and Consensus \nAbstract: The Longest common subsequence problem is examined from the point of view of parameterized computational complexity. There are several different ways in which parameters enter the problem, such as the number of sequences to be analyzed, the length of the common subsequence, and the size of the alphabet. Lower bounds on the complexity of this basic problem imply lower bounds on a number of other sequence alignment and consensus problems. At issue in the theory of parameterized complexity is whether a problem which takes input (x; k) can be solved in time f (k) n ff where ff is independent of k (termed fixed-parameter tractability). It can be argued that this is the appropriate asymptotic model of feasible computability for problems for which a small range of parameter values covers important applications | a situation which certainly holds for many problems in biological sequence analysis. Our main results show that: (1) The Longest Common Subsequence (LCS) parameterized by the number of sequences to be analyzed is hard for W [t] for all t. (2) The LCS problem problem, parameterized by the length of the common subsequence, belongs to W [P ] and is hard for W [2]. (3) The LCS problem parameterized both by the number of sequences and the length of the common subsequence, is complete for W [1]. All of the above results are obtained for unrestricted alphabet sizes. For alphabets of a fixed size, problems (2) and (3) are fixed-parameter tractable. We conjecture that (1) remains hard. ",
"neighbors": [
2345
],
"mask": "Train"
},
{
"node_id": 2006,
"label": 6,
"text": "Title: Constructing New Attributes for Decision Tree Learning \nAbstract: The Longest common subsequence problem is examined from the point of view of parameterized computational complexity. There are several different ways in which parameters enter the problem, such as the number of sequences to be analyzed, the length of the common subsequence, and the size of the alphabet. Lower bounds on the complexity of this basic problem imply lower bounds on a number of other sequence alignment and consensus problems. At issue in the theory of parameterized complexity is whether a problem which takes input (x; k) can be solved in time f (k) n ff where ff is independent of k (termed fixed-parameter tractability). It can be argued that this is the appropriate asymptotic model of feasible computability for problems for which a small range of parameter values covers important applications | a situation which certainly holds for many problems in biological sequence analysis. Our main results show that: (1) The Longest Common Subsequence (LCS) parameterized by the number of sequences to be analyzed is hard for W [t] for all t. (2) The LCS problem problem, parameterized by the length of the common subsequence, belongs to W [P ] and is hard for W [2]. (3) The LCS problem parameterized both by the number of sequences and the length of the common subsequence, is complete for W [1]. All of the above results are obtained for unrestricted alphabet sizes. For alphabets of a fixed size, problems (2) and (3) are fixed-parameter tractable. We conjecture that (1) remains hard. ",
"neighbors": [
1824
],
"mask": "Test"
},
{
"node_id": 2007,
"label": 4,
"text": "Title: A Computer Scientist's View of Life, the Universe, and Everything \nAbstract: Is the universe computable? If so, it may be much cheaper in terms of information requirements to compute all computable universes instead of just ours. I apply basic concepts of Kolmogorov complexity theory to the set of possible universes, and chat about perceived and true randomness, life, generalization, and learning in a given universe. Assumptions. A long time ago, the Great Programmer wrote a program that runs all possible universes on His Big Computer. \"Possible\" means \"computable\": (1) Each universe evolves on a discrete time scale. (2) Any universe's state at a given time is describable by a finite number of bits. One of the many universes is ours, despite some who evolved in it and claim it is incomputable. Computable universes. Let T M denote an arbitrary universal Turing machine with unidirectional output tape. T M 's input and output symbols are \"0\", \"1\", and \",\" (comma). T M 's possible input programs can be ordered alphabetically: \"\" (empty program), \"0\", \"1\", \",\", \"00\", \"01\", \"0,\", \"10\", \"11\", \"1,\", \",0\", \",1\", \",,\", \"000\", etc. Let A k denote T M 's k-th program in this list. Its output will be a finite or infinite string over the alphabet f \"0\",\"1\",\",\"g. This sequence of bitstrings separated by commas will be interpreted as the evolution E k of universe U k . If E k includes at least one comma, then let U l k represents U k 's state at the l-th time step of E k (k; l 2 f1; 2; : : : ; g). E k is represented by the sequence U 1 k corresponds to U k 's big bang. Different algorithms may compute the same universe. Some universes are finite (those whose programs cease producing outputs at some point), others are not. I don't know about ours. TM not important. The choice of the Turing machine is not important. This is due to the compiler theorem: for each universal Turing machine C there exists a constant prefix C 2 f \"0\",\"1\",\",\"g fl such that for all possible programs p, C's output in response to program C p is identical to T M 's output in response to p. The prefix C is the compiler that compiles programs for T M into equivalent programs for C. k denote the l-th (possibly empty) bitstring before the l-th comma. U l",
"neighbors": [
68,
1779,
1780
],
"mask": "Validation"
},
{
"node_id": 2008,
"label": 3,
"text": "Title: Self-Targeting Candidates for Metropolis-Hastings Algorithms \nAbstract: The Metropolis-Hastings algorithm for estimating a distribution is based on choosing a candidate Markov chain and then accepting or rejecting moves of the candidate to produce a chain known to have as the invariant measure. The traditional methods use candidates essentially unconnected to . Based on diffusions for which is invariant, we develop for one-dimensional distributions a class of candidate distributions that \"self-target\" towards the high density areas of . These produce Metropolis-Hastings algorithms with convergence rates that appear to be considerably better than those known for the traditional candidate choices, such as random walk. In particular, for wide classes of these choices may effectively help reduce the \"burn-in\" problem. We illustrate this behaviour for examples with exponential and polynomial tails, and for a logistic regression model using a Gibbs sampling algorithm. ",
"neighbors": [
2022,
2153,
2219
],
"mask": "Test"
},
{
"node_id": 2009,
"label": 5,
"text": "Title: The Predictability of Data Values \nAbstract: Copyright 1997 IEEE. Published in the Proceedings of Micro-30, December 1-3, 1997 in Research Triangle Park, North Carolina. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions IEEE Service Center 445 Hoes Lane P.O. Box 1331 Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966. ",
"neighbors": [
2534
],
"mask": "Test"
},
{
"node_id": 2010,
"label": 0,
"text": "Title: Learning in design: From Characterizing Dimensions to Working Systems \nAbstract: The application of machine learning (ML) to solve practical problems is complex. Only recently, due to the increased promise of ML in solving real problems and the experienced difficulty of their use, has this issue started to attract attention. This difficulty arises from the complexity of learning problems and the large variety of available techniques. In order to understand this complexity and begin to overcome it, it is important to construct a characterization of learning situations. Building on previous work that dealt with the practical use of ML, a set of dimensions is developed, contrasted with another recent proposal, and illustrated with a project on the development of a decision-support system for marine propeller design. The general research opportunities that emerge from the development of the dimensions are discussed. Leading toward working systems, a simple model is presented for setting priorities in research and in selecting learning tasks within large projects. Central to the development of the concepts discussed in this paper is their use in future projects and the recording of their successes, limitations, and failures.",
"neighbors": [
2447
],
"mask": "Train"
},
{
"node_id": 2011,
"label": 6,
"text": "Title: An O(n log log n Learning Algorithm for DNF under the Uniform Distribution \nAbstract: We show that a DNF with terms of size at most d can be approximated by a function with at most d O(d log1=\") non zero Fourier coefficients such that the expected error squared, with respect to the uniform distribution, is at most \". This property is used to derive a learning algorithm for DNF, under the uniform distribution. The learning algorithm uses queries and learns, with respect to the uniform distribution, a DNF with terms of size at most d in time polynomial in n and d O(d log 1=\") . The interesting implications are for the case when \" is constant. In this case our algorithm learns a DNF with a polynomial number of terms in time n O(log log n) , and a DNF with terms of size at most O(log n= log log n) in polynomial time.",
"neighbors": [
1835,
2182,
2633
],
"mask": "Test"
},
{
"node_id": 2012,
"label": 6,
"text": "Title: Multivariate Decision Trees \nAbstract: COINS Technical Report 92-82 December 1992 Abstract Multivariate decision trees overcome a representational limitation of univariate decision trees: univariate decision trees are restricted to splits of the instance space that are orthogonal to the feature's axis. This paper discusses the following issues for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. We present some new and review some well-known methods for forming multivariate decision trees. The methods are compared across a variety of learning tasks to assess each method's ability to find concise, accurate decision trees. The results demonstrate that some multivariate methods are more effective than others. In addition, the experiments confirm that allowing multivariate tests improves the accuracy of the resulting decision tree over univariate trees. ",
"neighbors": [
102,
378,
2135
],
"mask": "Train"
},
{
"node_id": 2013,
"label": 2,
"text": "Title: Using the Stochastic Gradient Method to Fit Polychotomous Regression Models \nAbstract: Technical Report No. 319 April 7, 1997 University of Washington Department of Statistics Seattle, Washington 98195-4322 Abstract Kooperberg, Bose, and Stone (1997) introduced polyclass, a methodology that uses adaptively selected linear splines and their tensor products to model conditional class probabilities. The authors attempted to develop a methodology that would work well on small and moderate size problems and would scale up to large problems. However, the version of polyclass that was developed for large problems was impractical in that it required two months of cpu time to apply it to a large data set. A modification to this methodology involving the use of the stochastic gradient (on-line) method in fitting polyclass models to given sets of basis functions is developed here that makes the methodology applicable to large data sets. In particular, it is successfully applied to a phoneme recognition problem involving 45 phonemes, 81 features, 150,000 cases in the training sample, 1000 basis functions, and 44,000 unknown parameters. Comparisons with neural networks are made both on the original problem and on a three-vowel subproblem. ",
"neighbors": [
74,
2382
],
"mask": "Train"
},
{
"node_id": 2014,
"label": 4,
"text": "Title: Emergent Hierarchical Control Structures: Learning Reactive/Hierarchical Relationships in Reinforcement Environments \nAbstract: The use of externally imposed hierarchical structures to reduce the complexity of learning control is common. However, it is acknowledged that learning the hierarchical structure itself is an important step towards more general (learning of many things as required) and less bounded (learning of a single thing as specified) learning. Presented in this paper is a reinforcement learning algorithm called Nested Q-learning that generates a hierarchical control structure in reinforcement learning domains. The emergent structure combined with learned bottom-up reactive reactions results in a reactive hierarchical control system. Effectively, the learned hierarchy decomposes what would otherwise be a monolithic evaluation function into many smaller evaluation functions that can be recombined without the loss of previously learned information. ",
"neighbors": [
562,
1828,
2018
],
"mask": "Train"
},
{
"node_id": 2015,
"label": 6,
"text": "Title: On-Line Portfolio Selection Using Multiplicative Updates \nAbstract: We present an on-line investment algorithm which achieves almost the same wealth as the best constant-rebalanced portfolio determined in hindsight from the actual market outcomes. The algorithm employs a multiplicative update rule derived using a framework introduced by Kivinen and Warmuth. Our algorithm is very simple to implement and requires only constant storage and computing time per stock in each trading period. We tested the performance of our algorithm on real stock data from the New York Stock Exchange accumulated during a 22-year period. On this data, our algorithm clearly outperforms the best single stock as well as Cover's universal portfolio selection algorithm. We also present results for the situation in which the investor has access to additional \"side information.\" ",
"neighbors": [
2034,
2092,
2327
],
"mask": "Train"
},
{
"node_id": 2016,
"label": 3,
"text": "Title: On the Semantics of Belief Revision Systems \nAbstract: We consider belief revision operators that satisfy the Alchourron-Gardenfors-Makinson postulates, and present an epistemic logic in which, for any such revision operator, the result of a revision can be described by a sentence in the logic. In our logic, the fact that the agent's set of beliefs is is represented by the sentence O, where O is Levesque's `only know' operator. Intuitively, O is read as ` is all that is believed.' The fact that the agent believes is represented by the sentence B , read in the usual way as ` is believed'. The connective represents update as defined by Katsuno and Mendelzon. The revised beliefs are represented by the sentence O B . We show that for every revision operator that satisfies the AGM postulates, there is a model for our epistemic logic such that the beliefs implied by the sentence O B in this model correspond exactly to the sentences implied by the theory that results from revising by . This means that reasoning about changes in the agent's beliefs reduces to model checking of certain epistemic sentences. The negative result in the paper is that this type of formal account of revision cannot be extended to the situation where the agent is able to reason about its beliefs. A fully introspective agent cannot use our construction to reason about the results of its own revisions, on pain of triviality. ",
"neighbors": [
342,
467,
1800
],
"mask": "Train"
},
{
"node_id": 2017,
"label": 3,
"text": "Title: 28 Learning Bayesian Networks Using Feature Selection \nAbstract: This paper introduces a novel enhancement for learning Bayesian networks with a bias for small, high-predictive-accuracy networks. The new approach selects a subset of features that maximizes predictive accuracy prior to the network learning phase. We examine explicitly the effects of two aspects of the algorithm, feature selection and node ordering. Our approach generates networks that are computationally simpler to evaluate and display predictive accuracy comparable Bayesian networks are being increasingly recognized as an important representation for probabilistic reasoning. For many domains, the need to specify the probability distributions for a Bayesian network is considerable, and learning these probabilities from data using an algorithm like K2 [Cooper92] could alleviate such specification difficulties. We describe an extension to the Bayesian network learning approaches introduced in K2. Our goal is to construct networks that are simpler to evaluate but still have high predictive accuracy relative to networks that model all features. Rather than use all database features (or attributes) for constructing the network, we select a subset of features that maximize the predictive accuracy of the network. Then the learning process uses only the selected features as nodes in learning the Bayesian network. We examine explicitly the effects of two aspects of the algorithm: (a) feature selection, and (b) node ordering. Our experimental results verify that this approach generates networks that are compu-tationally simpler to evaluate and display predictive accuracy comparable to the predictive accuracy of Bayesian networks that model all features. Our results, similar to those observed by other studies of feature selection in learning [Caruana94, John94, Langley94a, Langley94b], demonstrate that feature selection provides comparable predictive accuracy using smaller networks. For example, by selecting as few as 18% of the features for the to that of Bayesian networks which model all attributes.",
"neighbors": [
1479,
1582,
2677
],
"mask": "Train"
},
{
"node_id": 2018,
"label": 4,
"text": "Title: Learning Hierarchical Control Structures for Multiple Tasks and Changing Environments \nAbstract: While the need for hierarchies within control systems is apparent, it is also clear to many researchers that such hierarchies should be learned. Learning both the structure and the component behaviors is a difficult task. The benefit of learning the hierarchical structures of behaviors is that the decomposition of the control structure into smaller transportable chunks allows previously learned knowledge to be applied to new but related tasks. Presented in this paper are improvements to Nested Q-learning (NQL) that allow more realistic learning of control hierarchies in reinforcement environments. Also presented is a simulation of a simple robot performing a series of related tasks that is used to compare both hierarchical and non-hierarchal learning techniques. ",
"neighbors": [
562,
1828,
2014
],
"mask": "Train"
},
{
"node_id": 2019,
"label": 2,
"text": "Title: Ensemble Training: Some Recent Experiments with Postal Zip Data \nAbstract: Recent findings suggest that a classification scheme based on an ensemble of networks is an effective way to address overfitting. We study optimal methods for training an ensemble of networks. Some recent experiments on Postal Zip-code character data suggest that weight decay may not be an optimal method for controlling the variance of a classifier.",
"neighbors": [
157,
2147
],
"mask": "Train"
},
{
"node_id": 2020,
"label": 2,
"text": "Title: Variational Gaussian Process Classifiers \nAbstract: ",
"neighbors": [
1857
],
"mask": "Train"
},
{
"node_id": 2021,
"label": 2,
"text": "Title: Best-First Model Merging for Dynamic Learning and Recognition \nAbstract: Best-first model merging is a general technique for dynamically choosing the structure of a neural or related architecture while avoiding overfitting. It is applicable to both learning and recognition tasks and often generalizes significantly better than fixed structures. We demonstrate the approach applied to the tasks of choosing radial basis functions for function learning, choosing local affine models for curve and constraint surface modelling, and choosing the structure of a balltree or bumptree to maximize efficiency of access.",
"neighbors": [
87,
157,
2218,
2428
],
"mask": "Test"
},
{
"node_id": 2022,
"label": 3,
"text": "Title: Geometric and Subgeometric Convergence of Diffusions with Given Stationary Distributions, and Their Discretizations \nAbstract: We describe algorithms for estimating a given measure known up to a constant of proportionality, based on a large class of diffusions (extending the Langevin model) for which is invariant. We show that under weak conditions one can choose from this class in such a way that the diffusions converge at exponential rate to , and one can even ensure that convergence is independent of the starting point of the algorithm. When convergence is less than exponential we show that it is often polynomial at known rates. We then consider methods of discretizing the diffusion in time, and find methods which inherit the convergence rates of the continuous time process. These contrast with the behaviour of the naive or Euler discretization, which can behave badly even in simple cases. ",
"neighbors": [
2008,
2153,
2219
],
"mask": "Test"
},
{
"node_id": 2023,
"label": 2,
"text": "Title: Classification of EEG Signals Using a Sparse Polynomial Builder \nAbstract: Edward S. Orosz and Charles W. Anderson Technical Report CS-94-111 April 27, 1994 ",
"neighbors": [
2135
],
"mask": "Test"
},
{
"node_id": 2024,
"label": 2,
"text": "Title: Analysis of Linsker's application of Hebbian rules to Linear Networks \nAbstract: Linsker has reported the development of structured receptive fields in simulations using a Hebb-type synaptic plasticity rule in a feed-forward linear network. The synapses develop under dynamics determined by a matrix that is closely related to the covariance matrix of input cell activities. We analyse the dynamics of the learning rule in terms of the eigenvectors of this matrix. These eigenvectors represent independently evolving weight structures. Some general theorems are presented regarding the properties of these eigenvectors and their eigenvalues. For a general covariance matrix four principal parameter regimes are predicted. We concentrate on the gaussian covariances at layer B ! C of Linsker's network. Analytic and numerical solutions for the eigenvectors at this layer are presented. Three eigenvectors dominate the dynamics: a DC eigenvector, in which all synapses have the same sign; a bi-lobed, oriented eigenvector; and a circularly symmetric, centre-surround eigenvector. Analysis of the circumstances in which each of these vectors dominates yields an explanation of the emergence of centre-surround structures and symmetry-breaking bi-lobed structures. Criteria are developed estimating the boundary of the parameter regime in which centre-surround structures emerge. The application of our analysis to Linsker's higher layers, at which the covariance functions were oscillatory, is briefly discussed. ",
"neighbors": [
427,
737,
1778,
1932
],
"mask": "Test"
},
{
"node_id": 2025,
"label": 3,
"text": "Title: WEAK CONVERGENCE AND OPTIMAL SCALING OF RANDOM WALK METROPOLIS ALGORITHMS \nAbstract: This paper considers the problem of scaling the proposal distribution of a multidimensional random walk Metropolis algorithm, in order to maximize the efficiency of the algorithm. The main result is a weak convergence result as the dimension of a sequence of target densities, n, converges to 1. When the proposal variance is appropriately scaled according to n, the sequence of stochastic processes formed by the first component of each Markov chain, converge to the appropriate limiting Langevin diffusion process. The limiting diffusion approximation admits a straight-forward efficiency maximization problem, and the resulting asymptotically optimal policy is related to the asymptotic acceptance rate of proposed moves for the algorithm. The asymptotically optimal acceptance rate is 0.234 under quite general conditions. The main result is proved in the case where the target density has a symmetric product form. Extensions of the result are discussed. ",
"neighbors": [
2153,
2377,
2693
],
"mask": "Train"
},
{
"node_id": 2026,
"label": 2,
"text": "Title: Learning overcomplete representations \nAbstract: This paper considers the problem of scaling the proposal distribution of a multidimensional random walk Metropolis algorithm, in order to maximize the efficiency of the algorithm. The main result is a weak convergence result as the dimension of a sequence of target densities, n, converges to 1. When the proposal variance is appropriately scaled according to n, the sequence of stochastic processes formed by the first component of each Markov chain, converge to the appropriate limiting Langevin diffusion process. The limiting diffusion approximation admits a straight-forward efficiency maximization problem, and the resulting asymptotically optimal policy is related to the asymptotic acceptance rate of proposed moves for the algorithm. The asymptotically optimal acceptance rate is 0.234 under quite general conditions. The main result is proved in the case where the target density has a symmetric product form. Extensions of the result are discussed. ",
"neighbors": [
570,
576,
1922,
2552
],
"mask": "Train"
},
{
"node_id": 2027,
"label": 4,
"text": "Title: Coordinating Reactive Behaviors keywords: reactive systems, planning and learning \nAbstract: Combinating reactivity with planning has been proposed as a means of compensating for potentially slow response times of planners while still making progress toward long term goals. The demands of rapid response and the complexity of many environments make it difficult to decompose, tune and coordinate reactive behaviors while ensuring consistency. Neural networks can address the tuning problem, but are less useful for decomposition and coordination. We hypothesize that interacting reactions can be decomposed into separate behaviors resident in separate networks and that the interaction can be coordinated through the tuning mechanism and a higher level controller. To explore these issues, we have implemented a neural network architecture as the reactive component of a two layer control system for a simulated race car. By varying the architecture, we test whether decomposing reactivity into separate behaviors leads to superior overall performance, coordination and learning convergence. ",
"neighbors": [
465,
565,
636,
2409
],
"mask": "Train"
},
{
"node_id": 2028,
"label": 6,
"text": "Title: Teaching a Smarter Learner \nAbstract: We introduce a formal model of teaching in which the teacher is tailored to a particular learner, yet the teaching protocol is designed so that no collusion is possible. Not surprisingly, such a model remedies the non-intuitive aspects of other models in which the teacher must successfully teach any consistent learner. We prove that any class that can be exactly identified by a deterministic polynomial-time algorithm with access to a very rich set of example-based queries is teachable by a computationally unbounded teacher and a polynomial-time learner. In addition, we present other general results relating this model of teaching to various previous results. We also consider the problem of designing teacher/learner pairs in which both the teacher and learner are polynomial-time algorithms and describe teacher/learner pairs for the classes of 1-decision lists and Horn sentences.",
"neighbors": [
308,
1003,
2653
],
"mask": "Train"
},
{
"node_id": 2029,
"label": 2,
"text": "Title: A Simple Randomized Quantization Algorithm for Neural Network Pattern Classifiers \nAbstract: This paper explores some algorithms for automatic quantization of real-valued datasets using thermometer codes for pattern classification applications. Experimental results indicate that a relatively simple randomized thermometer code generation technique can result in quantized datasets that when used to train simple perceptrons, can yield generalization on test data that is substantially better than that obtained with their unquantized counterparts.",
"neighbors": [
503,
1818,
1952,
2393
],
"mask": "Train"
},
{
"node_id": 2030,
"label": 1,
"text": "Title: Using Modeling Knowledge to Guide Design Space Search \nAbstract: Automated search of a space of candidate designs seems an attractive way to improve the traditional engineering design process. To make this approach work, however, the automated design system must include both knowledge of the modeling limitations of the method used to evaluate candidate designs and also an effective way to use this knowledge to influence the search process. We suggest that a productive approach is to include this knowledge by implementing a set of model constraint functions which measure how much each modeling assumptions is violated, and to influence the search by using the values of these model constraint functions as constraint inputs to a standard constrained nonlinear optimization numerical method. We test this idea in the domain of conceptual design of supersonic transport aircraft, and our experiments indicate that our model constraint communication strategy can decrease the cost of design space search by one or more orders of magnitude. ",
"neighbors": [
743,
744,
2077,
2128,
2130,
2131,
2316,
2659
],
"mask": "Train"
},
{
"node_id": 2031,
"label": 6,
"text": "Title: TOWARDS CONCEPT FORMATION GROUNDED ON PERCEPTION AND ACTION OF A MOBILE ROBOT \nAbstract: The recognition of objects and, hence, their descriptions must be grounded in the environment in terms of sensor data. We argue, why the concepts, used to classify perceived objects and used to perform actions on these objects, should integrate action-oriented perceptual features and perception-oriented action features. We present a grounded symbolic representation for these concepts. Moreover, the concepts should be learned. We show a logic-oriented approach to learning grounded concepts. ",
"neighbors": [
2171
],
"mask": "Validation"
},
{
"node_id": 2032,
"label": 5,
"text": "Title: Learning Action-oriented Perceptual Features for Robot Navigation \nAbstract: The recognition of objects and, hence, their descriptions must be grounded in the environment in terms of sensor data. We argue, why the concepts, used to classify perceived objects and used to perform actions on these objects, should integrate action-oriented perceptual features and perception-oriented action features. We present a grounded symbolic representation for these concepts. Moreover, the concepts should be learned. We show a logic-oriented approach to learning grounded concepts. ",
"neighbors": [
344,
2171
],
"mask": "Train"
},
{
"node_id": 2033,
"label": 2,
"text": "Title: Improving RBF Networks by the Feature Selection Approach EUBAFES \nAbstract: The curse of dimensionality is one of the severest problems concerning the application of RBF networks. The number of RBF nodes and therefore the number of training examples needed grows exponentially with the intrinsic dimensionality of the input space. One way to address this problem is the application of feature selection as a data preprocessing step. In this paper we propose a two-step approach for the determination of an optimal feature subset: First, all possible feature-subsets are reduced to those with best discrimination properties by the application of the fast and robust filter technique EUBAFES. Secondly we use a wrapper approach to judge, which of the pre-selected feature subsets leads to RBF networks with least complexity and best classification accuracy. Experiments are undertaken to show the improvement for RBF networks by our feature selection approach. ",
"neighbors": [
430,
2622
],
"mask": "Train"
},
{
"node_id": 2034,
"label": 3,
"text": "Title: Update rules for parameter estimation in Bayesian networks \nAbstract: This paper re-examines the problem of parameter estimation in Bayesian networks with missing values and hidden variables from the perspective of recent work in on-line learning [12]. We provide a unified framework for parameter estimation that encompasses both on-line learning, where the model is continuously adapted to new data cases as they arrive, and the more traditional batch learning, where a pre-accumulated set of samples is used in a one-time model selection process. In the batch case, our framework encompasses both the gradient projection algorithm [2, 3] and the EM algorithm [14] for Bayesian networks. The framework also leads to new on-line and batch parameter update schemes, including a parameterized version of EM. We provide both empirical and theoretical results indicating that parameterized EM allows faster convergence to the maximum likelihood parame ters than does standard EM.",
"neighbors": [
453,
558,
577,
2015,
2327
],
"mask": "Train"
},
{
"node_id": 2035,
"label": 0,
"text": "Title: Knowledge Compilation and Speedup Learning in Continuous Task Domains \nAbstract: Many techniques for speedup learning and knowledge compilation focus on the learning and optimization of macro-operators or control rules in task domains that can be characterized using a problem-space search paradigm. However, such a characterization does not fit well the class of task domains in which the problem solver is required to perform in a continuous manner. For example, in many robotic domains, the problem solver is required to monitor real-valued perceptual inputs and vary its motor control parameters in a continuous, on-line manner to successfully accomplish its task. In such domains, discrete symbolic states and operators are difficult to define. To improve its performance in continuous problem domains, a problem solver must learn, modify, and use continuous operators that continuously map input sensory information to appropriate control outputs. Additionally, the problem solver must learn the contexts in which those continuous operators are applicable. We propose a learning method that can compile sensorimo-tor experiences into continuous operators, which can then be used to improve performance of the problem solver. The method speeds up the task performance as well as results in improvements in the quality of the resulting solutions. The method is implemented in a robotic navigation system, which is evaluated through extensive experimen tation.",
"neighbors": [
858,
1084,
2303
],
"mask": "Train"
},
{
"node_id": 2036,
"label": 6,
"text": "Title: Query, PACS and simple-PAC Learning \nAbstract: We study a distribution dependent form of PAC learning that uses probability distributions related to Kolmogorov complexity. We relate the PACS model, defined by Denis, D'Halluin and Gilleron in [3], with the standard simple-PAC model and give a general technique that subsumes the results in [3] and [6]. ",
"neighbors": [
2696
],
"mask": "Test"
},
{
"node_id": 2037,
"label": 0,
"text": "Title: Formalising the knowledge content of case memory systems \nAbstract: Discussions of case-based reasoning often reflect an implicit assumption that a case memory system will become better informed, i.e. will increase in knowledge, as more cases are added to the case-base. This paper considers formalisations of this `knowledge content' which are a necessary preliminary to more rigourous analysis of the performance of case-based reasoning systems. In particular we are interested in modelling the learning aspects of case-based reasoning in order to study how the performance of a case-based reasoning system changes as it accumlates problem-solving experience. The current paper presents a `case-base semantics' which generalises recent formalisations of case-based classification. Within this framework, the paper explores various issues in assuring that these sematics are well-defined, and illustrates how the knowledge content of the case memory system can be seen to reside in both the chosen similarity measure and in the cases of the case-base.",
"neighbors": [
288,
1584,
2151
],
"mask": "Validation"
},
{
"node_id": 2038,
"label": 0,
"text": "Title: Knowledge Based Systems \nAbstract: Technical Report No. 95/2 ",
"neighbors": [
985,
2692
],
"mask": "Train"
},
{
"node_id": 2039,
"label": 1,
"text": "Title: A Case Study on Tuning of Genetic Algorithms by Using Performance Evaluation Based on Experimental Design \nAbstract: This paper proposes four performance measures of a genetic algorithm (GA) which enable us to compare different GAs for an op timization problem and different choices of their parameters' values. The performance measures are defined in terms of observations in simulation, such as the frequency of optimal solutions, fitness values, the frequency of evolution leaps, and the number of generations needed to reach an optimal solution. We present a case study in which parameters of a GA for robot path planning was tuned and its performance was optimized through performance evaluation by using the measures. Especially, one of the performance measures is used to demonstrate the adaptivity of the GA for robot path planning. We also propose a process of systematic tuning based on techniques for the design of experiments. ",
"neighbors": [
163,
1060,
1890,
2254
],
"mask": "Train"
},
{
"node_id": 2040,
"label": 6,
"text": "Title: On the Learnability and Usage of Acyclic Probabilistic Finite Automata \nAbstract: We propose and analyze a distribution learning algorithm for a subclass of Acyclic Probabilistic Finite Automata (APFA). This subclass is characterized by a certain distinguishability property of the automata's states. Though hardness results are known for learning distributions generated by general APFAs, we prove that our algorithm can efficiently learn distributions generated by the subclass of APFAs we consider. In particular, we show that the KL-divergence between the distribution generated by the target source and the distribution generated by our hypothesis can be made arbitrarily small with high confidence in polynomial time. We present two applications of our algorithm. In the first, we show how to model cursively written letters. The resulting models are part of a complete cursive handwriting recognition system. In the second application we demonstrate how APFAs can be used to build multiple-pronunciation models for spoken words. We evaluate the APFA based pronunciation models on labeled speech data. The good performance (in terms of the log-likelihood obtained on test data) achieved by the APFAs and the little time needed for learning suggests that the learning algorithm of APFAs might be a powerful alternative to commonly used probabilistic models. ",
"neighbors": [
574,
672,
1006,
1924,
2004,
2360
],
"mask": "Test"
},
{
"node_id": 2041,
"label": 2,
"text": "Title: Natural Language Grammatical Inference with Recurrent Neural Networks \nAbstract: This paper examines the inductive inference of a complex grammar with neural networks specifically, the task considered is that of training a network to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government-and-Binding theory. Neural networks are trained, without the division into learned vs. innate components assumed by Chomsky, in an attempt to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. How a recurrent neural network could possess linguistic capability, and the properties of various common recurrent neural network architectures are discussed. The problem exhibits training behavior which is often not present with smaller grammars, and training was initially difficult. However, after implementing several techniques aimed at improving the convergence of the gradient descent backpropagation-through- time training algorithm, significant learning was possible. It was found that certain architectures are better able to learn an appropriate grammar. The operation of the networks and their training is analyzed. Finally, the extraction of rules in the form of deterministic finite state automata is investigated. ",
"neighbors": [
1744
],
"mask": "Train"
},
{
"node_id": 2042,
"label": 2,
"text": "Title: Fast Bounded Smooth Regression with Lazy Neural Trees \nAbstract: We propose the lazy neural tree (LNT) as the appropriate architecture for the realization of smooth regression systems. The LNT is a hybrid of a decision tree and a neural network. From the neural network it inherits smoothness of the generated function, incremental adaptability, and conceptual simplicity. From the decision tree it inherits the topology and initial parameter setting as well as a very efficient sequential implementation that out-performs traditional neural network simulations by the order of magnitudes. The enormous speed is achieved by lazy evaluation. A further speed-up can be obtained by the application of a window-ing scheme if the region of interesting results is restricted. ",
"neighbors": [
378,
2428
],
"mask": "Test"
},
{
"node_id": 2043,
"label": 2,
"text": "Title: Using Mixtures of Factor Analyzers for Segmentation and Pose Estimation Category: Visual Processing Preference: Oral \nAbstract: To read a hand-written digit string, it is helpful to segment the image into separate digits. Bottom-up segmentation heuristics often fail when neighboring digits overlap substantially. We describe a system that has a stochastic generative model of each digit class and we show that this is the only knowledge required for segmentation. The system uses Gibbs sampling to construct a perceptual interpretation of a digit string and segmentation arises naturally from the \"explaining away\" effects that occur during Bayesian inference. By using conditional mixtures of factor analyzers, it is possible to extract an explicit, compact representation of the instantiation parameters that describe the pose of each digit. These instantiation parameters can then be used as the inputs to a higher level system that models the relationships between digits. The same technique could be used to model individual digits as redundancies between the instantiation parameters of their parts.",
"neighbors": [
2270
],
"mask": "Train"
},
{
"node_id": 2044,
"label": 2,
"text": "Title: Neural Networks and Statistical Models Proceedings of the Nineteenth Annual SAS Users Group International Conference,\nAbstract: ",
"neighbors": [
80,
427,
1149,
2683
],
"mask": "Train"
},
{
"node_id": 2045,
"label": 5,
"text": "Title: The Arguments of Newly Invented Predicates in ILP \nAbstract: In this paper we investigate the problem of choosing arguments for a new predicate. We identify the relevant terms to be considered as arguments, and propose methods to choose among them based on propositional minimisation.",
"neighbors": [
638,
2550
],
"mask": "Train"
},
{
"node_id": 2046,
"label": 2,
"text": "Title: A Method for Identifying Splice Sites and Translational Start Sites in \nAbstract: This paper describes a new method for determining the consensus sequences that signal the start of translation and the boundaries between exons and introns (donor and acceptor sites) in eukaryotic mRNA. The method takes into account the dependencies between adjacent bases, in contrast to the usual technique of considering each position independently. When coupled with a dynamic program to compute the most likely sequence, new consensus sequences emerge. The consensus sequence information is summarized in conditional probability matrices which, when used to locate signals in uncharacter-ized genomic DNA, have greater sensitivity and specificity than conventional matrices. Species-specific versions of these matrices are especially effective at distinguishing true and false sites. ",
"neighbors": [
268,
616,
2107
],
"mask": "Test"
},
{
"node_id": 2047,
"label": 1,
"text": "Title: Evolutionary wanderlust: Sexual selection with directional mate preferences. Evolutionary wanderlust: Sexual selection with directional mate preferences \nAbstract: In the pantheon of evolutionary forces, the optimizing Apollonian powers of natural selection are generally assumed to dominate the dark Dionysian dynamics of sexual selection. But this need not be the case, particularly with a class of selective mating mechanisms called `directional mate preferences' (Kirkpatrick, 1987). In previous simulation research, we showed that nondirectional assortative mating preferences could cause populations to spontaneously split apart into separate species (Todd & Miller, 1991). In this paper, we show that directional mate preferences can cause populations to wander capriciously through phenotype space, under a strange form of runaway sexual selection, with or without the influence of natural selection pressures. When directional mate preferences are free to evolve, they do not always evolve to point in the direction of natural-selective peaks. Sexual selection can thus take on a life of its own, such that mate preferences within a species become a distinct and important part of the environment to which the species' phenotypes adapt. These results suggest a broader conception of `adaptive behavior', in which attracting potential mates becomes as important as finding food and avoiding predators. We present a framework for simulating a wide range of directional and non-directional mate preferences, and discuss some practical and scientific applications of simu lating sexual selection.",
"neighbors": [
2111
],
"mask": "Train"
},
{
"node_id": 2048,
"label": 0,
"text": "Title: Technical Diagnosis: Fallexperte-D of further knowledge sources (domain knowledge, common knowledge) is investigated in the\nAbstract: Case based reasoning (CBR) uses the knowledge from former experiences (\"known cases\"). Since special knowledge of an expert is mainly subject to his experiences, the CBR techniques are a good base for the development of expert systems. We investigate the problem for technical diagnosis. Diagnosis is not considered as a classification task, but as a process to be guided by computer assisted experience. This corresponds to the flexible \"case completion\" approach. Flexibility is also needed for the expert view with predominant interest in the unexpected, unpredictible cases. ",
"neighbors": [
1854
],
"mask": "Train"
},
{
"node_id": 2049,
"label": 2,
"text": "Title: Learning Feature-based Semantics with Simple Recurrent Networks \nAbstract: The paper investigates the possibilities for using simple recurrent networks as transducers which map sequential natural language input into non-sequential feature-based semantics. The networks perform well on sentences containing a single main predicate (encoded by transitive verbs or prepositions) applied to multiple-feature objects (encoded as noun-phrases with adjectival modifiers), and shows robustness against ungrammatical inputs. A second set of experiments deals with sentences containing embedded structures. Here the network is able to process multiple levels of sentence-final embeddings but only one level of center-embedding. This turns out to be a consequence of the network's inability to retain information that is not reflected in the outputs over intermediate phases of processing. Two extensions to Elman's [9] original recurrent network architecture are introduced. ",
"neighbors": [
2218,
2306,
2410
],
"mask": "Train"
},
{
"node_id": 2050,
"label": 2,
"text": "Title: TABLE DES MATI ERES 1 Apprentissage et approximation les techniques de regularisation 3 1.1 Introduction\nAbstract: The paper investigates the possibilities for using simple recurrent networks as transducers which map sequential natural language input into non-sequential feature-based semantics. The networks perform well on sentences containing a single main predicate (encoded by transitive verbs or prepositions) applied to multiple-feature objects (encoded as noun-phrases with adjectival modifiers), and shows robustness against ungrammatical inputs. A second set of experiments deals with sentences containing embedded structures. Here the network is able to process multiple levels of sentence-final embeddings but only one level of center-embedding. This turns out to be a consequence of the network's inability to retain information that is not reflected in the outputs over intermediate phases of processing. Two extensions to Elman's [9] original recurrent network architecture are introduced. ",
"neighbors": [
611,
2378
],
"mask": "Validation"
},
{
"node_id": 2051,
"label": 4,
"text": "Title: Emergent Control and Planning in an Autonomous Vehicle \nAbstract: We use a connectionist network trained with reinforcement to control both an autonomous robot vehicle and a simulated robot. We show that given appropriate sensory data and architectural structure, a network can learn to control the robot for a simple navigation problem. We then investigate a more complex goal-based problem and examine the plan-like behavior that emerges. ",
"neighbors": [
1969
],
"mask": "Test"
},
{
"node_id": 2052,
"label": 0,
"text": "Title: Applying Case-Based Reasoning to Control in Robotics \nAbstract: The proposed architecture is experimentally evaluated on two real world domains and the results are compared to other machine learning algorithms applied to the same problem.",
"neighbors": [
1483,
2062
],
"mask": "Train"
},
{
"node_id": 2053,
"label": 6,
"text": "Title: On the Complexity of Learning from Drifting Distributions \nAbstract: The proposed architecture is experimentally evaluated on two real world domains and the results are compared to other machine learning algorithms applied to the same problem.",
"neighbors": [
109,
488,
2054,
2685,
2690
],
"mask": "Validation"
},
{
"node_id": 2054,
"label": 6,
"text": "Title: Tracking Drifting Concepts By Minimizing Disagreements \nAbstract: In this paper we consider the problem of tracking a subset of a domain (called the target) which changes gradually over time. A single (unknown) probability distribution over the domain is used to generate random examples for the learning algorithm and measure the speed at which the target changes. Clearly, the more rapidly the target moves, the harder it is for the algorithm to maintain a good approximation of the target. Therefore we evaluate algorithms based on how much movement of the target can be tolerated between examples while predicting with accuracy *. Furthermore, the complexity of the class H of possible targets, as measured by d, its VC-dimension, also effects the difficulty of tracking the target concept. We show that if the problem of minimizing the number of disagreements with a sample from among concepts in a class H can be approximated to within a factor k, then there is a simple tracking algorithm for H which can achieve a probability * of making a mistake if the target movement rate is at most a constant times * 2 =(k(d + k) ln 1 * ), where d is the Vapnik-Chervonenkis dimension of H. Also, we show that if H is properly PAC-learnable, then there is an efficient (randomized) algorithm that with high probability approximately minimizes disagreements to within a factor of 7d + 1, yielding an efficient tracking algorithm for H which tolerates drift rates up to a constant times * 2 =(d 2 ln 1 In addition, we prove complementary results for the classes of halfspaces and axis-aligned hy- perrectangles showing that the maximum rate of drift that any algorithm (even with unlimited computational power) can tolerate is a constant times * 2 =d. ",
"neighbors": [
109,
591,
640,
2053,
2685
],
"mask": "Train"
},
{
"node_id": 2055,
"label": 1,
"text": "Title: 1 FEATURE SUBSET SELECTION USING A GENETIC ALGORITHM time needed for learning a sufficiently accurate\nAbstract: Practical pattern classification and knowledge discovery problems require selection of a subset of attributes or features (from a much larger set) to represent the patterns to be classified. This is due to the fact that the performance of the classifier (usually induced by some learning algorithm) and the cost of classification are sensitive to the choice of the features used to construct the classifier. Exhaustive evaluation of possible feature subsets is usually infeasible in practice because of the large amount of computational effort required. Genetic algorithms, which belong to a class of randomized heuristic search techniques, offer an attractive approach to find near-optimal solutions to such optimization problems. This paper presents an approach to feature subset selection using a genetic algorithm. Some advantages of this approach include the ability to accommodate multiple criteria such as accuracy and cost of classification into the feature selection process and to find feature subsets that perform well for particular choices of the inductive learning algorithm used to construct the pattern classifier. Our experiments with several benchmark real-world pattern classification problems demonstrate the feasibility of this approach to feature subset selection in the automated Many practical pattern classification tasks (e.g., medical diagnosis) require learning of an appropriate classification function that assigns a given input pattern (typically represented using a vector of attribute or feature values) to one of a finite set of classes. The choice of features, attributes, or measurements used to represent patterns that are presented to a classifier affect (among other things): The accuracy of the classification function that can be learned using an inductive learning algorithm (e.g., a decision tree induction algorithm or a neural network learning algorithm): The features used to describe the patterns implicitly define a pattern language. If the language is not expressive enough, it would fail to capture the information that is necessary for classification and hence regardless of the learning algorithm used, the accuracy of the classification function learned would be limited by this lack of information. design of neural networks for pattern classification and knowledge discovery.",
"neighbors": [
2352
],
"mask": "Train"
},
{
"node_id": 2056,
"label": 2,
"text": "Title: PREDICTION WITH GAUSSIAN PROCESSES: FROM LINEAR REGRESSION TO LINEAR PREDICTION AND BEYOND To appear in\nAbstract: The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads in to a more general discussion of Gaussian processes in section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models and the use of Gaussian processes in classification problems. ",
"neighbors": [
271,
2095
],
"mask": "Validation"
},
{
"node_id": 2057,
"label": 0,
"text": "Title: Chunking in soar: The anatomy of a general learn ing mechanism. Machine Learning, 1(1). Learning\nAbstract: gers University. Also appears as tech. report ML- TR-7. Minton, S. (1988). Quantitative results concerning the utility of explanation-based learning. In Proceedings of National Conference on Artificial Intelli gence, pages 564-569. St. Paul, MN. ",
"neighbors": [
790,
1510,
2215,
2465,
2695
],
"mask": "Train"
},
{
"node_id": 2058,
"label": 1,
"text": "Title: Challenges in Evolving Controllers for Physical Robots \nAbstract: This paper discusses the feasibility of applying evolutionary methods to automatically generating controllers for physical mobile robots. We overview the state of the art in the field, describe some of the main approaches, discuss the key challenges, unanswered problems, and some promising directions.",
"neighbors": [
755,
757,
2232
],
"mask": "Train"
},
{
"node_id": 2059,
"label": 6,
"text": "Title: Challenges in Evolving Controllers for Physical Robots \nAbstract: General convergence results for linear discriminant updates Abstract The problem of learning linear discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of quasi-additive algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers much of this class, including both Perceptron and Winnow but also many novel algorithms. Our proof introduces a generic measure of progress that seems to capture much of when and how these algorithms converge. Using this measure, we develop a simple general technique for proving mistake bounds, which we apply to the new algorithms as well as existing algorithms. When applied to known algorithms, our technique automatically produces close variants of existing proofs (and we generally obtain the known bounds, to within constants) thus showing, in a certain sense, that these seem ingly diverse results are fundamentally isomorphic. ",
"neighbors": [
453,
2651
],
"mask": "Test"
},
{
"node_id": 2060,
"label": 0,
"text": "Title: A Similarity-Based Retrieval Tool for Software Repositories \nAbstract: In this paper we present a prototype of a flexible similarity-based retrieval system. Its flexibility is supported by allowing for an imprecisely specified query. Moreover, our algorithm allows for assessing if the retrieved items are relevant in the initial context, specified in the query. The presented system can be used as a supporting tool for a software repository. We also discuss system evaluation with concerns on usefulness, scalability, applicability and comparability. Evaluation of the T A3 system on three domains gives us encouraging results and an integration of TA3 into a real software repository as a retrieval tool is ongoing. ",
"neighbors": [
857,
1483,
2062
],
"mask": "Train"
},
{
"node_id": 2061,
"label": 0,
"text": "Title: Inductive Learning and Case-Based Reasoning \nAbstract: This paper describes an application of an inductive learning techniques to case-based reasoning. We introduce two main forms of induction, define case-based reasoning and present a combination of both. The evaluation of the proposed system, called TA3, is carried out on a classification task, namely character recognition. We show how inductive knowledge improves knowledge representation and in turn flexibility of the system, its performance (in terms of classification accuracy) and its scalability. ",
"neighbors": [
96,
215,
2062
],
"mask": "Train"
},
{
"node_id": 2062,
"label": 0,
"text": "Title: A Case-Based Reasoning Approach \nAbstract: The AAAI Fall Symposium; Flexible Computation in Intelligent Systems: Results, Issues, and Opportunities. Nov. 9-11, 1996, Cambridge, MA Abstract This paper presents a case-based reasoning system TA3. We address the flexibility of the case-based reasoning process, namely flexible retrieval of relevant experiences, by using a novel similarity assessment theory. To exemplify the advantages of such an approach, we have experimentally evaluated the system and compared its performance to the performance of non-flexible version of TA3 and to other machine learning algorithms on several domains. ",
"neighbors": [
2052,
2060,
2061,
2066
],
"mask": "Train"
},
{
"node_id": 2063,
"label": 3,
"text": "Title: Planning Medical Therapy Using Partially Observable Markov Decision Processes. \nAbstract: Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead they are very often dependent and interleaved over time, mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different treatment and diagnostic (investigative) procedures. The framework particularly suitable for modeling such a complex therapy decision process is Partially observable Markov decision process (POMDP). Unfortunately the problem of finding the optimal therapy within the standard POMDP framework is also computationally very costly. In this paper we investigate various structural extensions of the standard POMDP framework and approximation methods which allow us to simplify model construction process for larger therapy problems and to solve them faster. A therapy problem we target specifically is the management of patients with ischemic heart disease. ",
"neighbors": [
2494
],
"mask": "Train"
},
{
"node_id": 2064,
"label": 3,
"text": "Title: A Market Framework for Pooling Opinions \nAbstract: Consider a group of Bayesians, each with a subjective probability distribution over a set of uncertain events. An opinion pool derives a single consensus distribution over the events, representative of the group as a whole. Several pooling functions have been proposed, each sensible under particular assumptions or measures. Many researchers over many years have failed to form a consensus on which method is best. We propose a market-based pooling procedure, and analyze its properties. Participants bet on securities, each paying off contingent on an uncertain event, so as to maximize their own expected utilities. The consensus probability of each event is defined as the corresponding security's equilibrium price. The market framework provides explicit monetary incentives for participation and honesty, and allows agents to maintain individual rationality and limited privacy. \"No arbitrage\" arguments ensure that the equilibrium prices form legal probabilities. We show that, when events are disjoint and all participants have exponential utility for money, the market derives the same result as the logarithmic opinion pool; similarly, logarithmic utility for money yields the linear opinion pool. In both cases, we prove that the group's behavior is, to an outside observer, indistinguishable from that of a rational individual, whose beliefs equal the equilibrium prices. ",
"neighbors": [
1777,
1802
],
"mask": "Train"
},
{
"node_id": 2065,
"label": 1,
"text": "Title: Performance Enhanced Genetic Programming \nAbstract: Genetic Programming is increasing in popularity as the basis for a wide range of learning algorithms. However, the technique has to date only been successfully applied to modest tasks because of the performance overheads of evolving a large number of data structures, many of which do not correspond to a valid program. We address this problem directly and demonstrate how the evolutionary process can be achieved with much greater efficiency through the use of a formally-based representation and strong typing. We report initial experimental results which demonstrate that our technique exhibits significantly better performance than previous work. ",
"neighbors": [
1985,
2086
],
"mask": "Train"
},
{
"node_id": 2066,
"label": 6,
"text": "Title: On the Informativeness of the DNA Promoter Sequences Domain Theory \nAbstract: The DNA promoter sequences domain theory and database have become popular for testing systems that integrate empirical and analytical learning. This note reports a simple change and reinterpretation of the domain theory in terms of M-of-N concepts, involving no learning, that results in an accuracy of 93.4% on the 106 items of the database. Moreover, an exhaustive search of the space of M-of-N domain theory interpretations indicates that the expected accuracy of a randomly chosen interpretation is 76.5%, and that a maximum accuracy of 97.2% is achieved in 12 cases. This demonstrates the informativeness of the domain theory, without the complications of understanding the interactions between various learning algorithms and the theory. In addition, our results help characterize the difficulty of learning using the DNA promoters theory.",
"neighbors": [
159,
985,
2062,
2674
],
"mask": "Validation"
},
{
"node_id": 2067,
"label": 6,
"text": "Title: Classification by Pairwise Coupling \nAbstract: We discuss a strategy for polychotomous classification that involves estimating class probabilities for each pair of classes, and then coupling the estimates together. The coupling model is similar to the Bradley-Terry method for paired comparisons. We study the nature of the class probability estimates that arise, and examine the performance of the procedure in real and simulated datasets. Classifiers used include linear discriminants, nearest neighbors, and the support vector machine.",
"neighbors": [
1792
],
"mask": "Validation"
},
{
"node_id": 2068,
"label": 2,
"text": "Title: Rearrangement of receptive field topography after intracortical and peripheral stimulation: The role of plasticity in\nAbstract: Intracortical microstimulation (ICMS) of a single site in the somatosensory cortex of rats and monkeys for 2-6 hours produces a large increase in the number of neurons responsive to the skin region corresponding to the ICMS-site receptive field (RF), with very little effect on the position and size of the ICMS-site RF, and the response evoked at the ICMS site by tactile stimulation (Recanzone et al., 1992b). Large changes in RF topography are observed following several weeks of repetitive stimulation of a restricted skin region in monkeys (Jenkins et al., 1990; Recanzone et al., 1992acde). Repetitive stimulation of a localized skin region in monkeys produced by training the monkeys in a tactile frequency discrimination task improves their performance (Recanzone et al., 1992a). It has been suggested that these changes in RF topography are caused by competitive learning in excitatory pathways (Grajski & Merzenich, 1990; Jenkins et al., 1990; Recanzone et al., 1992abcde). ICMS almost simultaneously excites excitatory and inhibitory terminals and excitatory and inhibitory cortical neurons within a few microns of the stimulating electrode. Thus, this paper investigates the implications of the possibility that lateral inhibitory pathways too may undergo synaptic plasticity during ICMS. Lateral inhibitory pathways may also undergo synaptic plasticity in adult animals during peripheral conditioning. The \"EXIN\" (afferent excitatory and lateral inhibitory) synaptic plasticity rules ",
"neighbors": [
355,
2228
],
"mask": "Validation"
},
{
"node_id": 2069,
"label": 3,
"text": "Title: A Note on Testing Exogeneity of Instrumental Variables (DRAFT PAPER) \nAbstract: Intracortical microstimulation (ICMS) of a single site in the somatosensory cortex of rats and monkeys for 2-6 hours produces a large increase in the number of neurons responsive to the skin region corresponding to the ICMS-site receptive field (RF), with very little effect on the position and size of the ICMS-site RF, and the response evoked at the ICMS site by tactile stimulation (Recanzone et al., 1992b). Large changes in RF topography are observed following several weeks of repetitive stimulation of a restricted skin region in monkeys (Jenkins et al., 1990; Recanzone et al., 1992acde). Repetitive stimulation of a localized skin region in monkeys produced by training the monkeys in a tactile frequency discrimination task improves their performance (Recanzone et al., 1992a). It has been suggested that these changes in RF topography are caused by competitive learning in excitatory pathways (Grajski & Merzenich, 1990; Jenkins et al., 1990; Recanzone et al., 1992abcde). ICMS almost simultaneously excites excitatory and inhibitory terminals and excitatory and inhibitory cortical neurons within a few microns of the stimulating electrode. Thus, this paper investigates the implications of the possibility that lateral inhibitory pathways too may undergo synaptic plasticity during ICMS. Lateral inhibitory pathways may also undergo synaptic plasticity in adult animals during peripheral conditioning. The \"EXIN\" (afferent excitatory and lateral inhibitory) synaptic plasticity rules ",
"neighbors": [
260,
2434
],
"mask": "Validation"
},
{
"node_id": 2070,
"label": 5,
"text": "Title: A Partial Memory Incremental Learning Methodology And Its Application To Computer Intrusion Detection \nAbstract: Intracortical microstimulation (ICMS) of a single site in the somatosensory cortex of rats and monkeys for 2-6 hours produces a large increase in the number of neurons responsive to the skin region corresponding to the ICMS-site receptive field (RF), with very little effect on the position and size of the ICMS-site RF, and the response evoked at the ICMS site by tactile stimulation (Recanzone et al., 1992b). Large changes in RF topography are observed following several weeks of repetitive stimulation of a restricted skin region in monkeys (Jenkins et al., 1990; Recanzone et al., 1992acde). Repetitive stimulation of a localized skin region in monkeys produced by training the monkeys in a tactile frequency discrimination task improves their performance (Recanzone et al., 1992a). It has been suggested that these changes in RF topography are caused by competitive learning in excitatory pathways (Grajski & Merzenich, 1990; Jenkins et al., 1990; Recanzone et al., 1992abcde). ICMS almost simultaneously excites excitatory and inhibitory terminals and excitatory and inhibitory cortical neurons within a few microns of the stimulating electrode. Thus, this paper investigates the implications of the possibility that lateral inhibitory pathways too may undergo synaptic plasticity during ICMS. Lateral inhibitory pathways may also undergo synaptic plasticity in adult animals during peripheral conditioning. The \"EXIN\" (afferent excitatory and lateral inhibitory) synaptic plasticity rules ",
"neighbors": [
2602,
2640
],
"mask": "Validation"
},
{
"node_id": 2071,
"label": 0,
"text": "Title: LINNEO A Classification Methodology for Ill-structured Domains \nAbstract: In this work we present a classification methodology (LINNEO + ) to discover concepts from ill-structured domains and to organize hierarchies with them. In order to achieve this aim LINNEO + uses conceptual learning techniques and classification. The final target is to build knowledge bases after expert validation. Some techniques for the improvement of the results in the classification step are used, like biasing using partial expert knowledge (classification rules or causal and structural dependencies between attributes) or delayed cluster assignation of objects. Also some comparisons with a few well-known systems are shown.",
"neighbors": [
1809
],
"mask": "Test"
},
{
"node_id": 2072,
"label": 2,
"text": "Title: Data Mining for Association Rules with Unsupervised Neural Networks \nAbstract: results for Gaussian mixture models and factor analysis are discussed. ",
"neighbors": [
36,
667,
2227
],
"mask": "Train"
},
{
"node_id": 2073,
"label": 6,
"text": "Title: Classification Using -Machines and Constructive Function Approximation \nAbstract: The classification algorithm CLEF combines a version of a linear machine known as a - machine with a non-linear function approxima-tor that constructs its own features. The algorithm finds non-linear decision boundaries by constructing features that are needed to learn the necessary discriminant functions. The CLEF algorithm is proven to separate all consistently labelled training instances, even when they are not linearly separable in the input variables. The algorithm is illustrated on a variety of tasks, showing an improvement over C4.5, a state-of-art de cision tree learning algorithm.",
"neighbors": [
1818
],
"mask": "Train"
},
{
"node_id": 2074,
"label": 0,
"text": "Title: The Management of Context-Sensitive Features: A Review of Strategies \nAbstract: In this paper, we review five heuristic strategies for handling context-sensitive features in supervised machine learning from examples. We discuss two methods for recovering lost (implicit) contextual information. We mention some evidence that hybrid strategies can have a synergetic effect. We then show how the work of several machine learning researchers fits into this framework. While we do not claim that these strategies exhaust the possibilities, it appears that the framework includes all of the techniques that can be found in the published literature on context sensitive learning.",
"neighbors": [
1636,
1647,
2607,
2615
],
"mask": "Train"
},
{
"node_id": 2075,
"label": 0,
"text": "Title: Case Retrieval Nets: Foundations, Properties, \nAbstract: Implementation, and Results ",
"neighbors": [
1855
],
"mask": "Train"
},
{
"node_id": 2076,
"label": 3,
"text": "Title: Automated Discovery of Linear Feedback Models 1 \nAbstract: Implementation, and Results ",
"neighbors": [
211,
772,
1324,
1527,
2559
],
"mask": "Test"
},
{
"node_id": 2077,
"label": 1,
"text": "Title: An Adaptive Penalty Approach for Constrained Genetic-Algorithm Optimization \nAbstract: In this paper we describe a new adaptive penalty approach for handling constraints in genetic algorithm optimization problems. The idea is to start with a relatively small penalty coefficient and then increase it or decrease it on demand as the optimization progresses. Empirical results in several engineering design domains demonstrate the merit of the proposed approach.",
"neighbors": [
163,
743,
744,
2030
],
"mask": "Validation"
},
{
"node_id": 2078,
"label": 3,
"text": "Title: Structured Reachability Analysis for Markov Decision Processes \nAbstract: Recent research in decision theoretic planning has focussed on making the solution of Markov decision processes (MDPs) more feasible. We develop a family of algorithms for structured reachability analysis of MDPs that are suitable when an initial state (or set of states) is known. Using compact, structured representations of MDPs (e.g., Bayesian networks), our methods, which vary in the tradeoff between complexity and accuracy, produce structured descriptions of (estimated) reachable states that can be used to eliminate variables or variable values from the problem description, reducing the size of the MDP and making it easier to solve. One contribution of our work is the extension of ideas from GRAPHPLAN to deal with the distributed nature of action representations typically embodied within Bayes nets and the problem of correlated action effects. We also demonstrate that our algorithm can be made more complete by using k-ary constraints instead of binary constraints. Another contribution is the illustration of how the compact representation of reachability constraints can be exploited by several existing (exact and approximate) abstraction algorithms for MDPs.",
"neighbors": [
295,
1459,
1983,
2406,
2474
],
"mask": "Validation"
},
{
"node_id": 2079,
"label": 5,
"text": "Title: Extraction of Meta-Knowledge to Restrict the Hypothesis Space for ILP Systems incorporating them in FOIL.\nAbstract: Many ILP systems, such as GOLEM, FOIL, and MIS, take advantage of user supplied meta-knowledge to restrict the hypothesis space. This meta-knowledge can be in the form of type information about arguments in the predicate being learned, or it can be information about whether a certain argument in the predicate is functionally dependent on the other arguments (supplied as mode information). This meta knowledge is explicitly supplied to an ILP system in addition to the data. The present paper argues that in many cases the meta knowledge can be extracted directly from the raw data. Three algorithms are presented that learn type, mode, and symmetric meta-knowledge from data. These algorithms can be incorporated in existing ILP systems in the form of a preprocessor that obviates the need for a user to explicitly provide this information. In many cases, the algorithms can extract meta- knowledge that the user is either unaware of, but which information can be used by the ILP system to restrict the hypothesis space. ",
"neighbors": [
1428,
2609
],
"mask": "Train"
},
{
"node_id": 2080,
"label": 6,
"text": "Title: Learning from positive data \nAbstract: Gold showed in 1967 that not even regular grammars can be exactly identified from positive examples alone. Since it is known that children learn natural grammars almost exclusively from positives examples, Gold's result has been used as a theoretical support for Chomsky's theory of innate human linguistic abilities. In this paper new results are presented which show that within a Bayesian framework not only grammars, but also logic programs are learnable with arbitrarily low expected error from positive examples only. In addition, we show that the upper bound for expected error of a learner which maximises the Bayes' posterior probability when learning from positive examples is within a small additive term of one which does the same from a mixture of positive and negative examples. An Inductive Logic Programming implementation is described which avoids the pitfalls of greedy search by global optimisation of this function during the local construction of individual clauses of the hypothesis. Results of testing this implementation on artificially-generated data-sets are reported. These results are in agreement with the theoretical predictions. ",
"neighbors": [
1290,
2329,
2609
],
"mask": "Test"
},
{
"node_id": 2081,
"label": 3,
"text": "Title: DE-NOISING BY reconstruction f n is defined in the wavelet domain by translating all the\nAbstract: p n. We prove two results about that estimator. [Smooth]: With high probability ^ f fl n is at least as smooth as f , in any of a wide variety of smoothness measures. [Adapt]: The estimator comes nearly as close in mean square to f as any measurable estimator can come, uniformly over balls in each of two broad scales of smoothness classes. These two properties are unprecedented in several ways. Our proof of these results develops new facts about abstract statistical inference and its connection with Acknowledgements. These results were described at the Symposium on Wavelet Theory, held in connection with the Shanks Lectures at Van-derbilt University, April 3-4 1992. The author would like to thank Professor L.L. Schumaker for hospitality at the conference, and R.A. DeVore, Iain Johnstone, Gerard Kerkyacharian, Bradley Lucier, A.S. Nemirovskii, Ingram Olkin, and Dominique Picard for interesting discussions and correspondence on related topics. The author is also at the University of California, Berkeley ",
"neighbors": [
1910,
2159,
2366
],
"mask": "Test"
},
{
"node_id": 2082,
"label": 1,
"text": "Title: %A L. Ingber %T Adaptive simulated annealing (ASA): Lessons learned %J Control and Cybernetics Annealing\nAbstract: ",
"neighbors": [
1775,
1793,
1795,
2178,
2545
],
"mask": "Train"
},
{
"node_id": 2083,
"label": 2,
"text": "Title: TREE CONTRACTIONS AND EVOLUTIONARY TREES \nAbstract: An evolutionary tree is a rooted tree where each internal vertex has at least two children and where the leaves are labeled with distinct symbols representing species. Evolutionary trees are useful for modeling the evolutionary history of species. An agreement subtree of two evolutionary trees is an evolutionary tree which is also a topological subtree of the two given trees. We give an algorithm to determine the largest possible number of leaves in any agreement subtree of two trees T 1 and T 2 with n leaves each. If the maximum degree d of these trees is bounded by a constant, the time complexity is O(n log 2 n) and is within a log n factor of optimal. For general d, this algorithm runs in O(nd 2 log d log 2 n) time or alternately in O(nd p d log 3 n) time. ",
"neighbors": [
299,
1827,
2511
],
"mask": "Train"
},
{
"node_id": 2084,
"label": 2,
"text": "Title: Synthesize, Optimize, Analyze, Repeat (SOAR): Application of Neural Network Tools to ECG Patient Monitoring \nAbstract: Results are reported from the application of tools for synthesizing, optimizing and analyzing neural networks to an ECG Patient Monitoring task. A neural network was synthesized from a rule-based classifier and optimized over a set of normal and abnormal heartbeats. The classification error rate on a separate and larger test set was reduced by a factor of 2. Sensitivity analysis of the synthesized and optimized networks revealed informative differences. Analysis of the weights and unit activations of the optimized network enabled a reduction in size of the network by a factor of 40% without loss of accuracy.",
"neighbors": [
2615
],
"mask": "Test"
},
{
"node_id": 2085,
"label": 2,
"text": "Title: Modeling dynamic receptive field changes produced by intracortical microstimulation \nAbstract: Intracortical microstimulation (ICMS) of a localized site in the somatosensory cortex of rats and monkeys for 2-6 hours produces a large increase in the cortical representation of the skin region represented by the ICMS-site neurons before ICMS, with very little effect on the ICMS-site neuron's RF location, RF size, and responsiveness (Recanzone et al., 1992). The \"EXIN\" (afferent excitatory and lateral inhibitory) learning rules (Marshall, 1995) are used to model RF changes during ICMS. The EXIN model produces reorganization of RF topography similar to that observed experimentally. The possible role of inhibitory learning in producing the effects of ICMS is studied by simulating the EXIN model with only lateral inhibitory learning. The model also produces an increase in the cortical representation of the skin region represented by the ICMS-site RF. ICMS is compared to artificial scotoma conditioning (Pettet & Gilbert, 1992) and retinal lesions (Darian-Smith & Gilbert, 1995), and it is suggested that lateral inhibitory learning may be a general principle of cortical plasticity. ",
"neighbors": [
1093,
1094,
2228
],
"mask": "Train"
},
{
"node_id": 2086,
"label": 1,
"text": "Title: ABSTRACT In general, the machine learning process can be accelerated through the use of additional\nAbstract: Intracortical microstimulation (ICMS) of a localized site in the somatosensory cortex of rats and monkeys for 2-6 hours produces a large increase in the cortical representation of the skin region represented by the ICMS-site neurons before ICMS, with very little effect on the ICMS-site neuron's RF location, RF size, and responsiveness (Recanzone et al., 1992). The \"EXIN\" (afferent excitatory and lateral inhibitory) learning rules (Marshall, 1995) are used to model RF changes during ICMS. The EXIN model produces reorganization of RF topography similar to that observed experimentally. The possible role of inhibitory learning in producing the effects of ICMS is studied by simulating the EXIN model with only lateral inhibitory learning. The model also produces an increase in the cortical representation of the skin region represented by the ICMS-site RF. ICMS is compared to artificial scotoma conditioning (Pettet & Gilbert, 1992) and retinal lesions (Darian-Smith & Gilbert, 1995), and it is suggested that lateral inhibitory learning may be a general principle of cortical plasticity. ",
"neighbors": [
1231,
2065
],
"mask": "Train"
},
{
"node_id": 2087,
"label": 1,
"text": "Title: Price's Theorem and the MAX Problem \nAbstract: We present a detailed analysis of the evolution of GP populations using the problem of finding a program which returns the maximum possible value for a given terminal and function set and a depth limit on the program tree (known as the MAX problem). We confirm the basic message of [ Gathercole and Ross, 1996 ] that crossover together with program size restrictions can be responsible for premature convergence to a sub-optimal solution. We show that this can happen even when the population retains a high level of variety and show that in many cases evolution from the sub-optimal solution to the solution is possible if sufficient time is allowed. In both cases theoretical models are presented and compared with actual runs. Experimental evidence is presented that Price's Covariance and Selection Theorem can be applied to GP populations and the practical effect of program size restrictions are noted. Finally we show that covariance between gene frequency and fitness in the first few generations can be used to predict the course of GP runs.",
"neighbors": [
1257,
1911,
2175,
2261
],
"mask": "Train"
},
{
"node_id": 2088,
"label": 3,
"text": "Title: A Probabilistic Calculus of Actions \nAbstract: We present a symbolic machinery that admits both probabilistic and causal information about a given domain, and produces probabilistic statements about the effect of actions and the impact of observations. The calculus admits two types of conditioning operators: ordinary Bayes conditioning, P (yjX = x), which represents the observation X = x, and causal conditioning, P (yjdo(X = x)), read: the probability of Y = y conditioned on holding X constant (at x) by deliberate action. Given a mixture of such observational and causal sentences, together with the topology of the causal graph, the calculus derives new conditional probabilities of both types, thus enabling one to quantify the effects of actions and observations.",
"neighbors": [
248,
776,
1527,
2167,
2524,
2525
],
"mask": "Train"
},
{
"node_id": 2089,
"label": 1,
"text": "Title: A Cooperative Coevolutionary Approach to Function Optimization \nAbstract: A general model for the coevolution of cooperating species is presented. This model is instantiated and tested in the domain of function optimization, and compared with a traditional GA-based function optimizer. The results are encouraging in two respects. They suggest ways in which the performance of GA and other EA-based optimizers can be improved, and they suggest a new approach to evolving complex structures such as neural networks and rule sets.",
"neighbors": [
357,
714,
1117,
1261,
1530,
1603,
1965
],
"mask": "Test"
},
{
"node_id": 2090,
"label": 2,
"text": "Title: Is Learning The n-th Thing Any Easier Than Learning The First? \nAbstract: This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks.",
"neighbors": [
1260,
2530
],
"mask": "Train"
},
{
"node_id": 2091,
"label": 0,
"text": "Title: The Utility of Knowledge in Inductive Learning Running Head: Knowledge in Inductive Learning \nAbstract: This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described, and applied in an object recognition domain. It is shown that across the board, lifelong learning approaches generalize consistently more accurately from less training data, by their ability to transfer knowledge across learning tasks.",
"neighbors": [
303,
585,
1539,
1944,
2438,
2617
],
"mask": "Test"
},
{
"node_id": 2092,
"label": 6,
"text": "Title: Universal Portfolios With and Without Transaction Costs \nAbstract: A constant rebalanced portfolio is an investment strategy which keeps the same distribution of wealth among a set of stocks from period to period. Recently there has been work on on-line investment strategies that are competitive with the best constant rebalanced portfolio determined in hindsight (Cover, 1991; Helmbold et al., 1996; Cover and Ordentlich, 1996a; Cover and Ordentlich, 1996b; Ordentlich and Cover, 1996; Cover, 1996). For the universal algorithm of Cover (Cover, 1991), we provide a simple analysis which naturally extends to the case of a fixed percentage transaction cost (commission), answering a question raised in (Cover, 1991; Helmbold et al., 1996; Cover and Ordentlich, 1996a; Cover and Ordentlich, 1996b; Ordentlich and Cover, 1996; Cover, 1996). In addition, we present a simple randomized implementation that is significantly faster in practice. We conclude by explaining how these algorithms can be applied to other problems, such as combining the predictions of statistical language models, where the resulting guarantees are more striking. ",
"neighbors": [
453,
2015
],
"mask": "Train"
},
{
"node_id": 2093,
"label": 2,
"text": "Title: Locally Connected Recurrent Networks \nAbstract: Lai-Wan CHAN and Evan Fung-Yu YOUNG Computer Science Department, The Chinese University of Hong Kong New Territories, Hong Kong Email : lwchan@cs.cuhk.hk Technical Report : CS-TR-95-10 Abstract The fully connected recurrent network (FRN) using the on-line training method, Real Time Recurrent Learning (RTRL), is computationally expensive. It has a computational complexity of O(N 4 ) and storage complexity of O(N 3 ), where N is the number of non-input units. We have devised a locally connected recurrent model which has a much lower complexity in both computational time and storage space. The ring-structure recurrent network (RRN), the simplest kind of the locally connected has the corresponding complexity of O(mn+np) and O(np) respectively, where p, n and m are the number of input, hidden and output units respectively. We compare the performance between RRN and FRN in sequence recognition and time series prediction. We tested the networks' ability in temporal memorizing power and time warpping ability in the sequence recognition task. In the time series prediction task, we used both networks to train and predict three series; a periodic series with white noise, a deterministic chaotic series and the sunspots data. Both tasks show that RRN needs a much shorter training time and the performance of RRN is comparable to that of FRN.",
"neighbors": [
283,
1990
],
"mask": "Train"
},
{
"node_id": 2094,
"label": 3,
"text": "Title: Interpretation of Complex Scenes Using Bayesian Networks \nAbstract: In most object recognition systems, interactions between objects in a scene are ignored and the best interpretation is considered to be the set of hypothesized objects that matches the greatest number of image features. We show how image interpretation can be cast as the problem of finding the most probable explanation (MPE) in a Bayesian network that models both visual and physical object interactions. The problem of how to determine exact conditional probabilities for the network is shown to be unimportant, since the goal is to find the most probable configuration of objects, not to calculate absolute probabilities. We furthermore show that evaluating configurations by feature counting is equivalent to calculating the joint probability of the configuration using a restricted Bayesian network, and derive the assumptions about probabilities necessary to make a Bayesian formulation reasonable.",
"neighbors": [
2164
],
"mask": "Validation"
},
{
"node_id": 2095,
"label": 2,
"text": "Title: A Practical Monte Carlo Implementation of Bayesian Learning \nAbstract: ",
"neighbors": [
157,
2056,
2230
],
"mask": "Train"
},
{
"node_id": 2096,
"label": 5,
"text": "Title: A Note on Scheduling Algorithms for Processors with Lookahead \nAbstract: ",
"neighbors": [
2142
],
"mask": "Validation"
},
{
"node_id": 2097,
"label": 3,
"text": "Title: Impediments to Universal Preference-Based Default Theories \nAbstract: Research on nonmonotonic and default reasoning has identified several important criteria for preferring alternative default inferences. The theories of reasoning based on each of these criteria may uniformly be viewed as theories of rational inference, in which the reasoner selects maximally preferred states of belief. Though researchers have noted some cases of apparent conflict between the preferences supported by different theories, it has been hoped that these special theories of reasoning may be combined into a universal logic of nonmonotonic reasoning. We show that the different categories of preferences conflict more than has been realized, and adapt formal results from social choice theory to prove that every universal theory of default reasoning will violate at least one reasonable principle of rational reasoning. Our results can be interpreted as demonstrating that, within the preferential framework, we cannot expect much improvement on the rigid lexicographic priority mechanisms that have been proposed for conflict resolution.",
"neighbors": [
1994,
1995
],
"mask": "Validation"
},
{
"node_id": 2098,
"label": 6,
"text": "Title: Predicting a binary sequence almost as well as the optimal biased coin \nAbstract: We apply the exponential weight algorithm, introduced and Littlestone and Warmuth [17] and by Vovk [24] to the problem of predicting a binary sequence almost as well as the best biased coin. We first show that for the case of the logarithmic loss, the derived algorithm is equivalent to the Bayes algorithm with Jeffrey's prior, that was studied by Xie and Barron under probabilistic assumptions [26]. We derive a uniform bound on the regret which holds for any sequence. We also show that if the empirical distribution of the sequence is bounded away from 0 and from 1, then, as the length of the sequence increases to infinity, the difference between this bound and a corresponding bound on the average case regret of the same algorithm (which is asymptotically optimal in that case) is only 1=2. We show that this gap of 1=2 is necessary by calculating the regret of the min-max optimal algorithm for this problem and showing that the asymptotic upper bound is tight. We also study the application of this algorithm to the square loss and show that the algorithm that is derived in this case is different from the Bayes algorithm and is better than it for prediction in the worst-case.",
"neighbors": [
453,
2099,
2156
],
"mask": "Validation"
},
{
"node_id": 2099,
"label": 6,
"text": "Title: Game Theory, On-line Prediction and Boosting \nAbstract: We study the close connections between game theory, on-line prediction and boosting. After a brief review of game theory, we describe an algorithm for learning to play repeated games based on the on-line prediction methods of Littlestone and War-muth. The analysis of this algorithm yields a simple proof of von Neumann's famous minmax theorem, as well as a provable method of approximately solving a game. We then show that the on-line prediction model is obtained by applying this game-playing algorithm to an appropriate choice of game and that boosting is obtained by applying the same algorithm to the dual of this game.",
"neighbors": [
453,
456,
569,
2098
],
"mask": "Validation"
},
{
"node_id": 2100,
"label": 5,
"text": "Title: GURRR: A Global Unified Resource Requirements Representation \nAbstract: When compiling for instruction level parallelism (ILP), the integration of the optimization phases can lead to an improvement in the quality of code generated. However, since several different representations of a program are used in the various phases, only a partial integration has been achieved to date. We present a program representation that combines resource requirements and availability information with control and data dependence information. The representation enables the integration of several optimizing phases, including transformations, register allocation, and instruction scheduling. The basis of this integration is the simultaneous allocation of different types of resources. We define the representation and show how it is constructed. We then formulate several optimization phases to use the representation to achieve better integration. ",
"neighbors": [
1961,
2527
],
"mask": "Train"
},
{
"node_id": 2101,
"label": 1,
"text": "Title: Evolving Control Structures with Automatically Defined Macros Evolving Control Structures with Automatically Defined Macros. \nAbstract: When compiling for instruction level parallelism (ILP), the integration of the optimization phases can lead to an improvement in the quality of code generated. However, since several different representations of a program are used in the various phases, only a partial integration has been achieved to date. We present a program representation that combines resource requirements and availability information with control and data dependence information. The representation enables the integration of several optimizing phases, including transformations, register allocation, and instruction scheduling. The basis of this integration is the simultaneous allocation of different types of resources. We define the representation and show how it is constructed. We then formulate several optimization phases to use the representation to achieve better integration. ",
"neighbors": [
2470
],
"mask": "Validation"
},
{
"node_id": 2102,
"label": 1,
"text": "Title: The Evolution of Communication Schemes Over Continuous Channels \nAbstract: Many problems impede the design of multi-agent systems, not the least of which is the passing of information between agents. While others hand implement communication routes and semantics, we explore a method by which communication can evolve. In the experiments described here, we model agents as connectionist networks. We supply each agent with a number of communications channels implemented by the addition of both input and output units for each channel. The output units initiate environmental signals whose amplitude decay over distance and are perturbed by environmental noise. An agent does not receive input from other individuals, rather the agents input reects the summation of all other agents output signals along that channel. Because we use real-valued activations, the agents communicate using real-valued vectors. Under our evolutionary program, GNARL, the agents coevolve a communication scheme over continuous channels which conveys task-spe cific information.",
"neighbors": [
144,
189,
2664
],
"mask": "Train"
},
{
"node_id": 2103,
"label": 1,
"text": "Title: The Evolution of Communication Schemes Over Continuous Channels \nAbstract: As the field of Genetic Programming (GP) matures and its breadth of application increases, the need for parallel implementations becomes absolutely necessary. The transputer-based system presented in [Koza and Andre 1995] is one of the rare such parallel implementations. Until today, no implementation has been proposed for parallel GP using a SIMD architecture, except for a data-parallel approach [Tufts 1995], although others have exploited workstation farms and pipelined supercomputers. One reason is certainly the apparent difficulty of dealing with the parallel evaluation of different S-expressions when only a single instruction can be executed at the same time on every processor. The aim of this chapter is to present such an implementation of parallel GP on a SIMD system, where each processor can efficiently evaluate a different S-expression. We have implemented this approach on a MasPar MP-2 computer, and will present some timing results. To the extent that SIMD machines, like the MasPar are available to offer cost-effective cycles for scientific experimentation, this is a useful approach. The idea of simulating a MIMD machine using a SIMD architecture is not new [Hillis and Steele 1986; Littman and Metcalf 1990; Dietz and Cohen 1992]. One of the original ideas for the Connection Machine [Hillis and Steele 1986] was that it could simulate other parallel architectures. Indeed, in the extreme, each processor on a SIMD architecture can simulate a universal Turing machine (TM). With different turing machine specifications stored in each local memory, each processor would simply have its own tape, tape head, state table and state pointer, and the simulation would be performed by repeating the basic TM operations simultaneously. Of course, such a simulation would be very inefficient, and difficult to program, but would have the advantage of being really MIMD, where no SIMD processor would be in idle state, until its simulated machine halts. Now let us consider an alternative idea, that each SIMD processor would simulate an individual stored program computer using a simple instruction set. For each step of the simulation, the SIMD system would sequentially execute each possible instruction on the subset of processors whose next instruction match it. For a typical assembly language, even with a reduced instruction set, most processors would be idle most of the time. However, if the set of instructions implemented on the virtual processor is very small, this approach can be fruitful. In the case of Genetic Programming, the \"instruction set\" is composed of the specified set of functions designed for the task. We will show below that with a precompilation step, simply adding a push, a conditional, and unconditional branching and a stop instruction, we can get a very effective MIMD simulation running. ",
"neighbors": [
415,
2334
],
"mask": "Train"
},
{
"node_id": 2104,
"label": 1,
"text": "Title: A study of the effects of group formation on evolutionary search \nAbstract: As the field of Genetic Programming (GP) matures and its breadth of application increases, the need for parallel implementations becomes absolutely necessary. The transputer-based system presented in [Koza and Andre 1995] is one of the rare such parallel implementations. Until today, no implementation has been proposed for parallel GP using a SIMD architecture, except for a data-parallel approach [Tufts 1995], although others have exploited workstation farms and pipelined supercomputers. One reason is certainly the apparent difficulty of dealing with the parallel evaluation of different S-expressions when only a single instruction can be executed at the same time on every processor. The aim of this chapter is to present such an implementation of parallel GP on a SIMD system, where each processor can efficiently evaluate a different S-expression. We have implemented this approach on a MasPar MP-2 computer, and will present some timing results. To the extent that SIMD machines, like the MasPar are available to offer cost-effective cycles for scientific experimentation, this is a useful approach. The idea of simulating a MIMD machine using a SIMD architecture is not new [Hillis and Steele 1986; Littman and Metcalf 1990; Dietz and Cohen 1992]. One of the original ideas for the Connection Machine [Hillis and Steele 1986] was that it could simulate other parallel architectures. Indeed, in the extreme, each processor on a SIMD architecture can simulate a universal Turing machine (TM). With different turing machine specifications stored in each local memory, each processor would simply have its own tape, tape head, state table and state pointer, and the simulation would be performed by repeating the basic TM operations simultaneously. Of course, such a simulation would be very inefficient, and difficult to program, but would have the advantage of being really MIMD, where no SIMD processor would be in idle state, until its simulated machine halts. Now let us consider an alternative idea, that each SIMD processor would simulate an individual stored program computer using a simple instruction set. For each step of the simulation, the SIMD system would sequentially execute each possible instruction on the subset of processors whose next instruction match it. For a typical assembly language, even with a reduced instruction set, most processors would be idle most of the time. However, if the set of instructions implemented on the virtual processor is very small, this approach can be fruitful. In the case of Genetic Programming, the \"instruction set\" is composed of the specified set of functions designed for the task. We will show below that with a precompilation step, simply adding a push, a conditional, and unconditional branching and a stop instruction, we can get a very effective MIMD simulation running. ",
"neighbors": [
403,
2302
],
"mask": "Train"
},
{
"node_id": 2105,
"label": 2,
"text": "Title: A study of the effects of group formation on evolutionary search \nAbstract: Information Processing in Primate Retinal Cone Pathways: Experiments and Results ",
"neighbors": [
2621
],
"mask": "Test"
},
{
"node_id": 2106,
"label": 5,
"text": "Title: Theoretical Modeling of Superscalar Processor Performance \nAbstract: The current trace-driven simulation approach to determine superscalar processor performance is widely used but has some shortcomings. Modern benchmarks generate extremely long traces, resulting in problems with data storage, as well as very long simulation run times. More fundamentally, simulation generally does not provide significant insight into the factors that determine performance or a characterization of their interactions. This paper proposes a theoretical model of superscalar processor performance that addresses these shortcomings. Performance is viewed as an interaction of program parallelism and machine parallelism. Both program and machine parallelisms are decomposed into multiple component functions. Methods for measuring or computing these functions are described. The functions are combined to provide a model of the interaction between program and machine parallelisms and an accurate estimate of the performance. The computed performance, based on this model, is compared to simulated performance for six benchmarks from the SPEC 92 suite on several configurations of the IBM RS/6000 instruction set architecture. ",
"neighbors": [
735,
1750,
2649
],
"mask": "Validation"
},
{
"node_id": 2107,
"label": 2,
"text": "Title: Prediction of human mRNA donor and acceptor sites from the DNA sequence \nAbstract: Artificial neural networks have been applied to the prediction of splice site location in human pre-mRNA. A joint prediction scheme where prediction of transition regions between introns and exons regulates a cutoff level for splice site assignment was able to predict splice site locations with confidence levels far better than previously reported in the literature. The problem of predicting donor and acceptor sites in human genes is hampered by the presence of numerous amounts of false positives | in the paper the distribution of these false splice sites is examined and linked to a possible scenario for the splicing mechanism in vivo. When the presented method detects 95% of the true donor and acceptor sites it makes less than 0.1% false donor site assignments and less than 0.4% false acceptor site assignments. For the large data set used in this study this means that on the average there are one and a half false donor sites per true donor site and six false acceptor sites per true acceptor site. With the joint assignment method more than a fifth of the true donor sites and around one fourth of the true acceptor sites could be detected without accompaniment of any false positive predictions. Highly confident splice sites could not be isolated with a widely used weight matrix method or by separate splice site networks. A complementary relation between the confidence levels of the coding/non-coding and the separate splice site networks was observed, with many weak splice sites having sharp transitions in the coding/non-coding signal and many stronger splice sites having more ill-defined transitions between coding and non-coding. ",
"neighbors": [
613,
1865,
1953,
2046,
2496,
2571,
2574
],
"mask": "Test"
},
{
"node_id": 2108,
"label": 3,
"text": "Title: The Automated Mapping of Plans for Plan Recognition \nAbstract: To coordinate with other agents in its environment, an agent needs models of what the other agents are trying to do. When communication is impossible or expensive, this information must be acquired indirectly via plan recognition. Typical approaches to plan recognition start with a specification of the possible plans the other agents may be following, and develop special techniques for discriminating among the possibilities. Perhaps more desirable would be a uniform procedure for mapping plans to general structures supporting inference based on uncertain and incomplete observations. In this paper, we describe a set of methods for converting plans represented in a flexible procedural language to observation models represented as probabilistic belief networks. ",
"neighbors": [
1172,
1898,
2140
],
"mask": "Train"
},
{
"node_id": 2109,
"label": 2,
"text": "Title: Local quartet splits of a binary tree infer all quartet splits via one dyadic inference\nAbstract: DIMACS Technical Report 96-43 DIMACS is a partnership of Rutgers University, Princeton University, AT&T Research, Bellcore, and Bell Laboratories. DIMACS is an NSF Science and Technology Center, funded under contract STC-91-19999; and also receives support from the New Jersey Commission on Science and Technology. ",
"neighbors": [
2185
],
"mask": "Train"
},
{
"node_id": 2110,
"label": 6,
"text": "Title: Constructing Big Trees from Short Sequences \nAbstract: The construction of evolutionary trees is a fundamental problem in biology, and yet methods for reconstructing evolutionary trees are not reliable when it comes to inferring accurate topologies of large divergent evolutionary trees from realistic length sequences. We address this problem and present a new polynomial time algorithm for reconstructing evolutionary trees called the Short Quartets Method which is consistent and which has greater statistical power than other polynomial time methods, such as Neighbor-Joining and the 3-approximation algorithm by Agarwala et al. (and the \"Double Pivot\" variant of the Agarwala et al. algorithm by Cohen and Farach) for the L 1 -nearest tree problem. Our study indicates that our method will produce the correct topology from shorter sequences than can be guaranteed using these other methods.",
"neighbors": [
299,
1827
],
"mask": "Train"
},
{
"node_id": 2111,
"label": 1,
"text": "Title: Artificial Life as Theoretical Biology: How to do real science with computer simulation \nAbstract: Artificial Life (A-Life) research offers, among other things, a new style of computer simulation for understanding biological systems and processes. But most current A-Life work does not show enough methodological sophistication to count as good theoretical biology. As a first step towards developing a stronger methodology for A-Life, this paper (1) identifies some methodological pitfalls arising from the `computer science inuence' in A-Life, (2) suggests some methodological heuristics for A-Life as theoretical biology, (3) notes the strengths of A-Life methods versus previous research methods in biology, (4) examines some open questions in theoretical biology that may benefit from A-Life simulation, and (5) argues that the debate over `Strong A-Life' is not relevant to A-Life's utility for theoretical biology. 1 Introduction: Simulating our way into the Dark Continent ",
"neighbors": [
2047,
2302
],
"mask": "Train"
},
{
"node_id": 2112,
"label": 2,
"text": "Title: Approximation from shift-invariant subspaces of L \nAbstract: A complete characterization is given of closed shift-invariant subspaces of L 2 (IR d ) which provide a specified approximation order. When such a space is principal (i.e., generated by a single function), then this characterization is in terms of the Fourier transform of the generator. As a special case, we obtain the classical Strang-Fix conditions, but without requiring the generating function to decay at infinity. The approximation order of a general closed shift-invariant space is shown to be already realized by a specifiable principal subspace.",
"neighbors": [
365,
2572
],
"mask": "Train"
},
{
"node_id": 2113,
"label": 6,
"text": "Title: Learning Model Bias minimum number of examples requred to learn a single task, and O(a\nAbstract: In this paper the problem of learning appropriate domain-specific bias is addressed. It is shown that this can be achieved by learning many related tasks from the same domain, and a theorem is given bounding the number tasks that must be learnt. A corollary of the theorem is that if the tasks are known to possess a common internal representation or preprocessing then the number of examples required per task for good generalisation when learning n tasks simultaneously scales like O(a + b tive support for the theoretical results is reported. ",
"neighbors": [
2586,
2623
],
"mask": "Train"
},
{
"node_id": 2114,
"label": 3,
"text": "Title: Probabilistic Principal Component Analysis \nAbstract: Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA. ",
"neighbors": [
1923,
1928,
2124
],
"mask": "Validation"
},
{
"node_id": 2115,
"label": 3,
"text": "Title: Modeling Belief in Dynamic Systems. Part I: Foundations \nAbstract: Belief change is a fundamental problem in AI: Agents constantly have to update their beliefs to accommodate new observations. In recent years, there has been much work on axiomatic characterizations of belief change. We claim that a better understanding of belief change can be gained from examining appropriate semantic models. In this paper we propose a general framework in which to model belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes if he knows that is more plausible than :. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagin for modeling knowledge in multi-agent systems. We then examine the problem of \"minimal change\". This notion can be captured by using prior plausibilities, an analogue to prior probabilities, which can be updated by \"conditioning\". We show by example that conditioning on a plausibility measure can capture many scenarios of interest. In a companion paper, we show how the two best-studied scenarios of belief change, belief revision and belief update, fit into our framework. ? Some of this work was done while both authors were at the IBM Almaden Research Center. The first author was also at Stanford while much of the work was done. IBM and Stanford's support are gratefully acknowledged. The work was also supported in part by the Air Force Office of Scientific Research (AFSC), under Contract F49620-91-C-0080 and grant F94620-96-1-0323 and by NSF under grants IRI-95-03109 and IRI-96-25901. A preliminary version of this paper appears in Proceedings of the 5th Conference on Theoretical Aspects of Reasoning About Knowledge, 1994, pp. 44-64, under the title \"A knowledge-based framework for belief change, Part I: Foundations\". ",
"neighbors": [
276,
2000
],
"mask": "Train"
},
{
"node_id": 2116,
"label": 1,
"text": "Title: Differential Evolution A simple and efficient adaptive scheme for global optimization over continuous spaces \nAbstract: A new heuristic approach for minimizing possibly nonlinear and non differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder&Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use and lends itself very well to parallel computation. ",
"neighbors": [
163,
1775,
2125
],
"mask": "Train"
},
{
"node_id": 2117,
"label": 2,
"text": "Title: Stimulus specificity in perceptual learning: a consequence of experiments that are also stimulus specific? Keywords:\nAbstract: A new heuristic approach for minimizing possibly nonlinear and non differentiable continuous space functions is presented. By means of an extensive testbed, which includes the De Jong functions, it will be demonstrated that the new method converges faster and with more certainty than Adaptive Simulated Annealing as well as the Annealed Nelder&Mead approach, both of which have a reputation for being very powerful. The new method requires few control variables, is robust, easy to use and lends itself very well to parallel computation. ",
"neighbors": [
2639
],
"mask": "Test"
},
{
"node_id": 2118,
"label": 4,
"text": "Title: On Step-Size and Bias in Temporal-Difference Learning \nAbstract: We present results for three new algorithms for setting the step-size parameters, ff and , of temporal-difference learning methods such as TD(). The overall task is that of learning to predict the outcome of an unknown Markov chain based on repeated observations of its state trajectories. The new algorithms select step-size parameters online in such a way as to eliminate the bias normally inherent in temporal-difference methods. We compare our algorithms with conventional Monte Carlo methods. Monte Carlo methods have a natural way of setting the step size: for each state s they use a step size of 1=n s , where n s is the number of times state s has been visited. We seek and come close to achieving comparable step-size algorithms for TD(). One new algorithm uses a = 1=n s schedule to achieve the same effect as processing a state backwards with TD(0), but remains completely incremental. Another algorithm uses a at each time equal to the estimated transition probability of the current transition. We present empirical results showing improvement in convergence rate over Monte Carlo methods and conventional TD(). A limitation of our results at present is that they apply only to tasks whose state trajectories do not contain cycles. ",
"neighbors": [
565,
2442
],
"mask": "Train"
},
{
"node_id": 2119,
"label": 2,
"text": "Title: Gas Identification System using Graded Temperature Sensor and Neural Net Interpretation \nAbstract: We present results for three new algorithms for setting the step-size parameters, ff and , of temporal-difference learning methods such as TD(). The overall task is that of learning to predict the outcome of an unknown Markov chain based on repeated observations of its state trajectories. The new algorithms select step-size parameters online in such a way as to eliminate the bias normally inherent in temporal-difference methods. We compare our algorithms with conventional Monte Carlo methods. Monte Carlo methods have a natural way of setting the step size: for each state s they use a step size of 1=n s , where n s is the number of times state s has been visited. We seek and come close to achieving comparable step-size algorithms for TD(). One new algorithm uses a = 1=n s schedule to achieve the same effect as processing a state backwards with TD(0), but remains completely incremental. Another algorithm uses a at each time equal to the estimated transition probability of the current transition. We present empirical results showing improvement in convergence rate over Monte Carlo methods and conventional TD(). A limitation of our results at present is that they apply only to tasks whose state trajectories do not contain cycles. ",
"neighbors": [
2154
],
"mask": "Train"
},
{
"node_id": 2120,
"label": 2,
"text": "Title: PUSH-PULL SHUNTING MODEL OF GANGLION CELLS Simulations of X and Y retinal ganglion cell behavior\nAbstract: We present results for three new algorithms for setting the step-size parameters, ff and , of temporal-difference learning methods such as TD(). The overall task is that of learning to predict the outcome of an unknown Markov chain based on repeated observations of its state trajectories. The new algorithms select step-size parameters online in such a way as to eliminate the bias normally inherent in temporal-difference methods. We compare our algorithms with conventional Monte Carlo methods. Monte Carlo methods have a natural way of setting the step size: for each state s they use a step size of 1=n s , where n s is the number of times state s has been visited. We seek and come close to achieving comparable step-size algorithms for TD(). One new algorithm uses a = 1=n s schedule to achieve the same effect as processing a state backwards with TD(0), but remains completely incremental. Another algorithm uses a at each time equal to the estimated transition probability of the current transition. We present empirical results showing improvement in convergence rate over Monte Carlo methods and conventional TD(). A limitation of our results at present is that they apply only to tasks whose state trajectories do not contain cycles. ",
"neighbors": [
1798
],
"mask": "Test"
},
{
"node_id": 2121,
"label": 2,
"text": "Title: Testing for Gaussianity and Non Linearity in the sustained portion of musical sounds. \nAbstract: Higher order spectra of a signal contain information about the non Gaussian and non Linear properties of the system that created it. Since the non linearity in musical signal usually originate in the excitation signal while the linear spectral characteristics are attributed to the resonant chambers, we discard the spectral information by looking at the higher order statistical properties of the residual signal, i.e. the estimated input signal obtained by inverse filtering of the sound. In the current paper we show that the skewness and kurtosis values of the residual could be used for characterization of such important sound properties as belonging to families of strings, woodwind and brass instrumental timbres. The skewness parameter is shown to be closely related to the bicoherence function calculated over the original signal and as such it is succinct to an interpretation as statistical test for the signal conforming to a linear non Gaussian model. The above results are compared to the Hinich bispectral tests for Gaussianity and non Linearity of time series and exhibit a similar classification results. Finally, regarding the higher order statistics of a signal as a feature vector, a statistical distance measure for the cumulant space is suggested. ",
"neighbors": [
2212
],
"mask": "Test"
},
{
"node_id": 2122,
"label": 0,
"text": "Title: Preparing Case Retrieval Nets for Distributed Processing \nAbstract: In this paper, we discuss two approaches of applying the memory model of Case Retrieval Nets to applications where a distributed processing of information is required. For this, we distinguish two types of such applications, namely (a) the case of distributed case libraries and (b) the case of distributed cases. While a solution to the former is straightforward, the latter requires an extension to Case Retrieval Nets which provides a kind of partitioning of the entire net structure. This extended model even allows for a concurrent implementation of the retrieval process or for the use of collaborative agents for retrieval. Keywords: Case-based reasoning, case retrieval, memory structures, distributed processing. ",
"neighbors": [
66,
75,
1854,
1855,
1864
],
"mask": "Validation"
},
{
"node_id": 2123,
"label": 0,
"text": "Title: Justification Structures for Document Reuse \nAbstract: Document drafting|an important problem-solving task of professionals in a wide variety of fields|typifies a design task requiring complex adaptation for case reuse. This paper proposes a framework for document reuse based on an explicit representation of the illocutionary and rhetorical structure underlying documents. Explicit representation of this structure facilitates (1) interpretation of previous documents by enabling them to \"explain themselves,\" (2) construction of documents by enabling document drafters to issue goal-based specifications and rapidly retrieve documents with similar intentional structure, and (3) mainte nance of multi-generation documents.",
"neighbors": [
649,
2482
],
"mask": "Train"
},
{
"node_id": 2124,
"label": 2,
"text": "Title: A Hierarchical Latent Variable Model for Data Visualization \nAbstract: Visualization has proven to be a powerful and widely-applicable tool for the analysis and interpretation of multi-variate data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach on a toy data set, and we then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines, and to data in 36 dimensions derived from satellite images. A Matlab software implementation of the algorithm is publicly available from the world-wide web. ",
"neighbors": [
74,
1928,
2114
],
"mask": "Train"
},
{
"node_id": 2125,
"label": 2,
"text": "Title: On the Usage of Differential Evolution for Function Optimization Differential Evolution (DE) has recently proven\nAbstract: assumed unless otherwise stated. Basically, DE generates new parameter vectors by adding the weighted difference between two population vectors to a third vector. If the resulting vector yields a lower objective function value than a predetermined population member, the newly generated vector replaces the vector, with which it was compared, in the next generation; otherwise, the old vector is retained. This basic principle, however, is extended when it comes to the practical variants of DE. For example an existing vector can be perturbed by adding more than one weighted difference vector to it. In most cases, it is also worthwhile to mix the parameters of the old vector with those of the perturbed one before comparing the objective function values. Several variants of DE which have proven to be useful will be described in the ",
"neighbors": [
2116
],
"mask": "Train"
},
{
"node_id": 2126,
"label": 5,
"text": "Title: Applying ILP to Diterpene Structure Elucidation from 13 C NMR Spectra \nAbstract: We present a novel application of ILP to the problem of diterpene structure elucidation from 13 C NMR spectra. Diterpenes are organic compounds of low molecular weight that are based on a skeleton of 20 carbon atoms. They are of significant chemical and commercial interest because of their use as lead compounds in the search for new pharmaceutical effectors. The structure elucidation of diterpenes based on 13 C NMR spectra is usually done manually by human experts with specialized background knowledge on peak patterns and chemical structures. In the process, each of the 20 skeletal atoms is assigned an atom number that corresponds to its proper place in the skeleton and the diterpene is classified into one of the possible skeleton types. We address the problem of learning classification rules from a database of peak patterns for diterpenes with known structure. Recently, propositional learning was successfully applied to learn classification rules from spectra with assigned atom numbers. As the assignment of atom numbers is a difficult process in itself (and possibly indistinguishable from the classification process), we apply ILP, i.e., relational learning, to the problem of classifying spectra without assigned atom numbers. ",
"neighbors": [
426,
2213,
2339,
2426,
2591
],
"mask": "Train"
},
{
"node_id": 2127,
"label": 3,
"text": "Title: NAIVE BAYESIAN LEARNING Adapted from \nAbstract: We present a novel application of ILP to the problem of diterpene structure elucidation from 13 C NMR spectra. Diterpenes are organic compounds of low molecular weight that are based on a skeleton of 20 carbon atoms. They are of significant chemical and commercial interest because of their use as lead compounds in the search for new pharmaceutical effectors. The structure elucidation of diterpenes based on 13 C NMR spectra is usually done manually by human experts with specialized background knowledge on peak patterns and chemical structures. In the process, each of the 20 skeletal atoms is assigned an atom number that corresponds to its proper place in the skeleton and the diterpene is classified into one of the possible skeleton types. We address the problem of learning classification rules from a database of peak patterns for diterpenes with known structure. Recently, propositional learning was successfully applied to learn classification rules from spectra with assigned atom numbers. As the assignment of atom numbers is a difficult process in itself (and possibly indistinguishable from the classification process), we apply ILP, i.e., relational learning, to the problem of classifying spectra without assigned atom numbers. ",
"neighbors": [
1329,
2338
],
"mask": "Train"
},
{
"node_id": 2128,
"label": 0,
"text": "Title: Intelligent Model Selection for Hillclimbing Search in Computer-Aided Design \nAbstract: Models of physical systems can differ according to computational cost, accuracy and precision, among other things. Depending on the problem solving task at hand, different models will be appropriate. Several investigators have recently developed methods of automatically selecting among multiple models of physical systems. Our research is novel in that we are developing model selection techniques specifically suited to computer-aided de sign. Our approach is based on the idea that artifact performance models for computer-aided design should be chosen in light of the design decisions they are required to support. We have developed a technique called \"Gradient Magnitude Model Selection\" (GMMS), which embodies this principle. GMMS operates in the context of a hillclimbing search process. It selects the simplest model that meets the needs of the hillclimbing algorithm in which it operates. We are using the domain of sailing yacht design as a testbed for this research. We have implemented GMMS and used it in hillclimb-ing search to decide between a computationally expensive potential-flow program and an algebraic approximation to analyze the performance of sailing yachts. Experimental tests show that GMMS makes the design process faster than it would be if the most expensive model were used for all design evaluations. GMMS achieves this performance improvement with little or no sacrifice in the quality of the resulting design. ",
"neighbors": [
2030,
2479
],
"mask": "Train"
},
{
"node_id": 2129,
"label": 2,
"text": "Title: Fast Pruning Using Principal Components \nAbstract: We present a new algorithm for eliminating excess parameters and improving network generalization after supervised training. The method, \"Principal Components Pruning (PCP)\", is based on principal component analysis of the node activations of successive layers of the network. It is simple, cheap to implement, and effective. It requires no network retraining, and does not involve calculating the full Hessian of the cost function. Only the weight and the node activity correlation matrices for each layer of nodes are required. We demonstrate the efficacy of the method on a regression problem using polynomial basis functions, and on an economic time series prediction problem using a two-layer, feedforward network.",
"neighbors": [
2454
],
"mask": "Train"
},
{
"node_id": 2130,
"label": 1,
"text": "Title: Intelligent Gradient-Based Search of Incompletely Defined Design Spaces \nAbstract: Gradient-based numerical optimization of complex engineering designs offers the promise of rapidly producing better designs. However, such methods generally assume that the objective function and constraint functions are continuous, smooth, and defined everywhere. Unfortunately, realistic simulators tend to violate these assumptions. We present a rule-based technique for intelligently computing gradients in the presence of such pathologies in the simulators, and show how this gradient computation method can be used as part of a gradient-based numerical optimization system. We tested the resulting system in the domain of conceptual design of supersonic transport aircraft, and found that using rule-based gradients can decrease the cost of design space search by one or more orders of magnitude.",
"neighbors": [
2030
],
"mask": "Validation"
},
{
"node_id": 2131,
"label": 0,
"text": "Title: Learning Prototype-Selection Rules for Case-Based Iterative Design seen as a case-based reasoning system [4], in\nAbstract: The first step for most case-based design systems is to select an initial prototype from a database of previous designs. The retrieved prototype is then modified to tailor it to the given goals. For any particular design goal the selection of a starting point for the design process can have a dramatic effect both on the quality of the eventual design and on the overall design time. We present a technique for automatically constructing effective prototype-selection rules. Our technique applies a standard inductive-learning algorithm, C4.5, to a set of training data describing which particular prototype would have been the best choice for each goal encountered in a previous design session. We have tested our technique in the domain of racing-yacht-hull design, comparing our inductively learned selection rules to several competing prototype-selection methods. Our results show that the inductive prototype-selection method leads to better final designs when the design process is guided by a noisy evaluation function, and that the inductively learned rules will often be more efficient than competing methods. Many automated design systems begin by retrieving an initial prototype from a library of previous designs, using the given design goal as an index to guide the retrieval process [14]. The retrieved prototype is then modified by a set of design modification operators to tailor the selected design to the given goals. In many cases the quality of competing designs can be assessed using domain-specific evaluation functions, and in such cases the design-modification process is often This research has benefited from numerous discussions with members of the Rutgers CAP project. We thank Andrew Gelsey for helping with the cross-validation code, John Keane for helping with RUVPP, and Andrew Gelsey and Tim Weinrich for comments on a previous draft of this paper. This research was supported under ARPA-funded NASA grant NAG 2-645. In the context of such case-based design systems, the choice of an initial prototype can affect both the quality of the final design and the computational cost of obtaining that design, for three reasons. First, prototype selection may impact quality when the prototypes lie in disjoint search spaces. In particular, if the system's design modification operators cannot convert any prototype into any other prototype, the choice of initial prototype will restrict the set of possible designs that can be obtained by any search process. A poor choice of initial prototype may therefore lead to a suboptimal final design. Second, prototype selection may impact quality when the design process is guided by a nonlinear evaluation function with unknown global properties. Since there is no known method that is guaranteed to find the global optimum of an arbitrary nonlinear function [7], most design systems rely on iterative local search methods whose results are sensitive to the initial starting point. Finally, the choice of prototype may have an impact on the time needed to carry out the design modification process|two different starting points may yield the same final design but take very different amounts of time to get there. In design problems where evaluating even just a single design can take tremendous amounts of time, selecting an appropriate initial prototype can be the determining factor in the success or failure of the design process. 
This paper describes the application of inductive learning [11] to form rules for selecting appropriate prototype designs. The paper is structured as follows. In Section 2, we describe our inductive method for learning prototype-selection rules. In Section 3 we describe the domain of racing-yacht-hull design, in which we tested our prototype-selection methods. In Sections 4 and 5, we describe the experiments ",
"neighbors": [
1892,
2030,
2319
],
"mask": "Validation"
},
{
"node_id": 2132,
"label": 5,
"text": "Title: Combining Data Mining and Machine Learning for Effective User Profiling \nAbstract: This paper describes the automatic design of methods for detecting fraudulent behavior. Much of the design is accomplished using a series of machine learning methods. In particular, we combine data mining and constructive induction with more standard machine learning techniques to design methods for detecting fraudulent usage of cellular telephones based on profiling customer behavior. Specifically, we use a rule- learning program to uncover indicators of fraudulent behavior from a large database of cellular calls. These indicators are used to create profilers, which then serve as features to a system that combines evidence from multiple profilers to generate high-confidence alarms. Experiments indicate that this automatic approach performs nearly as well as the best hand-tuned methods for detecting fraud. ",
"neighbors": [
382,
1837
],
"mask": "Validation"
},
{
"node_id": 2133,
"label": 1,
"text": "Title: Genetic Programming Bloat with Dynamic Fitness \nAbstract: Technical Report: CSRP-97-29, 3 December 1997 Abstract In artificial evolution individuals which perform as their parents are usually rewarded identically to their parents. We note that Nature is more dynamic and there may be a penalty to pay for doing the same thing as your parents. We report two sets of experiments where static fitness functions are firstly augmented by a penalty for unchanged offspring and secondly the static fitness case is replaced by randomly generated dynamic test cases. We conclude genetic programming, when evolving artificial ant control programs, is surprisingly little effected by large penalties and program growth is observed in all our experiments. ",
"neighbors": [
1925,
2199,
2206
],
"mask": "Train"
},
{
"node_id": 2134,
"label": 6,
"text": "Title: Learning to Classify Sensor Data inductive bias, supervised Bayesian learning, minimum description length. \nAbstract: ",
"neighbors": [
2644
],
"mask": "Train"
},
{
"node_id": 2135,
"label": 2,
"text": "Title: Learning Polynomial Functions by Feature Construction \nAbstract: We present a method for learning higher-order polynomial functions from examples using linear regression and feature construction. Regression is used on a set of training instances to produce a weight vector for a linear function over the feature set. If this hypothesis is imperfect, a new feature is constructed by forming the product of the two features that most effectively predict the squared error of the current hypothesis. The algorithm is then repeated. In an extension to this method, the specific pair of features to combine is selected by measuring their joint ability to predict the hypothesis' error.",
"neighbors": [
134,
2012,
2023,
2333,
2583
],
"mask": "Test"
},
{
"node_id": 2136,
"label": 3,
"text": "Title: Bayesian Experimental Design: A Review \nAbstract: Non Bayesian experimental design for linear models has been reviewed by Stein-berg and Hunter (1984) and in the recent book by Pukelsheim (1993); Ford, Kitsos and Titterington (1989) reviewed non Bayesian design for nonlinear models. Bayesian design for both linear and nonlinear models is reviewed here. We argue that the design problem is best considered as a decision problem and that it is best solved by maximizing the expected utility of the experiment. This paper considers only in a marginal way, when appropriate, the theory of non Bayesian design. ",
"neighbors": [
2148
],
"mask": "Train"
},
{
"node_id": 2137,
"label": 0,
"text": "Title: Search-based Class Discretization \nAbstract: We present a methodology that enables the use of classification algorithms on regression tasks. We implement this method in system RECLA that transforms a regression problem into a classification one and then uses an existent classification system to solve this new problem. The transformation consists of mapping a continuous variable into an ordinal variable by grouping its values into an appropriate set of intervals. We use misclassification costs as a means to reflect the implicit ordering among the ordinal values of the new variable. We describe a set of alternative discretization methods and, based on our experimental results, justify the need for a search-based approach to choose the best method. Our experimental results confirm the validity of our search-based approach to class discretization, and reveal the accuracy benefits of adding misclassification costs. ",
"neighbors": [
430,
431,
2508
],
"mask": "Train"
},
{
"node_id": 2138,
"label": 3,
"text": "Title: A Nonparametric Bayesian Approach to Modelling Nonlinear Time Series \nAbstract: The Bayesian multivariate adaptive regression spline (BMARS) methodology of Denison et al. (1997) is extended to cope with nonlinear time series and financial datasets. The nonlinear time series model is closely related to the adaptive spline threshold autoregressive (ASTAR) method of Lewis and Stevens (1991) while the financial models can be thought of as Bayesian versions of both the generalised and simple autoregressive conditional het-eroscadastic (GARCH and ARCH) models. ",
"neighbors": [
1718,
2285
],
"mask": "Train"
},
{
"node_id": 2139,
"label": 1,
"text": "Title: Evolving Teamwork and Coordination with Genetic Programming \nAbstract: Some problems can be solved only by multi-agent teams. In using genetic programming to produce such teams, one faces several design decisions. First, there are questions of team diversity and of breeding strategy. In one commonly used scheme, teams consist of clones of single individuals; these individuals breed in the normal way and are cloned to form teams during fitness evaluation. In contrast, teams could also consist of distinct individuals. In this case one can either allow free interbreeding between members of different teams, or one can restrict interbreeding in various ways. A second design decision concerns the types of coordination-facilitating mechanisms provided to individual team members; these range from sensors of various sorts to complex communication systems. This paper examines three breeding strategies (clones, free, and restricted) and three coordination mechanisms (none, deictic sensing, and name-based sensing) for evolving teams of agents in the Serengeti world, a simple predator/prey environment. Among the conclusions are the fact that a simple form of restricted interbreeding outperforms free interbreeding in all teams with distinct individuals, and the fact that name-based sensing consistently outperforms deictic sensing.",
"neighbors": [
995,
2220,
2226
],
"mask": "Test"
},
{
"node_id": 2140,
"label": 3,
"text": "Title: Sonderforschungsbereich 314 K unstliche Intelligenz Wissensbasierte Systeme KI-Labor am Lehrstuhl f ur Informatik IV Numerical\nAbstract: Some problems can be solved only by multi-agent teams. In using genetic programming to produce such teams, one faces several design decisions. First, there are questions of team diversity and of breeding strategy. In one commonly used scheme, teams consist of clones of single individuals; these individuals breed in the normal way and are cloned to form teams during fitness evaluation. In contrast, teams could also consist of distinct individuals. In this case one can either allow free interbreeding between members of different teams, or one can restrict interbreeding in various ways. A second design decision concerns the types of coordination-facilitating mechanisms provided to individual team members; these range from sensors of various sorts to complex communication systems. This paper examines three breeding strategies (clones, free, and restricted) and three coordination mechanisms (none, deictic sensing, and name-based sensing) for evolving teams of agents in the Serengeti world, a simple predator/prey environment. Among the conclusions are the fact that a simple form of restricted interbreeding outperforms free interbreeding in all teams with distinct individuals, and the fact that name-based sensing consistently outperforms deictic sensing.",
"neighbors": [
623,
1268,
1898,
2108,
2292
],
"mask": "Train"
},
{
"node_id": 2141,
"label": 6,
"text": "Title: Fast and Simple Algorithms for Perfect Phylogeny and Triangulating Colored Graphs \nAbstract: This paper presents an O((r n=m) m rnm) algorithm for determining whether a set of n species has a perfect phylogeny, where m is the number of characters used to describe a species and r is the maximum number of states that a character can be in. The perfect phylogeny algorithm leads to an O((2e=k) k e 2 k) algorithm for triangulating a k-colored graph having e edges.",
"neighbors": [
2418,
2511
],
"mask": "Test"
},
{
"node_id": 2142,
"label": 5,
"text": "Title: Run-time versus Compile-time Instruction Scheduling in Superscalar (RISC) Processors: Performance and Tradeoffs \nAbstract: The RISC revolution has spurred the development of processors with increasing levels of instruction level parallelism (ILP). In order to realize the full potential of these processors, multiple instructions must be issued and executed in a single cycle. Consequently, instruction scheduling plays a crucial role as an optimization in this context. While early attempts at instruction scheduling were limited to compile-time approaches, the recent trend is to provide dynamic support in hardware. In this paper, we present the results of a detailed comparative study of the performance advantages to be derived by the spectrum of instruction scheduling approaches: from limited basic-block schedulers in the compiler, to novel and aggressive run-time schedulers in hardware. A significant portion of our experimental study via simulations, is devoted to understanding the performance advantages of run-time scheduling. Our results indicate it to be effective in extracting the ILP inherent to the program trace being scheduled, over a wide range of machine and program parameters. Furthermore, we also show that this effectiveness can be further enhanced by a simple basic-block scheduler in the compiler, which optimizes for the presence of the run-time scheduler in the target; current basic-block schedulers are not designed to take advantage of this feature. We demonstrate this fact by presenting a novel enhanced basic-block scheduler in this paper. Finally, we outline a simple analytical characterization of the performance advantage, that run-time schedulers have to offer. ",
"neighbors": [
2096
],
"mask": "Train"
},
{
"node_id": 2143,
"label": 2,
"text": "Title: MULTIPLE SCALES OF BRAIN-MIND INTERACTIONS \nAbstract: Posner and Raichle's Images of Mind is an excellent educational book and very well written. Some aws as a scientific publication are: (a) the accuracy of the linear subtraction method used in PET is subject to scrutiny by further research at finer spatial-temporal resolutions; (b) lack of accuracy of the experimental paradigm used for EEG complementary studies. Images (Posner & Raichle, 1994) is an excellent introduction to interdisciplinary research in cognitive and imaging science. Well written and illustrated, it presents concepts in a manner well suited both to the layman/undergraduate and to the technical nonexpert/graduate student and postdoctoral researcher. Many, not all, people involved in interdisciplinary neuroscience research agree with the P & R's statements on page 33, on the importance of recognizing emergent properties of brain function from assemblies of neurons. It is clear from the sparse references that this book was not intended as a standalone review of a broad field. There are some aws in the scientific development, but this must be expected in such a pioneering venture. P & R hav e proposed many cognitive mechanisms deserving further study with imaging tools yet to be developed which can yield better spatial-temporal resolutions. ",
"neighbors": [
2181
],
"mask": "Train"
},
{
"node_id": 2144,
"label": 3,
"text": "Title: UNIVERSAL FORMULAS FOR TREATMENT EFFECTS FROM NONCOMPLIANCE DATA \nAbstract: This paper establishes formulas that can be used to bound the actual treatment effect in any experimental study in which treatment assignment is random but subject compliance is imperfect. These formulas provide the tightest bounds on the average treatment effect that can be inferred given the distribution of assignments, treatments, and responses. Our results reveal that even with high rates of noncompliance, experimental data can yield significant and sometimes accurate information on the effect of a treatment on the population.",
"neighbors": [
1326,
1894
],
"mask": "Train"
},
{
"node_id": 2145,
"label": 6,
"text": "Title: Exploration in Machine Learning \nAbstract: Most researchers in machine learning have built their learning systems under the assumption that some external entity would do all the work of furnishing the learning experiences. Recently, however, investigators in several subfields of machine learning have designed systems that play an active role in choosing the situations from which they will learn. Such activity is generally called exploration. This paper describes a few of these exploratory learning projects, as reported in the literature, and attempts to extract a general account of the issues involved in exploration.",
"neighbors": [
2408
],
"mask": "Validation"
},
{
"node_id": 2146,
"label": 6,
"text": "Title: On Learning Read-k-Satisfy-j DNF \nAbstract: We study the learnability of Read-k-Satisfy-j (RkSj) DNF formulas. These are boolean formulas in disjunctive normal form (DNF), in which the maximum number of occurrences of a variable is bounded by k, and the number of terms satisfied by any assignment is at most j. After motivating the investigation of this class of DNF formulas, we present an algorithm that with high probability finds a DNF formula that is logically equivalent to any unknown RkSj DNF formula to be learned. The algorithm uses the well-studied protocol of equivalence and membership queries, and runs in polynomial time for k j = O( log n log log n ), where n is the number of input variables.",
"neighbors": [
638,
1003,
1004,
1897,
2182
],
"mask": "Test"
},
{
"node_id": 2147,
"label": 2,
"text": "Title: Extraction of Facial Features for Recognition using Neural Networks \nAbstract: ",
"neighbors": [
2019,
2498,
2499
],
"mask": "Train"
},
{
"node_id": 2148,
"label": 3,
"text": "Title: Bayesian Design for the Normal Linear Model with Unknown Error Variance \nAbstract: Most of the Bayesian theory of optimal experimental design, for the normal linear model, has been developed under the restrictive assumption that the variance is known. In special cases, insensitivity of specific design criteria to specific prior assumptions on the variance has been demonstrated, but a general result to show the way in which Bayesian optimal designs are affected by prior information about the variance is lacking. This paper stresses the important distinction between expected utility functions and optimality criteria, examines a number of expected utility functions some of which possess interesting properties, and deserve wider use and derives the relevant Bayesian optimality criteria under normal assumptions. This unifying setup is useful for proving the main result of the paper, that clarifies the issue of designing for the normal linear model with unknown variance. ",
"neighbors": [
2136
],
"mask": "Validation"
},
{
"node_id": 2149,
"label": 5,
"text": "Title: Scheduling and Mapping: Software Pipelining in the Presence of Structural Hazards proposed formulation and a\nAbstract: Recently, software pipelining methods based on an ILP (Integer Linear Programming) framework have been successfully applied to derive rate-optimal schedules for architectures involving clean pipelines | pipelines without structural hazards. The problem for architectures beyond such clean pipelines remains open. One challenge is how, under a unified ILP framework, to simultaneously represent resource constraints for unclean pipelines, and the assignment or mapping of operations from a loop to those pipelines. In this paper we provide a framework which does exactly this, and in addition constructs rate-optimal software pipelined schedules. ",
"neighbors": [
1955,
2188,
2190,
2194
],
"mask": "Train"
},
{
"node_id": 2150,
"label": 4,
"text": "Title: Multi-time Models for Temporally Abstract Planning \nAbstract: Planning and learning at multiple levels of temporal abstraction is a key problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov decision processes and reinforcement learning. Current model-based reinforcement learning is based on one-step models that cannot represent common-sense higher-level actions, such as going to lunch, grasping an object, or flying to Denver. This paper generalizes prior work on temporally abstract models [Sutton, 1995] and extends it from the prediction setting to include actions, control, and planning. We introduce a more general form of temporally abstract model, the multi-time model, and establish its suitability for planning and learning by virtue of its relationship to the Bellman equations. This paper summarizes the theoretical framework of multi-time models and illustrates their potential advantages in a The need for hierarchical and abstract planning is a fundamental problem in AI (see, e.g., Sacerdoti, 1977; Laird et al., 1986; Korf, 1985; Kaelbling, 1993; Dayan & Hinton, 1993). Model-based reinforcement learning offers a possible solution to the problem of integrating planning with real-time learning and decision-making (Peng & Williams, 1993, Moore & Atkeson, 1993; Sutton and Barto, 1998). However, current model-based reinforcement learning is based on one-step models that cannot represent common-sense, higher-level actions. Modeling such actions requires the ability to handle different, interrelated levels of temporal abstraction. A new approach to modeling at multiple time scales was introduced by Sutton (1995) based on prior work by Singh , Dayan , and Sutton and Pinette . This approach enables models of the environment at different temporal scales to be intermixed, producing temporally abstract models. However, that work was concerned only with predicting the environment. This paper summarizes an extension of the approach including actions and control of the environment [Precup & Sutton, 1997]. In particular, we generalize the usual notion of a gridworld planning task.",
"neighbors": [
1192,
1954,
2179,
2183,
2222,
2305
],
"mask": "Test"
},
{
"node_id": 2151,
"label": 0,
"text": "Title: A Yardstick for the Evaluation of Case-Based Classifiers \nAbstract: This paper proposes that the generalisation capabilities of a case-based reasoning system can be evaluated by comparison with a `rote-learning' algorithm which uses a very simple generalisation strategy. Two such algorithms are defined, and expressions for their classification accuracy are derived as a function of the size of training sample. A series of experiments using artificial and `natural' data sets is described in which the learning curve for a case-based learner is compared with those for the apparently trivial rote-learning learning algorithms. The results show that in a number of `plausible' situations, the learning curves for a simple case-based learner and the `majority' rote-learner can barely be distinguished, although a domain is demonstrated where favourable performance from the case-based learner is observed. This suggests that the maxim of case-based reasoning that `similar problems have similar solutions' may be useful as the basis of a generalisation strategy only in selected domains.",
"neighbors": [
1109,
1584,
2037,
2342
],
"mask": "Validation"
},
{
"node_id": 2152,
"label": 1,
"text": "Title: Cellular Encoding for Interactive Evolutionary Robotics \nAbstract: Research in robotics programming is divided in two camps. The direct hand programmming approach uses an explicit model or a behavioral model ( subsumption architecture). The machine learning community uses neural network and/or genetic algorithm. We claim that hand programming and learning are complementary. The two approaches used together can be orders of magnitude more powerful than each approach taken separately. We propose a method to combine them both. It includes three concepts : syntactic constraints to restrict the search space, hand-made problem decomposition, hand given fitness. We use this method to solve a complex problem ( eight-legged locomotion). It needs 5000 less evaluations compared to when genetic algorithm are used alone. ",
"neighbors": [
1277,
2429
],
"mask": "Test"
},
{
"node_id": 2153,
"label": 3,
"text": "Title: Rates of convergence of the Hastings and Metropolis algorithms \nAbstract: We apply recent results in Markov chain theory to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and provide necessary and sufficient conditions for the algorithms to converge at a geometric rate to a prescribed distribution . In the independence case (in IR k ) these indicate that geometric convergence essentially occurs if and only if the candidate density is bounded below by a multiple of ; in the symmetric case (in IR only) we show geometric convergence essentially occurs if and only if has geometric tails. We also evaluate recently developed computable bounds on the rates of convergence in this context: examples show that these theoretical bounds can be inherently extremely conservative, although when the chain is stochastically monotone the bounds may well be effective. ",
"neighbors": [
115,
889,
1713,
1716,
1977,
1982,
1992,
2002,
2008,
2022,
2025,
2219,
2318,
2699
],
"mask": "Train"
},
{
"node_id": 2154,
"label": 2,
"text": "Title: Olfaction Metal Oxide Semiconductor Gas Sensors and Neural Networks \nAbstract: We apply recent results in Markov chain theory to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and provide necessary and sufficient conditions for the algorithms to converge at a geometric rate to a prescribed distribution . In the independence case (in IR k ) these indicate that geometric convergence essentially occurs if and only if the candidate density is bounded below by a multiple of ; in the symmetric case (in IR only) we show geometric convergence essentially occurs if and only if has geometric tails. We also evaluate recently developed computable bounds on the rates of convergence in this context: examples show that these theoretical bounds can be inherently extremely conservative, although when the chain is stochastically monotone the bounds may well be effective. ",
"neighbors": [
2119
],
"mask": "Test"
},
{
"node_id": 2155,
"label": 2,
"text": "Title: Cognitive Computation (Extended Abstract) \nAbstract: Cognitive computation is discussed as a discipline that links together neurobiology, cognitive psychology and artificial intelligence. ",
"neighbors": [
591,
2467
],
"mask": "Train"
},
{
"node_id": 2156,
"label": 6,
"text": "Title: WORST CASE PREDICTION OVER SEQUENCES UNDER LOG LOSS \nAbstract: We consider the game of sequentially assigning probabilities to future data based on past observations under logarithmic loss. We are not making probabilistic assumptions about the generation of the data, but consider a situation where a player tries to minimize his loss relative to the loss of the (with hindsight) best distribution from a target class for the worst sequence of data. We give bounds on the minimax regret in terms of the metric entropies of the target class with respect to suitable distances between distributions. ",
"neighbors": [
453,
2098
],
"mask": "Train"
},
{
"node_id": 2157,
"label": 0,
"text": "Title: Similarity Metrics: A Formal Unification of Cardinal and Non-Cardinal Similarity Measures \nAbstract: In [9] we introduced a formal framework for constructing ordinal similarity measures, and suggested how this might also be applied to cardinal measures. In this paper we will place this approach in a more general framework, called similarity metrics. In this framework, ordinal similarity metrics (where comparison returns a boolean value) can be combined with cardinal metrics (returning a numeric value) and, indeed, with metrics returning values of other types, to produce new metrics.",
"neighbors": [
66,
288,
2565
],
"mask": "Train"
},
{
"node_id": 2158,
"label": 5,
"text": "Title: Learning Recursion with Iterative Bootstrap Induction (Extended Abstract) \nAbstract: In this paper we are concerned with the problem of inducing recursive Horn clauses from small sets of training examples. The method of iterative bootstrap induction is presented. In the first step, the system generates simple clauses, which can be regarded as properties of the required definition. Properties represent generalizations of the positive examples, simulating the effect of having larger number of examples. Properties are used subsequently to induce the required recursive definitions. This paper describes the method together with a series of experiments. The results support the thesis that iterative bootstrap induction is indeed an effective technique that could be of general use in ILP.",
"neighbors": [
1498,
2449
],
"mask": "Validation"
},
{
"node_id": 2159,
"label": 3,
"text": "Title: Wavelet Shrinkage: Asymptopia? \nAbstract: Considerable effort has been directed recently to develop asymptotically minimax methods in problems of recovering infinite-dimensional objects (curves, densities, spectral densities, images) from noisy data. A rich and complex body of work has evolved, with nearly- or exactly- minimax estimators being obtained for a variety of interesting problems. Unfortunately, the results have often not been translated into practice, for a variety of reasons sometimes, similarity to known methods, sometimes, computational intractability, and sometimes, lack of spatial adaptivity. We discuss a method for curve estimation based on n noisy data; one translates the empirical wavelet coefficients towards the origin by an amount method is different from methods in common use today, is computationally practical, and is spatially adaptive; thus it avoids a number of previous objections to minimax estimators. At the same time, the method is nearly minimax for a wide variety of loss functions - e.g. pointwise error, global error measured in L p norms, pointwise and global error in estimation of derivatives and for a wide range of smoothness classes, including standard Holder classes, Sobolev classes, and Bounded Variation. This is a much broader near-optimality than anything previously proposed in the minimax literature. Finally, the theory underlying the method is interesting, as it exploits a correspondence between statistical questions and questions of optimal recovery and information-based complexity. Acknowledgements: These results have been described at the Oberwolfach meeting `Mathematische Stochastik' December, 1992 and at the AMS Annual meeting, January 1993. This work was supported by NSF DMS 92-09130. The authors would like to thank Paul-Louis Hennequin, who organized the Ecole d' Ete de Probabilites at Saint Flour 1990, where this collaboration began, and to Universite de Paris VII (Jussieu) and Universite de Paris-sud (Orsay) for supporting visits of DLD and IMJ. The authors would like to thank Ildar Ibragimov and Arkady Nemirovskii for personal correspondence cited below. p",
"neighbors": [
1910,
2081,
2661
],
"mask": "Test"
},
{
"node_id": 2160,
"label": 3,
"text": "Title: group, and despite having just 337 subjects, the study strongly supports Identification of causal effects\nAbstract: Figure 8a and Figure 8b show the prior distribution over f(-CR ) that follows from the flat prior and the skewed prior, respectively. Figure 8c and Figure 8d show the posterior distribution p(f (-CR jD)) obtained by our system when run on the Lipid data, using the flat prior and the skewed prior, respectively. From the bounds of Balke and Pearl (1994), it follows that under the large-sample assumption, 0:51 f (-CR jD) 0:86. Figure 8: Prior (a, b) and posterior (c,d) distributions for a subpopulation f (-CR jD) specified by the counter-factual query \"Would Joe have improved had he taken the drug, given that he did not improve without it\". (a) corresponds to the flat prior, (b) to the skewed prior. This paper identifies and demonstrates a new application area for network-based inference techniques - the management of causal analysis in clinical experimentation. These techniques, which were originally developed for medical diagnosis, are shown capable of circumventing one of the major problems in clinical experiments the assessment of treatment efficacy in the face of imperfect compliance. While standard diagnosis involves purely probabilistic inference in fully specified networks, causal analysis involves partially specified networks in which the links are given causal interpretation and where the domain of some variables are unknown. The system presented in this paper provides the clinical research community, we believe for the first time, an assumption-free, unbiased assessment of the average treatment effect. We offer this system as a practical tool to be used whenever full compliance cannot be enforced and, more broadly, whenever the data available is insufficient for answering the queries of interest to the clinical investigator. Lipid Research Clinic Program. 1984. The lipid research clinics coronary primary prevention trial results, parts i and ii. Journal of the American Medical Association 251(3):351-374. January. ",
"neighbors": [
1747,
2434
],
"mask": "Train"
},
{
"node_id": 2161,
"label": 3,
"text": "Title: On The Foundation Of Structural Equation Models or \nAbstract: When Can We Give Causal Interpretation Abstract The assumptions underlying statistical estimation are of fundamentally different character from the causal assumptions that underly structural equation models (SEM). The differences have been blurred through the years for the lack of a mathematical notation capable of distinguishing causal from equational relationships. Recent advances in graphical methods provide formal explication of these differences, and are destined to have profound impact on SEM's practice and philosophy.",
"neighbors": [
1326,
2167
],
"mask": "Train"
},
{
"node_id": 2162,
"label": 2,
"text": "Title: Incremental Class Learning approach and its application to Handwritten Digit Recognition \nAbstract: Incremental Class Learning (ICL) provides a feasible framework for the development of scalable learning systems. Instead of learning a complex problem at once, ICL focuses on learning subproblems incrementally, one at a time | using the results of prior learning for subsequent learning | and then combining the solutions in an appropriate manner. With respect to multi-class classification problems, the ICL approach presented in this paper can be summarized as follows. Initially the system focuses on one category. After it learns this category, it tries to identify a compact subset of features (nodes) in the hidden layers, that are crucial for the recognition of this category. The system then freezes these crucial nodes (features) by fixing their incoming weights. As a result, these features cannot be obliterated in subsequent learning. These frozen features are available during subsequent learning and can serve as parts of weight structures build to recognize other categories. As more categories are learned, the set of features gradually stabilizes and learning a new category requires less effort. Eventually, learning a new category may only involve combining existing features in an appropriate manner. The approach promotes the sharing of learned features among a number of categories and also alleviates the well-known catastrophic interference problem. We present results of applying the ICL approach to the Handwritten Digit Recognition problem, based on a spatio-temporal representation of patterns. ",
"neighbors": [
745,
2586,
2599
],
"mask": "Validation"
},
{
"node_id": 2163,
"label": 5,
"text": "Title: Speculative Hedge: Regulating Compile-Time Speculation Against Profile Variations code performance in the presence of execution\nAbstract: Path-oriented scheduling methods, such as trace scheduling and hyperblock scheduling, use speculation to extract instruction-level parallelism from control-intensive programs. These methods predict important execution paths in the current scheduling scope using execution profiling or frequency estimation. Aggressive speculation is then applied to the important execution paths, possibly at the cost of degraded performance along other paths. Therefore, the speed of the output code can be sensitive to the compiler's ability to accurately predict the important execution paths. Prior work in this area has utilized the speculative yield function by Fisher, coupled with dependence height, to distribute instruction priority among execution paths in the scheduling scope. While this technique provides more stability of performance by paying attention to the needs of all paths, it does not directly address the problem of mismatch between compile-time prediction and run-time behavior. The work presented in this paper extends the speculative yield and dependence height heuristic to explicitly minimize the penalty suffered by other paths when instructions are speculated along a path. Since the execution time of a path is determined by the number of cycles spent between a path's entrance and exit in the scheduling scope, the heuristic attempts to eliminate unnecessary speculation that delays any path's exit. Such control of speculation makes the performance much less sensitive to the actual path taken at run time. The proposed method has a strong emphasis on achieving minimal delay to all exits. Thus the name, speculative hedge, is used. This paper presents the speculative hedge heuristic, and shows how it controls over-speculation in a superblock/hyperblock scheduler. The stability of out Copyright 1996 IEEE. Published in the Proceedings of the 29th Annual International Symposium on Microarchitecture, De-cember 2-4, 1996, Paris, France. Personal use of this material is permitted. However, permission to reprint/republish this material for resale or redistribution purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE. Contact: Manager, Copyrights and Permissions / IEEE Service Center / 445 Hoes Lane / P.O. Box 1331 / Piscataway, NJ 08855-1331, USA. Telephone: + Intl. 908-562-3966 ",
"neighbors": [
1849
],
"mask": "Test"
},
{
"node_id": 2164,
"label": 3,
"text": "Title: Efficient Inference in Bayes Networks As A Combinatorial Optimization Problem \nAbstract: A number of exact algorithms have been developed to perform probabilistic inference in Bayesian belief networks in recent years. The techniques used in these algorithms are closely related to network structures and some of them are not easy to understand and implement. In this paper, we consider the problem from the combinatorial optimization point of view and state that efficient probabilistic inference in a belief network is a problem of finding an optimal factoring given a set of probability distributions. From this viewpoint, previously developed algorithms can be seen as alternate factoring strategies. In this paper, we define a combinatorial optimization problem, the optimal factoring problem, and discuss application of this problem in belief networks. We show that optimal factoring provides insight into the key elements of efficient probabilistic inference, and demonstrate simple, easily implemented algorithms with excellent performance. ",
"neighbors": [
5,
515,
1749,
2094,
2521
],
"mask": "Train"
},
{
"node_id": 2165,
"label": 1,
"text": "Title: Auto-teaching: networks that develop their own teaching input \nAbstract: Backpropagation learning (Rumelhart, Hinton and Williams, 1986) is a useful research tool but it has a number of undesiderable features such as having the experimenter decide from outside what should be learned. We describe a number of simulations of neural networks that internally generate their own teaching input. The networks generate the teaching input by trasforming the network input through connection weights that are evolved using a form of genetic algorithm. What results is an innate (evolved) capacity not to behave efficiently in an environment but to learn to behave efficiently. The analysis of what these networks evolve to learn shows some interesting results. ",
"neighbors": [
129,
538,
745,
1143,
2193,
2363
],
"mask": "Train"
},
{
"node_id": 2166,
"label": 3,
"text": "Title: Probabilistic evaluation of counterfactual queries \nAbstract: To appear in the Twelfth National Conference on Artificial Intelligence (AAAI-94), Seattle, WA, July 31 August 4, 1994. Technical Report R-213-A April, 1994 Abstract Evaluation of counterfactual queries (e.g., \"If A were true, would C have been true?\") is important to fault diagnosis, planning, and determination of liability. We present a formalism that uses probabilistic causal networks to evaluate one's belief that the counterfactual consequent, C, would have been true if the antecedent, A, were true. The antecedent of the query is interpreted as an external action that forces the proposition A to be true, which is consistent with Lewis' Miraculous Analysis. This formalism offers a concrete embodiment of the \"closest world\" approach which (1) properly reflects common understanding of causal influences, (2) deals with the uncertainties inherent in the world, and (3) is amenable to machine representation. ",
"neighbors": [
260,
772,
776,
971,
1527,
2167,
2524
],
"mask": "Train"
},
{
"node_id": 2167,
"label": 3,
"text": "Title: Counterfactuals and Policy Analysis in Structural Models \nAbstract: Evaluation of counterfactual queries (e.g., \"If A were true, would C have been true?\") is important to fault diagnosis, planning, determination of liability, and policy analysis. We present a method for evaluating counter-factuals when the underlying causal model is represented by structural models a nonlinear generalization of the simultaneous equations models commonly used in econometrics and social sciences. This new method provides a coherent means for evaluating policies involving the control of variables which, prior to enacting the policy were influenced by other variables in the system. ",
"neighbors": [
772,
2088,
2161,
2166
],
"mask": "Train"
},
{
"node_id": 2168,
"label": 6,
"text": "Title: Malicious Membership Queries and Exceptions \nAbstract: Evaluation of counterfactual queries (e.g., \"If A were true, would C have been true?\") is important to fault diagnosis, planning, determination of liability, and policy analysis. We present a method for evaluating counter-factuals when the underlying causal model is represented by structural models a nonlinear generalization of the simultaneous equations models commonly used in econometrics and social sciences. This new method provides a coherent means for evaluating policies involving the control of variables which, prior to enacting the policy were influenced by other variables in the system. ",
"neighbors": [
1363,
2350
],
"mask": "Train"
},
{
"node_id": 2169,
"label": 3,
"text": "Title: Theory Refinement on Bayesian Networks \nAbstract: Theory refinement is the task of updating a domain theory in the light of new cases, to be done automatically or with some expert assistance. The problem of theory refinement under uncertainty is reviewed here in the context of Bayesian statistics, a theory of belief revision. The problem is reduced to an incremental learning task as follows: the learning system is initially primed with a partial theory supplied by a domain expert, and thereafter maintains its own internal representation of alternative theories which is able to be interrogated by the domain expert and able to be incrementally refined from data. Algorithms for refinement of Bayesian networks are presented to illustrate what is meant by \"partial theory\", \"alternative theory representation\", etc. The algorithms are an incremental variant of batch learning algorithms from the literature so can work well in batch and incremental mode.",
"neighbors": [
1290,
2420
],
"mask": "Test"
},
{
"node_id": 2170,
"label": 1,
"text": "Title: Generalist and Specialist Behavior Due to Individual Energy Extracting Abilities. \nAbstract: The emergence of generalist and specialist behavior in populations of neural networks is studied. Energy extracting ability is included as a property of an organism. In artificial life simulations with organisms living in an environment, the fitness score can be interpreted as the combination of an organisms behavior and the ability of the organism to extract energy from potential food sources distributed in the environment. The energy extracting ability is viewed as an evolvable trait of organisms a particular organism's mechanisms for extracting energy from the environment and, therefore, it is not fixed and decided by the researcher. Simulations with fixed and evolvable energy extracting abilities show that the energy extracting mechanism, the sensory apparatus, and the behavior of organisms may co-evolve and be co-adapted. The results suggest that populations of organisms evolve to be generalists or specialists due to individual energy extracting abilities.",
"neighbors": [
1325,
2237
],
"mask": "Train"
},
{
"node_id": 2171,
"label": 5,
"text": "Title: K unstliche Intelligenz Grdt: Enhancing Model-Based Learning for Its Application in Robot Navigation \nAbstract: The emergence of generalist and specialist behavior in populations of neural networks is studied. Energy extracting ability is included as a property of an organism. In artificial life simulations with organisms living in an environment, the fitness score can be interpreted as the combination of an organisms behavior and the ability of the organism to extract energy from potential food sources distributed in the environment. The energy extracting ability is viewed as an evolvable trait of organisms a particular organism's mechanisms for extracting energy from the environment and, therefore, it is not fixed and decided by the researcher. Simulations with fixed and evolvable energy extracting abilities show that the energy extracting mechanism, the sensory apparatus, and the behavior of organisms may co-evolve and be co-adapted. The results suggest that populations of organisms evolve to be generalists or specialists due to individual energy extracting abilities.",
"neighbors": [
344,
638,
2031,
2032
],
"mask": "Test"
},
{
"node_id": 2172,
"label": 6,
"text": "Title: Tractability of Theory Patching \nAbstract: In this paper we consider the problem of theory patching, in which we are given a domain theory, some of whose components are indicated to be possibly flawed, and a set of labeled training examples for the domain concept. The theory patching problem is to revise only the indicated components of the theory, such that the resulting theory correctly classifies all the training examples. Theory patching is thus a type of theory revision in which revisions are made to individual components of the theory. Our concern in this paper is to determine for which classes of logical domain theories the theory patching problem is tractable. We consider both propositional and first-order domain theories, and show that the theory patching problem is equivalent to that of determining what information contained in a theory is stable regardless of what revisions might be performed to the theory. We show that determining stability is tractable if the input theory satisfies two conditions: that revisions to each theory component have monotonic effects on the classification of examples, and that theory components act independently in the classification of examples in the theory. We also show how the concepts introduced can be used to determine the soundness and completeness of particular theory patching algorithms.",
"neighbors": [
136,
159,
2692
],
"mask": "Train"
},
{
"node_id": 2173,
"label": 1,
"text": "Title: Adapting Control Strategies for Situated Autonomous Agents \nAbstract: This paper studies how to balance evolutionary design and human expertise in order to best design situated autonomous agents which can learn specific tasks. A genetic algorithm designs control circuits to learn simple behaviors, and given control strategies for simple behaviors, the genetic algorithm designs a combinational circuit that switches between these simple behaviors to perform a navigation task. Keywords: Genetic Algorithms, Computational Design, Autonomous Agents, Robotics. ",
"neighbors": [
163,
636,
846,
2204
],
"mask": "Test"
},
{
"node_id": 2174,
"label": 4,
"text": "Title: The Role of the Trainer in Reinforcement Learning \nAbstract: In this paper we propose a threestage incremental approach to the development of autonomous agents. We discuss some issues about the characteristics which differentiate reinforcement programs (RPs), and define the trainer as a particular kind of RP. We present a set of results obtained running experiments with a trainer which provides guidance to the AutonoMouse, our mousesized autonomous robot. ",
"neighbors": [
636,
764,
1573,
2687
],
"mask": "Train"
},
{
"node_id": 2175,
"label": 1,
"text": "Title: The Troubling Aspects of a Building Block Hypothesis for Genetic Programming \nAbstract: In this paper we carefully formulate a Schema Theorem for Genetic Programming (GP) using a schema definition that accounts for the variable length and the non-homologous nature of GP's representation. In a manner similar to early GA research, we use interpretations of our GP Schema Theorem to obtain a GP Building Block definition and to state a \"classical\" Building Block Hypothesis (BBH): that GP searches by hierarchically combining building blocks. We report that this approach is not convincing for several reasons: it is difficult to find support for the promotion and combination of building blocks solely by rigourous interpretation of a GP Schema Theorem; even if there were such support for a BBH, it is empirically questionable whether building blocks always exist because partial solutions of consistently above average fitness and resilience to disruption are not assured; also, a BBH constitutes a narrow and imprecise account of GP search behavior.",
"neighbors": [
120,
163,
1009,
1257,
1362,
1696,
1719,
1745,
1940,
2087,
2206,
2249,
2250,
2259,
2361
],
"mask": "Train"
},
{
"node_id": 2176,
"label": 2,
"text": "Title: An Analytical Framework for Local Feedforward Networks \nAbstract: Interference in neural networks occurs when learning in one area of the input space causes unlearning in another area. Networks that are less susceptible to interference are referred to as spatially local networks. To understand these properties, a theoretical framework, consisting of a measure of interference and a measure of network localization, is developed. These measures incorporate not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal, multi-layer perceptron (MLP) networks that employ the back-propagation learning algorithm, we address a familiar misconception that single-hidden-layer sigmoidal networks are inherently non-local by demonstrating that given a sufficiently large number of adjustable weights, single-hidden-layer sigmoidal MLPs can be made arbitrarily local while retaining the ability to approximate any continuous function on a compact domain. fl Partially supported under Task 2312 R1 by the United States Air Force Office of Scientific Research.",
"neighbors": [
1914,
2535
],
"mask": "Test"
},
{
"node_id": 2177,
"label": 1,
"text": "Title: Analyzing Social Network Structures in the Iterated Prisoner's Dilemma with Choice and Refusal \nAbstract: University of Wisconsin-Madison, Department of Computer Sciences Technical Report CS-TR-94-1259 Abstract The Iterated Prisoner's Dilemma with Choice and Refusal (IPD/CR) [46] is an extension of the Iterated Prisoner's Dilemma with evolution that allows players to choose and to refuse their game partners. From individual behaviors, behavioral population structures emerge. In this report, we examine one particular IPD/CR environment and document the social network methods used to identify population behaviors found within this complex adaptive system. In contrast to the standard homogeneous population of nice cooperators, we have also found metastable populations of mixed strategies within this environment. In particular, the social networks of interesting populations and their evolution are examined.",
"neighbors": [
163,
1883
],
"mask": "Test"
},
{
"node_id": 2178,
"label": 2,
"text": "Title: Statistical Mechanics of Nonlinear Nonequilibrium Financial Markets: Applications to Optimized Trading \nAbstract: A paradigm of statistical mechanics of financial markets (SMFM) using nonlinear nonequilibrium algorithms, first published in L. Ingber, Mathematical Modelling, 5, 343-361 (1984), is fit to multi-variate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. ",
"neighbors": [
1788,
1793,
1794,
1795,
2082,
2181,
2545,
2582
],
"mask": "Train"
},
{
"node_id": 2179,
"label": 4,
"text": "Title: Scaling Reinforcement Learning Algorithms by Learning Variable Temporal Resolution Models \nAbstract: The close connection between reinforcement learning (RL) algorithms and dynamic programming algorithms has fueled research on RL within the machine learning community. Yet, despite increased theoretical understanding, RL algorithms remain applicable to simple tasks only. In this paper I use the abstract framework afforded by the connection to dynamic programming to discuss the scaling issues faced by RL researchers. I focus on learning agents that have to learn to solve multiple structured RL tasks in the same environment. I propose learning abstract environment models where the abstract actions represent \"intentions\" of achieving a particular state. Such models are variable temporal resolution models because in different parts of the state space the abstract actions span different number of time steps. The operational definitions of abstract actions can be learned incrementally using repeated experience at solving RL tasks. I prove that under certain conditions solutions to new RL tasks can be found by using simu lated experience with abstract actions alone.",
"neighbors": [
321,
2150,
2183
],
"mask": "Validation"
},
{
"node_id": 2180,
"label": 6,
"text": "Title: Oblivious Decision Trees, Graphs, and Top-Down Pruning \nAbstract: We describe a supervised learning algorithm, EODG, that uses mutual information to build an oblivious decision tree. The tree is then converted to an Oblivious read-Once Decision Graph (OODG) by merging nodes at the same level of the tree. For domains that are appropriate for both decision trees and OODGs, performance is approximately the same as that of C4.5, but the number of nodes in the OODG is much smaller. The merging phase that converts the oblivious decision tree to an OODG provides a new way of dealing with the replication problem and a new pruning mechanism that works top down starting from the root. The pruning mechanism is well suited for finding symmetries and aids in recovering from splits on irrelevant features that may happen during the tree construction.",
"neighbors": [
1500,
2577
],
"mask": "Validation"
},
{
"node_id": 2181,
"label": 2,
"text": "Title: Statistical mechanics of neocortical interactions: Training and testing canonical momenta indicators of EEG \nAbstract: A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. While not useful to yield insights at the single neuron level, SMNI has demonstrated its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. The necessity of including nonlinear and stochastic structures in this development has been stressed. Sets of EEG and evoked potential data were fit, collected to investigate genetic predispositions to alcoholism and to extract brain signatures of short-term memory. Adaptive Simulated Annealing (ASA), a global optimization algorithm, was used to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta indicators (CMI) are thereby derived for individual's EEG data. The CMI give better signal recognition than the raw data, and can be used to advantage as correlates of behavioral states. These results give strong quantitative support for an accurate intuitive picture, portraying neocortical interactions as having common algebraic or physics mechanisms that scale across quite disparate spatial scales and functional or behavioral phenomena, i.e., describing interactions among neurons, columns of neurons, and regional masses of neurons. This paper adds to these previous investigations two important aspects, a description of how the CMI may be used in source localization, and calculations using previously ASA-fitted parameters in out-of-sample data. ",
"neighbors": [
1788,
1793,
1795,
2143,
2178
],
"mask": "Validation"
},
{
"node_id": 2182,
"label": 6,
"text": "Title: Weakly Learning DNF and Characterizing Statistical Query Learning Using Fourier Analysis \nAbstract: We present new results, both positive and negative, on the well-studied problem of learning disjunctive normal form (DNF) expressions. We first prove that an algorithm due to Kushilevitz and Mansour [16] can be used to weakly learn DNF using membership queries in polynomial time, with respect to the uniform distribution on the inputs. This is the first positive result for learning unrestricted DNF expressions in polynomial time in any nontrivial formal model of learning. It provides a sharp contrast with the results of Kharitonov [15], who proved that AC 0 is not efficiently learnable in the same model (given certain plausible cryptographic assumptions). We also present efficient learning algorithms in various models for the read-k and SAT-k subclasses of DNF. For our negative results, we turn our attention to the recently introduced statistical query model of learning [11]. This model is a restricted version of the popular Probably Approximately Correct (PAC) model [23], and practically every class known to be efficiently learnable in the PAC model is in fact learnable in the statistical query model [11]. Here we give a general characterization of the complexity of statistical query learning in terms of the number of uncorrelated functions in the concept class. This is a distribution-dependent quantity yielding upper and lower bounds on the number of statistical queries required for learning on any input distribution. As a corollary, we obtain that DNF expressions and decision trees are not even weakly learnable with fl This research is sponsored in part by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. Support also is sponsored by the National Science Foundation under Grant No. CC-9119319. Blum also supported in part by NSF National Young Investigator grant CCR-9357793. Views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of Wright Laboratory or the United States Government, or NSF. respect to the uniform input distribution in polynomial time in the statistical query model. This result is information-theoretic and therefore does not rely on any unproven assumptions. It demonstrates that no simple modification of the existing algorithms in the computational learning theory literature for learning various restricted forms of DNF and decision trees from passive random examples (and also several algorithms proposed in the experimental machine learning communities, such as the ID3 algorithm for decision trees [22] and its variants) will solve the general problem. The unifying tool for all of our results is the Fourier analysis of a finite class of boolean functions on the hypercube. ",
"neighbors": [
591,
1003,
1748,
1835,
1897,
2011,
2146,
2633
],
"mask": "Test"
},
{
"node_id": 2183,
"label": 4,
"text": "Title: Multi-time Models for Temporally \nAbstract: Planning Abstract Planning and learning at multiple levels of temporal abstraction is a key problem for artificial intelligence. In this paper we summarize an approach to this problem based on the mathematical framework of Markov decision processes and reinforcement learning. Current model-based reinforcement learning is based on one-step models that cannot represent common-sense higher-level actions, such as going to lunch, grasping an object, or flying to Denver. This paper generalizes prior work on temporally abstract models (Sutton, 1995b) and extends it from the prediction setting to include actions, control, and planning. We introduce a more general form of temporally abstract model, the multi-time model, and establish its suitability for planning and learning by virtue of its relationship to Bellman equations. This paper summarizes the theoretical framework of multi-time models and illustrates their potential ad The need for hierarchical and abstract planning is a fundamental problem in AI (see, e.g., Sacerdoti, 1977; Laird et al., 1986; Korf, 1985; Kaelbling, 1993; Dayan & Hinton, 1993). Model-based reinforcement learning offers a possible solution to the problem of integrating planning with real-time learning and decision-making (Peng & Williams, 1993, Moore & Atkeson, 1993; Sutton and Barto, in press). However, current model-based reinforcement learning is based on one-step models that cannot represent common-sense, higher-level actions. Modeling such actions requires the ability to handle different, interrelated levels of temporal abstraction. A new approach to modeling at multiple time scales was introduced by Sutton (1995b) based on prior work by Singh (1992), Dayan (1993b), and Sutton and Pinette (1985). This approach enables models of the environment at different temporal scales to be intermixed, producing temporally abstract models. However, that work was concerned only with predicting the environment. This paper summarizes vantages in a gridworld planning task.",
"neighbors": [
1954,
2150,
2179,
2305
],
"mask": "Train"
},
{
"node_id": 2184,
"label": 0,
"text": "Title: AN ARCHITECTURE FOR GOAL-DRIVEN EXPLANATION \nAbstract: In complex and changing environments explanation must be a a dynamic and goal-driven process. This paper discusses an evolving system implementing a novel model of explanation generation | Goal-Driven Interactive Explanation | that models explanation as a goal-driven, multi-strategy, situated process inter-weaving reasoning with action. We describe a preliminary implementation of this model in gobie, a system that generates explanations for its internal use to support plan generation and execution. ",
"neighbors": [
2398
],
"mask": "Validation"
},
{
"node_id": 2185,
"label": 2,
"text": "Title: of nucleotide sites needed to accurately reconstruct large evolutionary trees 1 \nAbstract: DIMACS Technical Report 96-19 July 1996 ",
"neighbors": [
2109
],
"mask": "Test"
},
{
"node_id": 2186,
"label": 2,
"text": "Title: A REMARK ON ROBUST STABILIZATION OF GENERAL ASYMPTOTICALLY CONTROLLABLE SYSTEMS \nAbstract: It was shown recently by Clarke, Ledyaev, Sontag and Subbotin that any asymptotically controllable system can be stabilized by means of a certain type of discontinuous feedback. The feedback laws constructed in that work are robust with respect to actuator errors as well as to perturbations of the system dynamics. A drawback, however, is that they may be highly sensitive to errors in the measurement of the state vector. This paper addresses this shortcoming, and shows how to design a dynamic hybrid stabilizing controller which, while preserving robustness to external perturbations and actuator error, is also robust with respect to measurement error. This new design relies upon a controller which incorporates an internal model of the system driven by the previously constructed feedback. ",
"neighbors": [
2321
],
"mask": "Test"
},
{
"node_id": 2187,
"label": 2,
"text": "Title: \"UNIVERSAL\" CONSTRUCTION OF ARTSTEIN'S THEOREM ON NONLINEAR STABILIZATION 1 \nAbstract: Report SYCON-89-03 ABSTRACT This note presents an explicit proof of the theorem -due to Artstein- which states that the existence of a smooth control-Lyapunov function implies smooth stabilizability. More- over, the result is extended to the real-analytic and rational cases as well. The proof uses a \"universal\" formula given by an algebraic function of Lie derivatives; this formula originates in the solution of a simple Riccati equation. ",
"neighbors": [
531,
2314,
2321
],
"mask": "Test"
},
{
"node_id": 2188,
"label": 5,
"text": "Title: Improving Software Pipelining With Unroll-and-Jam \nAbstract: In this paper, we demonstrate how unroll-and-jam can significantly improve the initiation interval in a software-pipelined loop. Improvements in the initiation interval of greater than 40% are common, while dramatic improvements of a factor of 5 are possible. ",
"neighbors": [
2149,
2189,
2190,
2194
],
"mask": "Validation"
},
{
"node_id": 2189,
"label": 5,
"text": "Title: Stage Scheduling: A Technique to Reduce the Register Requirements of a Modulo Schedule \nAbstract: Modulo scheduling is an efficient technique for exploiting instruction level parallelism in a variety of loops, resulting in high performance code but increased register requirements. We present a set of low computational complexity stage-scheduling heuristics that reduce the register requirements of a given modulo schedule by shifting operations by multiples of II cycles. Measurements on a benchmark suite of 1289 loops from the Perfect Club, SPEC-89, and the Livermore Fortran Kernels shows that our best heuristic achieves on average 99% of the decrease in register requirements obtained by an optimal stage scheduler. ",
"neighbors": [
2188,
2190,
2194,
2365
],
"mask": "Train"
},
{
"node_id": 2190,
"label": 5,
"text": "Title: Minimum Register Requirements for a Modulo Schedule \nAbstract: Modulo scheduling is an efficient technique for exploiting instruction level parallelism in a variety of loops, resulting in high performance code but increased register requirements. We present a combined approach that schedules the loop operations for minimum register requirements, given a modulo reservation table. Our method determines optimal register requirements for machines with finite resources and for general dependence graphs. This method demonstrates the potential of lifetime-sensitive modulo scheduling and is useful in evaluating the performance of lifetime-sensitive modulo scheduling heuristics. ",
"neighbors": [
1955,
2149,
2188,
2189,
2194,
2365
],
"mask": "Validation"
},
{
"node_id": 2191,
"label": 2,
"text": "Reference: [Tex89] Texas Instruments. TMS320C30 C Compiler Reference Guide, 1989. Document Title: SPRU034A. \nAbstract: The design and implementation of software for the Ring Array Processor (RAP), a high performance parallel computer, involved development for three hardware platforms: Sun SPARC workstations, Heurikon MC68020 boards running the VxWorks real-time operating system, and Texas Instruments TMS320C30 DSPs. The RAP now runs in Sun workstations under UNIX and in a VME based system using VxWorks. A flexible set of tools has been provided both to the RAP user and programmer. Primary emphasis has been placed on improving the efficiency of layered artificial neural network algorithms. This was done by providing a library of assembly language routines, some of which use node-custom compilation. An object-oriented RAP interface in C++ is provided that allows programmers to incorporate the RAP as a computational server into their own UNIX applications. For those not wishing to program in C++, a command interpreter has been built that provides interactive and shell-script style RAP manipulation. ",
"neighbors": [
362,
2275
],
"mask": "Validation"
},
{
"node_id": 2192,
"label": 1,
"text": "Title: #1 Robust Feature Selection Algorithms \nAbstract: Selecting a set of features which is optimal for a given task is a problem which plays an important role in a wide variety of contexts including pattern recognition, adaptive control, and machine learning. Our experience with traditional feature selection algorithms in the domain of machine learning lead to an appreciation for their computational efficiency and a concern for their brittleness. This paper describes an alternate approach to feature selection which uses genetic algorithms as the primary search component. Results are presented which suggest that genetic algorithms can be used to increase the robustness of feature selection algorithms without a significant decrease in computational efficiency. ",
"neighbors": [
177,
1743
],
"mask": "Train"
},
{
"node_id": 2193,
"label": 1,
"text": "Title: Growing neural networks \nAbstract: Selecting a set of features which is optimal for a given task is a problem which plays an important role in a wide variety of contexts including pattern recognition, adaptive control, and machine learning. Our experience with traditional feature selection algorithms in the domain of machine learning lead to an appreciation for their computational efficiency and a concern for their brittleness. This paper describes an alternate approach to feature selection which uses genetic algorithms as the primary search component. Results are presented which suggest that genetic algorithms can be used to increase the robustness of feature selection algorithms without a significant decrease in computational efficiency. ",
"neighbors": [
129,
538,
2165
],
"mask": "Train"
},
{
"node_id": 2194,
"label": 5,
"text": "Title: Minimizing Register Requirements under Resource-Constrained Rate-Optimal Software Pipelining \nAbstract: In this paper we address the following software pipelin-ing problem: given a loop and a machine architecture with a fixed number of processor resources (e.g. function units), how can one construct a software-pipelined schedule which runs on the given architecture at the maximum possible iteration rate (a la rate-optimal) while minimizing the number of registers? The main contributions of this paper are: * First, we demonstrate that such problem can be described by a simple mathematical formulation with precise optimization objectives under periodic linear scheduling framework. The mathematical formulation provides a clear picture which permits one to visualize the overall solution space (for rate-optimal schedules) under different sets of con straints. * Secondly, we show that a precise mathematical formulation and its solution does make a significant performance difference! We evaluated the performance of our method against three other leading contemporary heuristic methods: Huff 's Slack Scheduling [9], Wang, Eisenbeis, Jourdan and Su's FRLC [23], and Gasperoni and Schwiegelshohn's modified list scheduling [6]. Experimental results show that the method described in this paper performed significantly better than these methods. ",
"neighbors": [
1955,
2149,
2188,
2189,
2190
],
"mask": "Validation"
},
{
"node_id": 2195,
"label": 5,
"text": "Title: LEARNING FOR DECISION MAKING: The FRD Approach and a Comparative Study Machine Learning and Inference Laboratory \nAbstract: This paper concerns the issue of what is the best form for learning, representing and using knowledge for decision making. The proposed answer is that such knowledge should be learned and represented in a declarative form. When needed for decision making, it should be efficiently transferred to a procedural form that is tailored to the specific decision making situation. Such an approach combines advantages of the declarative representation, which facilitates learning and incremental knowledge modification, and the procedural representation, which facilitates the use of knowledge for decision making. This approach also allows one to determine decision structures that may avoid attributes that unavailable or difficult to measure in any given situation. Experimental investigations of the system, FRD-1, have demonstrated that decision structures obtained via the declarative route often have not only higher predictive accuracy but are also are simpler than those learned directly from facts.",
"neighbors": [
286,
378,
1963
],
"mask": "Test"
},
{
"node_id": 2196,
"label": 1,
"text": "Title: Effects of Occam's Razor in Evolving Sigma-Pi Neural Nets \nAbstract: Several evolutionary algorithms make use of hierarchical representations of variable size rather than linear strings of fixed length. Variable complexity of the structures provides an additional representational power which may widen the application domain of evolutionary algorithms. The price for this is, however, that the search space is open-ended and solutions may grow to arbitrarily large size. In this paper we study the effects of structural complexity of the solutions on their generalization performance by analyzing the fitness landscape of sigma-pi neural networks. The analysis suggests that smaller networks achieve, on average, better generalization accuracy than larger ones, thus confirming the usefulness of Occam's razor. A simple method for implementing the Occam's razor principle is described and shown to be effective in improv ing the generalization accuracy without limiting their learning capacity.",
"neighbors": [
163,
380,
938,
2267
],
"mask": "Train"
},
{
"node_id": 2197,
"label": 6,
"text": "Title: MLC Tutorial A Machine Learning library of C classes. \nAbstract: Several evolutionary algorithms make use of hierarchical representations of variable size rather than linear strings of fixed length. Variable complexity of the structures provides an additional representational power which may widen the application domain of evolutionary algorithms. The price for this is, however, that the search space is open-ended and solutions may grow to arbitrarily large size. In this paper we study the effects of structural complexity of the solutions on their generalization performance by analyzing the fitness landscape of sigma-pi neural networks. The analysis suggests that smaller networks achieve, on average, better generalization accuracy than larger ones, thus confirming the usefulness of Occam's razor. A simple method for implementing the Occam's razor principle is described and shown to be effective in improv ing the generalization accuracy without limiting their learning capacity.",
"neighbors": [
430,
2342
],
"mask": "Test"
},
{
"node_id": 2198,
"label": 6,
"text": "Title: An Incremental Interactive Algorithm for Regular Grammar Inference \nAbstract: We present provably correct interactive algorithms for learning regular grammars from positive examples and membership queries. A structurally complete set of strings from a language L(G) corresponding to a target regular grammar G implicitly specifies a lattice of finite state automata (FSA) which contains a FSA M G corresponding to G. The lattice is compactly represented as a version-space and M G is identified by searching the version-space using membership queries. We explore the problem of regular grammar inference in a setting where positive examples are provided intermittently. We provide an incremental version of the algorithm along with a set of sufficient conditions for its convergence.",
"neighbors": [
1560,
2537,
2695
],
"mask": "Validation"
},
{
"node_id": 2199,
"label": 1,
"text": "Title: Position Paper, Workshop on Evolutionary Computation with Variable Size Representation, ICGA, Fitness Causes Bloat in\nAbstract: We argue based upon the numbers of representations of given length, that increase in representation length is inherent in using a fixed evaluation function with a discrete but variable length representation. Two examples of this are analysed, including the use of Price's Theorem. Both examples confirm the tendency for solutions to grow in size is caused by fitness based selection.",
"neighbors": [
1184,
1784,
2133
],
"mask": "Train"
},
{
"node_id": 2200,
"label": 1,
"text": "Title: Adaptation in constant utility non-stationary environments \nAbstract: Environments that vary over time present a fundamental problem to adaptive systems. Although in the worst case there is no hope of effective adaptation, some forms environmental variability do provide adaptive opportunities. We consider a broad class of non-stationary environments, those which combine a variable result function with an invariant utility function, and demonstrate via simulation that an adaptive strategy employing both evolution and learning can tolerate a much higher rate of environmental variation than an evolution-only strategy. We suggest that in many cases where stability has previously been assumed, the constant utility non-stationary environment may in fact be a more powerful viewpoint.",
"neighbors": [
163,
1797,
1969,
2703
],
"mask": "Train"
},
{
"node_id": 2201,
"label": 2,
"text": "Title: Neural competitive maps for reactive and adaptive navigation \nAbstract: We have recently introduced a neural network for reactive obstacle avoidance based on a model of classical and operant conditioning. In this article we describe the success of this model when implemented on two real autonomous robots. Our results show the promise of self-organizing neural networks in the domain of intelligent robotics. ",
"neighbors": [
2233
],
"mask": "Train"
},
{
"node_id": 2202,
"label": 1,
"text": "Title: An Evolutionary Approach to Combinatorial Optimization Problems \nAbstract: The paper reports on the application of genetic algorithms, probabilistic search algorithms based on the model of organic evolution, to NP-complete combinatorial optimization problems. In particular, the subset sum, maximum cut, and minimum tardy task problems are considered. Except for the fitness function, no problem-specific changes of the genetic algorithm are required in order to achieve results of high quality even for the problem instances of size 100 used in the paper. For constrained problems, such as the subset sum and the minimum tardy task, the constraints are taken into account by incorporating a graded penalty term into the fitness function. Even for large instances of these highly multimodal optimization problems, an iterated application of the genetic algorithm is observed to find the global optimum within a number of runs. As the genetic algorithm samples only a tiny fraction of the search space, these results are quite encouraging. ",
"neighbors": [
163,
1303,
1571,
1980,
2638
],
"mask": "Train"
},
{
"node_id": 2203,
"label": 2,
"text": "Title: CuPit-2: Portable and Efficient High-Level Parallel Programming of Neural Networks for the Systems Analysis Modelling\nAbstract: CuPit-2 is a special-purpose programming language designed for expressing dynamic neural network learning algorithms. It provides most of the flexibility of general-purpose languages such as C or C ++ , but is more expressive. It allows writing much clearer and more elegant programs, in particular for algorithms that change the network topology dynamically (constructive algorithms, pruning algorithms). In contrast to other languages, CuPit-2 programs can be compiled into efficient code for parallel machines without any changes in the source program, thus providing an easy start for using parallel platforms. This article analyzes the circumstances under which the CuPit-2 approach is the most useful one, presents a description of most language constructs and reports performance results for CuPit-2 on symmetric multiprocessors (SMPs). It concludes that in many cases CuPit-2 is a good basis for neural learning algorithm research on small-scale parallel machines. ",
"neighbors": [
881,
1411,
2397,
2405
],
"mask": "Train"
},
{
"node_id": 2204,
"label": 1,
"text": "Title: University of Nevada Reno Design Strategies for Evolutionary Robotics \nAbstract: CuPit-2 is a special-purpose programming language designed for expressing dynamic neural network learning algorithms. It provides most of the flexibility of general-purpose languages such as C or C ++ , but is more expressive. It allows writing much clearer and more elegant programs, in particular for algorithms that change the network topology dynamically (constructive algorithms, pruning algorithms). In contrast to other languages, CuPit-2 programs can be compiled into efficient code for parallel machines without any changes in the source program, thus providing an easy start for using parallel platforms. This article analyzes the circumstances under which the CuPit-2 approach is the most useful one, presents a description of most language constructs and reports performance results for CuPit-2 on symmetric multiprocessors (SMPs). It concludes that in many cases CuPit-2 is a good basis for neural learning algorithm research on small-scale parallel machines. ",
"neighbors": [
163,
636,
846,
2173
],
"mask": "Validation"
},
{
"node_id": 2205,
"label": 1,
"text": "Title: A Genetic Local Search Approach to the Quadratic Assignment Problem \nAbstract: Augmenting genetic algorithms with local search heuristics is a promising approach to the solution of combinatorial optimization problems. In this paper, a genetic local search approach to the quadratic assignment problem (QAP) is presented. New genetic operators for realizing the approach are described, and its performance is tested on various QAP instances containing between 30 and 256 facilities/locations. The results indicate that the proposed algorithm is able to arrive at high quality solutions in a relatively short time limit: for the largest publicly known prob lem instance, a new best solution could be found.",
"neighbors": [
1799
],
"mask": "Validation"
},
{
"node_id": 2206,
"label": 1,
"text": "Title: Why Ants are Hard genetic programming, simulated annealing and hill climbing performance is shown not\nAbstract: The problem of programming an artificial ant to follow the Santa Fe trail is used as an example program search space. Analysis of shorter solutions shows they have many of the characteristics often ascribed to manually coded programs. Enumeration of a small fraction of the total search space and random sampling characterise it as rugged with many multiple plateaus split by deep valleys and many local and global optima. This suggests it is difficult for hill climbing algorithms. Analysis of the program search space in terms of fixed length schema suggests it is highly deceptive and that for the simplest solutions large building blocks must be assembled before they have above average fitness. In some cases we show solutions cannot be assembled using a fixed representation from small building blocks of above average fitness. These suggest the Ant problem is difficult for Genetic Algorithms. Random sampling of the program search space suggests on average the density of global optima changes only slowly with program size but the density of neutral networks linking points of the same fitness grows approximately linearly with program length. This is part of the cause of bloat. ",
"neighbors": [
1911,
1925,
1984,
2133,
2175,
2261,
2379
],
"mask": "Train"
},
{
"node_id": 2207,
"label": 4,
"text": "Title: Machine Learning Research: Four Current Directions \nAbstract: Machine Learning research has been making great progress is many directions. This article summarizes four of these directions and discusses some current open problems. The four directions are (a) improving classification accuracy by learning ensembles of classifiers, (b) methods for scaling up supervised learning algorithms, (c) reinforcement learning, and (d) learning complex stochastic models. ",
"neighbors": [
79,
1786
],
"mask": "Train"
},
{
"node_id": 2208,
"label": 3,
"text": "Title: Extensions of Fill's algorithm for perfect simulation \nAbstract: Fill's algorithm for perfect simulation for attractive finite state space models, unbiased for user impatience, is presented in terms of stochastic recursive sequences and extended in two ways. Repulsive discrete Markov random fields with two coding sets like the auto-Poisson distribution on a lattice with 4-neighbourhood can be treated as monotone systems if a particular partial ordering and quasi-maximal and quasi-minimal states are used. Fill's algorithm then applies directly. Combining Fill's rejection sampling with sandwiching leads to a version of the algorithm, which works for general discrete conditionally specified repulsive models. Extensions to other types of models are briefly discussed. ",
"neighbors": [
126,
1761,
2234,
2235,
2313
],
"mask": "Train"
},
{
"node_id": 2209,
"label": 4,
"text": "Title: PAC Adaptive Control of Linear Systems \nAbstract: We consider a special case of reinforcement learning where the environment can be described by a linear system. The states of the environment and the actions the agent can perform are represented by real vectors and the system dynamic is given by a linear equation with a stochastic component. The problem is equivalent to the so-called linear quadratic regulator problem studied in the optimal and adaptive control literature. We propose a learning algorithm for that problem and analyze it in a PAC learning framework. Unlike the algorithms in the adaptive control literature, our algorithm actively explores the environment to learn an accurate model of the system faster. We show that the control law produced by our algorithm has, with high probability, a value that is close to that of an optimal policy relative to the magnitude of the initial state of the system. The time taken by the algorithm is polynomial in the dimension n of the state-space and in the dimension r of the action-space when the ratio n=r is a constant.",
"neighbors": [
2689
],
"mask": "Train"
},
{
"node_id": 2210,
"label": 6,
"text": "Title: An Empirical Analysis of the Benefit of Decision Tree Size Biases as a Function of\nAbstract: The results reported here empirically show the benefit of decision tree size biases as a function of concept distribution. First, it is shown how concept distribution complexity (the number of internal nodes in the smallest decision tree consistent with the example space) affects the benefit of minimum size and maximum size decision tree biases. Second, a policy is described that defines what a learner should do given knowledge of the complexity of the distribution of concepts. Third, explanations for why the distribution of concepts seen in practice is amenable to the minimum size decision tree bias are given and evaluated empirically. ",
"neighbors": [
1808
],
"mask": "Train"
},
{
"node_id": 2211,
"label": 1,
"text": "Title: Collective Memory Search 1 Collective Memory Search: Exploiting an Information Center for Exploration \nAbstract: The results reported here empirically show the benefit of decision tree size biases as a function of concept distribution. First, it is shown how concept distribution complexity (the number of internal nodes in the smallest decision tree consistent with the example space) affects the benefit of minimum size and maximum size decision tree biases. Second, a policy is described that defines what a learner should do given knowledge of the complexity of the distribution of concepts. Third, explanations for why the distribution of concepts seen in practice is amenable to the minimum size decision tree bias are given and evaluated empirically. ",
"neighbors": [
854,
1232,
1971
],
"mask": "Validation"
},
{
"node_id": 2212,
"label": 2,
"text": "Title: ANALYSIS OF SOUND TEXTURES IN MUSICAL AND MACHINE SOUNDS BY MEANS OF HIGHER ORDER STATISTICAL FEATURES. \nAbstract: In this paper we describe a sound classification method, which seems to be applicable to a broad domain of stationary, non-musical sounds, such as machine noises and other man made non periodic sounds. The method is based on matching higher order spectra (HOS) of the acoustic signals and it generalizes our earlier results on classification of sustained musical sounds by higher order statistics. An efficient \"decorrelated matched filter\" implemetation is presented. The results show good sound classification statistics and a comparison to spectral matching methods is also discussed. ",
"neighbors": [
2121
],
"mask": "Validation"
},
{
"node_id": 2213,
"label": 5,
"text": "Title: Generating Declarative Language Bias for Top-Down ILP Algorithms \nAbstract: Many of today's algorithms for Inductive Logic Programming (ILP) put a heavy burden and responsibility on the user, because their declarative bias have to be defined in a rather low-level fashion. To address this issue, we developed a method for generating declarative language bias for top-down ILP systems from high-level declarations. The key feature of our approach is the distinction between a user level and an expert level of language bias declarations. The expert provides abstract meta-declarations, and the user declares the relationship between the meta-level and the given database to obtain a low-level declarative language bias. The suggested languages allow for compact and abstract specifications of the declarative language bias for top-down ILP systems using schemata. We verified several properties of the translation algorithm that generates schemata, and applied it successfully to a few chemical domains. As a consequence, we propose to use a two-level approach to generate declarative language bias.",
"neighbors": [
1259,
2126,
2253,
2290,
2539
],
"mask": "Train"
},
{
"node_id": 2214,
"label": 2,
"text": "Title: Behavior Near Zero of the Distribution of GCV Smoothing Parameter Estimates 1 \nAbstract: Many of today's algorithms for Inductive Logic Programming (ILP) put a heavy burden and responsibility on the user, because their declarative bias have to be defined in a rather low-level fashion. To address this issue, we developed a method for generating declarative language bias for top-down ILP systems from high-level declarations. The key feature of our approach is the distinction between a user level and an expert level of language bias declarations. The expert provides abstract meta-declarations, and the user declares the relationship between the meta-level and the given database to obtain a low-level declarative language bias. The suggested languages allow for compact and abstract specifications of the declarative language bias for top-down ILP systems using schemata. We verified several properties of the translation algorithm that generates schemata, and applied it successfully to a few chemical domains. As a consequence, we propose to use a two-level approach to generate declarative language bias.",
"neighbors": [
420,
2223
],
"mask": "Test"
},
{
"node_id": 2215,
"label": 0,
"text": "Title: Learning Approximate Control Rules Of High Utility \nAbstract: One of the difficult problems in the area of explanation based learning is the utility problem; learning too many rules of low utility can lead to swamping, or degradation of performance. This paper introduces two new techniques for improving the utility of learned rules. The first technique is to combine EBL with inductive learning techniques to learn a better set of control rules; the second technique is to use these inductive techniques to learn approximate control rules. The two techniques are synthesized in an algorithm called approximating abductive explanation based learning (AxA-EBL). AxA-EBL is shown to improve substantially over standard EBL in several domains.",
"neighbors": [
344,
551,
675,
1877,
2057,
2650
],
"mask": "Train"
},
{
"node_id": 2216,
"label": 1,
"text": "Title: Hybridized Crossover-Based Search Techniques for Program Discovery \nAbstract: In this paper we address the problem of program discovery as defined by Genetic Programming [10]. We have two major results: First, by combining a hierarchical crossover operator with two traditional single point search algorithms: Simulated Annealing and Stochastic Iterated Hill Climbing, we have solved some problems with fewer fitness evaluations and a greater probability of a success than Genetic Programming. Second, we have managed to enhance Genetic Programming by hybridizing it with the simple scheme of hill climbing from a few individuals, at a fixed interval of generations. The new hill climbing component has two options for generating candidate solutions: mutation or crossover. When it uses crossover, mates are either randomly created, randomly drawn from the population at large, or drawn from a pool of fittest individuals.",
"neighbors": [
2361,
2688,
2705
],
"mask": "Train"
},
{
"node_id": 2217,
"label": 5,
"text": "Title: Application of Clausal Discovery to Temporal Databases \nAbstract: Most of KDD applications consider databases as static objects, and however many databases are inherently temporal, i.e., they store the evolution of each object with the passage of time. Thus, regularities about the dynamics of these databases cannot be discovered as the current state might depend in some way on the previous states. To this end, a pre-processing of data is needed aimed at extracting relationships intimately connected to the temporal nature of data that will be make available to the discovery algorithm. The predicate logic language of ILP methods together with the recent advances as to ef ficiency makes them adequate for this task.",
"neighbors": [
1007,
2282
],
"mask": "Test"
},
{
"node_id": 2218,
"label": 2,
"text": "Title: L 0 |The First Four Years Abstract A summary of the progress and plans of\nAbstract: Most of KDD applications consider databases as static objects, and however many databases are inherently temporal, i.e., they store the evolution of each object with the passage of time. Thus, regularities about the dynamics of these databases cannot be discovered as the current state might depend in some way on the previous states. To this end, a pre-processing of data is needed aimed at extracting relationships intimately connected to the temporal nature of data that will be make available to the discovery algorithm. The predicate logic language of ILP methods together with the recent advances as to ef ficiency makes them adequate for this task.",
"neighbors": [
2021,
2049,
2337
],
"mask": "Train"
},
{
"node_id": 2219,
"label": 3,
"text": "Title: Exponential Convergence of Langevin Diffusions and Their Discrete Approximations \nAbstract: In this paper we consider a continous time method of approximating a given distribution using the Langevin diffusion dL t = dW t + 1 2 r log (L t )dt: We find conditions under which this diffusion converges exponentially quickly to or does not: in one dimension, these are essentially that for distributions with exponential tails of the form (x) / exp(fljxj fi ), 0 < fi < 1, exponential convergence occurs if and only if fi 1. We then consider conditions under which the discrete approximations to the diffusion converge. We first show that even when the diffusion itself converges, naive discretisations need not do so. We then consider a \"Metropolis-adjusted\" version of the algorithm, and find conditions under which this also converges at an exponential rate: perhaps surprisingly, even the Metropolised version need not converge exponentially fast even if the diffusion does. We briefly discuss a truncated form of the algorithm which, in practice, should avoid the difficulties of the other forms. ",
"neighbors": [
1977,
2008,
2022,
2153
],
"mask": "Validation"
},
{
"node_id": 2220,
"label": 1,
"text": "Title: The Automatic Programming of Agents that Learn Mental Models and Create Simple Plans of Action \nAbstract: An essential component of an intelligent agent is the ability to notice, encode, store, and utilize information about its environment. Traditional approaches to program induction have focused on evolving functional or reactive programs. This paper presents MAPMAKER, an approach to the automatic generation of agents that discover information about their environment, encode this information for later use, and create simple plans utilizing the stored mental models. In this approach, agents are multipart computer programs that communicate through a shared memory. Both the programs and the representation scheme are evolved using genetic programming. An illustrative problem of 'gold' collection is used to demonstrate the approach in which one part of a program makes a map of the world and stores it in memory, and the other part uses this map to find the gold The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans. 1. Introduction ",
"neighbors": [
129,
290,
1409,
1940,
1950,
1958,
2139,
2226,
2252,
2478,
2563,
2600
],
"mask": "Test"
},
{
"node_id": 2221,
"label": 3,
"text": "Title: Reasoning about Time and Probability \nAbstract: An essential component of an intelligent agent is the ability to notice, encode, store, and utilize information about its environment. Traditional approaches to program induction have focused on evolving functional or reactive programs. This paper presents MAPMAKER, an approach to the automatic generation of agents that discover information about their environment, encode this information for later use, and create simple plans utilizing the stored mental models. In this approach, agents are multipart computer programs that communicate through a shared memory. Both the programs and the representation scheme are evolved using genetic programming. An illustrative problem of 'gold' collection is used to demonstrate the approach in which one part of a program makes a map of the world and stores it in memory, and the other part uses this map to find the gold The results indicate that the approach can evolve programs that store simple representations of their environments and use these representations to produce simple plans. 1. Introduction ",
"neighbors": [
566,
1459,
1527,
1757,
2404
],
"mask": "Test"
},
{
"node_id": 2222,
"label": 4,
"text": "Title: Multi-Time Models for Reinforcement Learning \nAbstract: Reinforcement learning can be used not only to predict rewards, but also to predict states, i.e. to learn a model of the world's dynamics. Models can be defined at different levels of temporal abstraction. Multi-time models are models that focus on predicting what will happen, rather than when a certain event will take place. Based on multi-time models, we can define abstract actions, which enable planning (presumably in a more efficient way) at various levels of abstraction.",
"neighbors": [
1954,
2150
],
"mask": "Test"
},
{
"node_id": 2223,
"label": 2,
"text": "Title: Smoothing Spline Models With Correlated Random Errors 1 \nAbstract: Reinforcement learning can be used not only to predict rewards, but also to predict states, i.e. to learn a model of the world's dynamics. Models can be defined at different levels of temporal abstraction. Multi-time models are models that focus on predicting what will happen, rather than when a certain event will take place. Based on multi-time models, we can define abstract actions, which enable planning (presumably in a more efficient way) at various levels of abstraction.",
"neighbors": [
190,
510,
519,
2214
],
"mask": "Train"
},
{
"node_id": 2224,
"label": 2,
"text": "Title: Design of Optimization Criteria for Multiple Sequence Alignment \nAbstract: DIMACS Technical Report 96-53 January 1997 ",
"neighbors": [
1827
],
"mask": "Train"
},
{
"node_id": 2225,
"label": 6,
"text": "Title: Error-Correcting Output Coding Corrects Bias and Variance \nAbstract: Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify data points into one of k 2 classes. This paper presents an investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms. It shows that the ECOC method| like any form of voting or committee|can reduce the variance of the learning algorithm. Furthermore|unlike methods that simply combine multiple runs of the same learning algorithm|ECOC can correct for errors caused by the bias of the learning algorithm. Experiments show that this bias correction ability relies on the non-local be havior of C4.5.",
"neighbors": [
256,
1608,
1732,
2423
],
"mask": "Validation"
},
{
"node_id": 2226,
"label": 1,
"text": "Title: Simultaneous Evolution of Programs and their Control Structures Simultaneous Evolution of Programs and their Control\nAbstract: Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify data points into one of k 2 classes. This paper presents an investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms. It shows that the ECOC method| like any form of voting or committee|can reduce the variance of the learning algorithm. Furthermore|unlike methods that simply combine multiple runs of the same learning algorithm|ECOC can correct for errors caused by the bias of the learning algorithm. Experiments show that this bias correction ability relies on the non-local be havior of C4.5.",
"neighbors": [
1950,
1958,
2139,
2220,
2478
],
"mask": "Validation"
},
{
"node_id": 2227,
"label": 2,
"text": "Title: The EM Algorithm for Mixtures of Factor Analyzers \nAbstract: Technical Report CRG-TR-96-1 May 21, 1996 (revised Feb 27, 1997) Abstract Factor analysis, a statistical method for modeling the covariance structure of high dimensional data using a small number of latent variables, can be extended by allowing different local factor models in different regions of the input space. This results in a model which concurrently performs clustering and dimensionality reduction, and can be thought of as a reduced dimension mixture of Gaussians. We present an exact Expectation-Maximization algorithm for fitting the parameters of this mixture of factor analyzers.",
"neighbors": [
667,
1923,
1974,
2072,
2390
],
"mask": "Train"
},
{
"node_id": 2228,
"label": 2,
"text": "Title: Modeling dynamic receptive field changes in primary visual cortex using inhibitory learning \nAbstract: The position, size, and shape of the visual receptive field (RF) of some primary visual cortical neurons change dynamically, in response to artificial scotoma conditioning in cats (Pettet & Gilbert, 1992) and to retinal lesions in cats and monkeys (Darian-Smith & Gilbert, 1995). The \"EXIN\" learning rules (Marshall, 1995) are used to model dynamic RF changes. The EXIN model is compared with an adaptation model (Xing & Gerstein, 1994) and the LISSOM model (Sirosh & Miikkulainen, 1994; Sirosh et al., 1996). To emphasize the role of the lateral inhibitory learning rules, the EXIN and the LISSOM simulations were done with only lateral inhibitory learning. During scotoma conditioning, the EXIN model without feedforward learning produces centrifugal expansion of RFs initially inside the scotoma region, accompanied by increased responsiveness, without changes in spontaneous activation. The EXIN model without feedforward learning is more consistent with the neurophysiological data than are the adaptation model and the LISSOM model. The comparison between the EXIN and the LISSOM models suggests experiments to determine the role of feedforward excitatory and lateral inhibitory learning in producing dynamic RF changes during scotoma conditioning. ",
"neighbors": [
127,
1093,
1094,
2068,
2085
],
"mask": "Train"
},
{
"node_id": 2229,
"label": 5,
"text": "Title: Bottom-up induction of logic programs with more than one recursive clause \nAbstract: In this paper we present a bottom-up algorithm called MRI to induce logic programs from their examples. This method can induce programs with a base clause and more than one recursive clause from a very small number of examples. MRI is based on the analysis of saturations of examples. It first generates a path structure, which is an expression of a stream of values processed by predicates. The concept of path structure was originally introduced by Identam-Almquist and used in TIM [ Idestam-Almquist, 1996 ] . In this paper, we introduce the concepts of extension and difference of path structure. Recursive clauses can be expressed as a difference between a path structure and its extension. The paper presents the algorithm and shows experimental results obtained by the method.",
"neighbors": [
344,
1428,
2663
],
"mask": "Train"
},
{
"node_id": 2230,
"label": 2,
"text": "Title: In Advances in Neural Information Processing Systems 8 Gaussian Processes for Regression \nAbstract: The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and averaging (via Hybrid Monte Carlo) over hyperparameters have been tested on a number of challenging problems and have produced excellent results. ",
"neighbors": [
157,
608,
611,
2095
],
"mask": "Train"
},
{
"node_id": 2231,
"label": 0,
"text": "Title: Explaining Anomalies as a Basis for Knowledge Base Refinement \nAbstract: Explanations play a key role in operationalization-based anomaly detection techniques. In this paper we show that their role is not limited to anomaly detection; they can also be used for guiding automated knowledge base refinement. We introduce a refinement procedure which takes: (i) a small number of refinement rules (rather than test cases), and (ii) explanations constructed in an attempt to reveal the cause (or causes) for inconsistencies detected during the verification process, and returns rule revisions aiming to recover the consistency of the KB-theory. Inconsistencies caused by more than one anomaly are handled at the same time, which improves the efficiency of the refinement process. ",
"neighbors": [
136,
2635
],
"mask": "Train"
},
{
"node_id": 2232,
"label": 1,
"text": "Title: Facing The Facts: Necessary Requirements For The Artificial Evolution of Complex Behaviour \nAbstract: This paper sets out a conceptual framework for the open-ended artificial evolution of complex behaviour in autonomous agents. If recurrent dynamical neural networks (or similar) are used as phenotypes, then a Genetic Algorithm that employs variable length genotypes, such as Inman Harvey's SAGA, is capable of evolving arbitrary levels of be-havioural complexity. Furthermore, with simple restrictions on the encoding scheme that governs how genotypes develop into phenotypes, it may be guaranteed that if an increase in fitness requires an increase in be-havioural complexity, then it will evolve. In order for this process to be practicable as a design alternative, however, the time periods involved must be acceptable. The final part of this paper looks at general ways in which the encoding scheme may be modified to speed up the process. Experiments are reported in which different categories of scheme were tested against each other, and conclusions are offered as to the most promising type of encoding scheme for a vi able open-ended Evolutionary Robotics.",
"neighbors": [
163,
411,
2058
],
"mask": "Validation"
},
{
"node_id": 2233,
"label": 2,
"text": "Title: An unsupervised neural network for low-level control of a wheeled mobile robot: noise resistance, stability,\nAbstract: We have recently introduced a neural network mobile robot controller (NETMORC) that autonomously learns the forward and inverse odometry of a differential drive robot through an unsupervised learning-by-doing cycle. After an initial learning phase, the controller can move the robot to an arbitrary stationary or moving target while compensating for noise and other forms of disturbance, such as wheel slippage or changes in the robot's plant. In addition, the forward odometric map allows the robot to reach targets in the absence of sensory feedback. The controller is also able to adapt in response to long-term changes in the robot's plant, such as a change in the radius of the wheels. In this article we review the NETMORC architecture and describe its simplified algorithmic implementation, we present new, quantitative results on NETMORC's performance and adaptability under noise-free and noisy conditions, we compare NETMORC's performance on a trajectory-following task with the performance of an alternative controller, and we describe preliminary results on the hardware implementation of NETMORC with the mobile robot ROBUTER. ",
"neighbors": [
636,
703,
2201
],
"mask": "Validation"
},
{
"node_id": 2234,
"label": 3,
"text": "Title: Perfect Sampling of Harris Recurrent Markov Chains \nAbstract: We develop an algorithm for simulating \"perfect\" random samples from the invariant measure of a Harris recurrent Markov chain. The method uses backward coupling of embedded regeneration times, and works most effectively for finite chains and for stochastically monotone chains even on continuous spaces, where paths may be sandwiched below \"upper\" and \"lower\" processes. Examples show that more naive approaches to constructing such bounding processes may be considerably biased, but that the algorithm can be simplified in certain cases to make it easier to run. We give explicit analytic bounds on the backward coupling times in the stochastically monotone case. ",
"neighbors": [
2208
],
"mask": "Train"
},
{
"node_id": 2235,
"label": 3,
"text": "Title: EXACT SIMULATION USING MARKOV CHAINS \nAbstract: This reports gives a review of the new exact simulation algorithms using Markov chains. The first part covers the discrete case. We consider two different algorithms, Propp and Wilsons coupling from the past (CFTP) technique and Fills rejection sampler. The algorithms are tested on the Ising model, with and without an external field. The second part covers continuous state spaces. We present several algorithms developed by Murdoch and Green, all based on coupling from the past. We discuss the applicability of these methods on a Bayesian analysis problem of surgical failure rates. ",
"neighbors": [
2208
],
"mask": "Train"
},
{
"node_id": 2236,
"label": 2,
"text": "Title: Robust Convergence of Two-Stage Nonlinear Algorithms for Identification in H 1 \nAbstract: ",
"neighbors": [
2262,
2435,
2542
],
"mask": "Train"
},
{
"node_id": 2237,
"label": 1,
"text": "Title: Specialization under Social Conditions in Shared Environments \nAbstract: Specialist and generalist behaviors in populations of artificial neural networks are studied. A genetic algorithm is used to simulate evolution processes, and thereby to develop neural network control systems that exhibit specialist or generalist behaviors according to the fitness formula. With evolvable fitness formulae the evaluation measure is let free to evolve, and we obtain a co-evolution of the expressed behavior and the individual evolvable fitness formula. The use of evolvable fitness formulae lets us work in a dynamic fitness landscape, opposed to most work, that traditionally applies to static fitness landscapes, only. The role of competition in specialization is studied by letting the individuals live under social conditions in the same, shared environment and directly compete with each other. We find, that competition can act to provide population diversification in populations of organisms with individual evolvable fitness formulae.",
"neighbors": [
2170,
2274
],
"mask": "Test"
},
{
"node_id": 2238,
"label": 1,
"text": "Title: Where Does the Good Stuff Go, and Why? How contextual semantics influences program structure in\nAbstract: Using deliberately designed primitive sets, we investigate the relationship between context-based expression mechanisms and the size, height and density of genetic program trees during the evolutionary process. We show that contextual semantics influence the composition, location and flows of operative code in a program. In detail we analyze these dynamics and discuss the impact of our findings on micro-level descriptions of genetic programming.",
"neighbors": [
2271
],
"mask": "Test"
},
{
"node_id": 2239,
"label": 2,
"text": "Title: Predicting Conditional Probability Distributions: A Connectionist Approach \nAbstract: Most traditional prediction techniques deliver the mean of the probability distribution (a single point). For multimodal processes, instead of predicting the mean of the probability distribution, it is important to predict the full distribution. This article presents a new connectionist method to predict the conditional probability distribution in response to an input. The main idea is to transform the problem from a regression to a classification problem. The conditional probability distribution network can perform both direct predictions and iterated predictions, a task which is specific for time series problems. We compare our method to fuzzy logic and discuss important differences, and also demonstrate the architecture on two time series. The first is the benchmark laser series used in the Santa Fe competition, a deterministic chaotic system. The second is a time series from a Markov process which exhibits structure on two time scales. The network produces multimodal predictions for this series. We compare the predictions of the network with a nearest-neighbor predictor and find that the conditional probability network is more than twice as likely a model.",
"neighbors": [
587,
1366,
2413,
2414,
2507,
2513
],
"mask": "Train"
},
{
"node_id": 2240,
"label": 5,
"text": "Title: Stable ILP Exploring the Added Expressivity of Negation in the Background Knowledge \nAbstract: We present stable ILP, a cross-disciplinary concept straddling machine learning and nonmonotonic reasoning. Stable models give meaning to logic programs containing negative assertions. In stable ILP, we employ stable models to represent the current state specified by (possibly) negative EDB and IDB rules. The state then serves as the background knowledge for a top-down ILP learner. We present a framework and implementation (system INDED) of one realization of stable ILP. ",
"neighbors": [
2466
],
"mask": "Validation"
},
{
"node_id": 2241,
"label": 3,
"text": "Title: On Decision-Theoretic Foundations for Defaults \nAbstract: In recent years, considerable effort has gone into understanding default reasoning. Most of this effort concentrated on the question of entailment, i.e., what conclusions are warranted by a knowledge-base of defaults. Surprisingly, few works formally examine the general role of defaults. We argue that an examination of this role is necessary in order to understand defaults, and suggest a concrete role for defaults: Defaults simplify our decision-making process, allowing us to make fast, approximately optimal decisions by ignoring certain possible states. In order to formalize this approach, we examine decision making in the framework of decision theory. We use probability and utility to measure the impact of possible states on the decision-making process. We accept a default if it ignores states with small impact according to our measure. We motivate our choice of measures and show that the resulting formalization of defaults satisfies desired properties of defaults, namely cumulative reasoning. Finally, we compare our approach with Poole's decision-theoretic defaults, and show how both can be combined to form an attractive framework for reasoning about decisions. We make numerous assumptions each day: the car will start, the road will not be blocked, there will be heavy traffic at 5pm, etc. Many of these assumptions are defeasible; we are willing to retract them given sufficient evidence. Humans naturally state defaults and draw conclusions from default information. Hence, defaults seem to play an important part in common-sense reasoning. To use such statements, however, we need a formal understanding of what defaults represent and what conclusions they admit. The problem of default entailment|roughly, what conclusions we should draw from a knowledge-base of defaults|has attracted a great deal of attention. Many researchers attempt to find \"context-free\" patterns of default reasoning (e.g., [ Kraus et al., 1990 ] ). As this research shows, much can be done in this approach. We claim, however, that the utility of this approach is limited; to gain a better understanding of defaults, we need to understand in what situations we should be willing to state a default. Our main thesis is that an investigation of defaults should elaborate their role in the behavior of the reasoning agent. This role should allow us to examine when a default is appropriate in terms of its implications on the agent's overall performance. In this paper, we suggest a particular role for defaults and show how this role allows us to provide a semantics for defaults. Of course, we do not claim that this is the only role defaults can play. In many applications, the end result of reasoning is a choice of actions. Usually, this choice is not optimal; there is too much uncertainty about the state of the world and the effects of actions to allow for an examination of all possibilities. We suggest that one role of defaults lies in simplifying our decision-making process by stating assumptions that reduce the space of examined possibilities. More precisely, we suggest that a default ' ! is a license to ignore : situations when our knowledge amounts to '. One particular suggestion that can be understood in this light is *-semantics [ Pearl, 1989 ] . In *-semantics, we accept a default ' ! if given the knowledge ', the probability of : is very small. This small probability of the :' states gives us a license to ignore them. 
Although probability plays an important part in our decisions, we claim that we should also examine the utility of our actions. For example, while most people think that it is highly unlikely that they will die next year, they also believe that they should not accept this as a default assumption in the context of a decision as to whether or not to buy life insurance. In this context, the stakes are too high to ignore this outcome, even though it is unlikely. We suggest that the license to ignore a set should be given based on its impact on our decision. To paraphrase this view, we should accept Bird → Fly if assuming that the bird flies cannot get us into too much trouble. To formalize our intuitions we examine decision-making in the framework of decision theory [Luce and Raiffa, 1957]. Decision theory represents a decision problem using several components: a set of possible states, a probability measure over these sets, and a utility function that assigns to each action and state a numerical value. To appear in IJCAI'95.",
"neighbors": [
1994
],
"mask": "Train"
},
{
"node_id": 2242,
"label": 3,
"text": "Title: Density estimation by wavelet thresholding \nAbstract: Density estimation is a commonly used test case for non-parametric estimation methods. We explore the asymptotic properties of estimators based on thresholding of empirical wavelet coefficients. Minimax rates of convergence are studied over a large range of Besov function classes B s;p;q and for a range of global L 0 p error measures, 1 p 0 < 1. A single wavelet threshold estimator is asymptotically minimax within logarithmic terms simultaneously over a range of spaces and error measures. In particular, when p 0 > p, some form of non-linearity is essential, since the minimax linear estimators are suboptimal by polynomial powers of n. A second approach, using an approximation of a Gaussian white noise model in a Mallows metric, is used Acknowledgements: We thank Alexandr Sakhanenko for helpful discussions and references to his work on Berry Esseen theorems used in Section 5. This work was supported in part by NSF DMS 92-09130. The second author would like to thank Universite de ",
"neighbors": [
1910,
2458,
2661
],
"mask": "Train"
},
{
"node_id": 2243,
"label": 2,
"text": "Title: Using Precepts to Augment Training Set Learning an input whose value is don't-care in some\nAbstract: are used in turn to approximate A. Empirical studies show that good results can be achieved with TSL [8, 11]. However, TSL has several drawbacks. Training set learners (e.g., backpropagation) are typically slow as they may require many passes over the training set. Also, there is no guarantee that, given an arbitrary training set, the system will find enough good critical features to get a reasonable approximation of A. Moreover, the number of features to be searched is exponential in the number of inputs, and TSL becomes computationally expensive [1]. Finally, the scarcity of interesting positive theoretical results suggests the difficulty of learning without sufficient a priori knowledge. The goal of learning systems is to generalize. Generalization is commonly based on the set of critical features the system has available. Training set learners typically extract critical features from a random set of examples. While this approach is attractive, it suffers from the exponential growth of the number of features to be searched. We propose to extend it by endowing the system with some a priori knowledge, in the form of precepts. Advantages of the augmented system are speedup, improved generalization, and greater parsimony. This paper presents a precept-driven learning algorithm. Its main features include: 1) distributed implementation, 2) bounded learning and execution times, and 3) ability to handle both correct and incorrect precepts. Results of simulations on real-world data demonstrate promise. This paper presents precept-driven learning (PDL). PDL is intended to overcome some of TSL's weaknesses. In PDL, the training set is augmented by a small set of precepts. A pair p = (i, o) in I O is called an example. A precept is an example in which some of the i-entries (inputs) are set to the special value don't-care. An input whose value is not don't-care is said to be asserted. If i has no effect on the value of the output. The use of the special value don't-care is therefore as a shorthand. A pair containing don't-care inputs represents as many examples as the product of the sizes of the input domains of its don't-care inputs. 1. Introduction ",
"neighbors": [
831,
2244,
2245
],
"mask": "Train"
},
{
"node_id": 2244,
"label": 0,
"text": "Title: AN INCREMENTAL LEARNING MODEL FOR COMMONSENSE REASONING \nAbstract: ",
"neighbors": [
2243
],
"mask": "Train"
},
{
"node_id": 2245,
"label": 0,
"text": "Title: AN EFFICIENT METRIC FOR HETEROGENEOUS INDUCTIVE LEARNING APPLICATIONS IN THE ATTRIBUTE-VALUE LANGUAGE 1 \nAbstract: Many inductive learning problems can be expressed in the classical attribute-value language. In order to learn and to generalize, learning systems often rely on some measure of similarity between their current knowledge base and new information. The attribute-value language defines a heterogeneous multidimensional input space, where some attributes are nominal and others linear. Defining similarity, or proximity, of two points in such input spaces is non trivial. We discuss two representative homogeneous metrics and show examples of why they are limited to their own domains. We then address the issues raised by the design of a heterogeneous metric for inductive learning systems. In particular, we discuss the need for normalization and the impact of don't-care values. We propose a heterogeneous metric and evaluate it empirically on a simplified version of ILA. ",
"neighbors": [
87,
2243,
2471
],
"mask": "Train"
},
{
"node_id": 2246,
"label": 6,
"text": "Title: Learning to model sequences generated by switching distributions \nAbstract: We study efficient algorithms for solving the following problem, which we call the switching distributions learning problem. A sequence S = 1 2 : : : n , over a finite alphabet S is generated in the following way. The sequence is a concatenation of K runs, each of which is a consecutive subsequence. Each run is generated by independent random draws from a distribution ~p i over S, where ~p i is an element in a set of distributions f~p 1 ; : : : ; ~p N g. The learning algorithm is given this sequence and its goal is to find approximations of the distributions ~p 1 ; : : : ; ~p N , and give an approximate segmentation of the sequence into its constituting runs. We give an efficient algorithm for solving this problem and show conditions under which the algorithm is guaranteed to work with high probability.",
"neighbors": [
2356,
2475
],
"mask": "Train"
},
{
"node_id": 2247,
"label": 2,
"text": "Title: A Connectionist Architecture with Inherent Systematicity \nAbstract: For connectionist networks to be adequate for higher level cognitive activities such as natural language interpretation, they have to generalize in a way that is appropriate given the regularities of the domain. Fodor and Pylyshyn (1988) identified an important pattern of regularities in such domains, which they called systematicity. Several attempts have been made to show that connectionist networks can generalize in accordance with these regularities, but not to the satisfaction of the critics. To address this challenge, this paper starts by establishing the implications of systematicity for connectionist solutions to the variable binding problem. Based on the work of Hadley (1994a), we argue that the network must generalize information it learns in one variable binding to other variable bindings. We then show that temporal synchrony variable binding (Shas-tri and Ajjanagadde, 1993) inherently generalizes in this way. Thereby we show that temporal synchrony variable binding is a connectionist architecture that accounts for systematicity. This is an important step in showing that connectionism can be an adequate architecture for higher level cognition. ",
"neighbors": [
2263,
2701
],
"mask": "Train"
},
{
"node_id": 2248,
"label": 1,
"text": "Title: Heuristic for Improved Genetic Bin Packing \nAbstract: University of Tulsa Technical Report UTULSA-MCS-93-8, May, 1993. Submitted to Information Processing Letters, May, 1993. ",
"neighbors": [
145,
163,
2296
],
"mask": "Train"
},
{
"node_id": 2249,
"label": 1,
"text": "Title: Using a Distance Metric on Genetic Programs to Understand Genetic Operators \nAbstract: I describe a distance metric called \"edit\" distance which quantifies the syntactic difference between two genetic programs. In the context of one specific problem, the 6 bit multiplexor, I use the metric to analyze the amount of new material introduced by different crossover operators, the difference among the best individuals of a population and the difference among the best individuals and the rest of the population. The relationships between these data and run performance are imprecise but they are sufficiently interesting to encourage encourage further investigation into the use of edit distance.",
"neighbors": [
2175,
2271
],
"mask": "Train"
},
{
"node_id": 2250,
"label": 1,
"text": "Title: The Impact of External Dependency in Genetic Programming Primitives \nAbstract: Both control and data dependencies among primitives impact the behavioural consistency of subprograms in genetic programming solutions. Behavioural consistency in turn impacts the ability of genetic programming to identify and promote appropriate subprograms. We present the results of modelling dependency through a parameterized problem in which a subprogram exhibits internal and external dependency levels that change as the subprogram is successively combined into larger subsolutions. We find that the key difference between non-existent and \"full\" external dependency is a longer time to solution identification and a lower likelihood of success as shown by increased difficulty in identifying and promoting correct subprograms. ",
"neighbors": [
1696,
1940,
2175,
2271
],
"mask": "Train"
},
{
"node_id": 2251,
"label": 1,
"text": "Title: A PARALLEL ISLAND MODEL GENETIC ALGORITHM FOR THE MULTIPROCESSOR SCHEDULING PROBLEM \nAbstract: In this paper we compare the performance of a serial and a parallel island model Genetic Algorithm for solving the Multiprocessor Scheduling Problem. We show results using fixed and scaled problems both using and not using migration. We have found that in addition to providing a speedup through the use of parallel processing, the parallel island model GA with migration finds better quality solutions than the serial GA. ",
"neighbors": [
145,
163,
2296
],
"mask": "Test"
},
{
"node_id": 2252,
"label": 1,
"text": "Title: Neural Programming and an Internal Reinforcement Policy \nAbstract: An important reason for the continued popularity of Artificial Neural Networks (ANNs) in the machine learning community is that the gradient-descent backpropagation procedure gives ANNs a locally optimal change procedure and, in addition, a framework for understanding the ANN learning performance. Genetic programming (GP) is also a successful evolutionary learning technique that provides powerful parameterized primitive constructs. Unlike ANNs, though, GP does not have such a principled procedure for changing parts of the learned system based on its current performance. This paper introduces Neural Programming, a connectionist representation for evolving programs that maintains the benefits of GP. The connectionist model of Neural Programming allows for a regression credit-blame procedure in an evolutionary learning system. We describe a general method for an informed feedback mechanism for Neural Programming, Internal Reinforcement. We introduce an Internal Reinforcement procedure and demon strate its use through an illustrative experiment.",
"neighbors": [
2220,
2271,
2277
],
"mask": "Train"
},
{
"node_id": 2253,
"label": 5,
"text": "Title: Top-down Induction of Logical Decision Trees \nAbstract: A first order framework for top-down induction of logical decision trees is introduced. Logical decision trees are more expressive than the flat logic programs typically induced by empirical inductive logic programming systems because logical decision trees introduce invented predicates and mix existential and universal quantification of variables. An implementation of the framework, the Tilde system, is presented and empirically evaluated.",
"neighbors": [
2213,
2591
],
"mask": "Train"
},
{
"node_id": 2254,
"label": 1,
"text": "Title: An Indexed Bibliography of Genetic Algorithms: Years 1957-1993 compiled by \nAbstract: A first order framework for top-down induction of logical decision trees is introduced. Logical decision trees are more expressive than the flat logic programs typically induced by empirical inductive logic programming systems because logical decision trees introduce invented predicates and mix existential and universal quantification of variables. An implementation of the framework, the Tilde system, is presented and empirically evaluated.",
"neighbors": [
2039,
2347
],
"mask": "Train"
},
{
"node_id": 2255,
"label": 1,
"text": "Title: Evolutionary Learning of the Crossover Operator \nAbstract: 1 Abstract ",
"neighbors": [
427,
2412
],
"mask": "Train"
},
{
"node_id": 2256,
"label": 2,
"text": "Title: Improved Center Point Selection for Probabilistic Neural Networks \nAbstract: Probabilistic Neural Networks (PNN) typically learn more quickly than many neural network models and have had success on a variety of applications. However, in their basic form, they tend to have a large number of hidden nodes. One common solution to this problem is to keep only a randomly-selected subset of the original training data in building the network. This paper presents an algorithm called the Reduced Probabilistic Neural Network (RPNN) that seeks to choose a better-than-random subset of the available instances to use as center points of nodes in the network. The algorithm tends to retain non-noisy border points while removing nodes with instances in regions of the input space that are highly homogeneous. In experiments on 22 datasets, the RPNN had better average generalization accuracy than two other PNN models, while requiring an average of less than one-third the number of nodes. ",
"neighbors": [
2597
],
"mask": "Train"
},
{
"node_id": 2257,
"label": 1,
"text": "Title: Real-time Interactive Neuro-evolution \nAbstract: In standard neuro-evolution, a population of networks is evolved in the task, and the network that best solves the task is found. This network is then fixed and used to solve future instances of the problem. Networks evolved in this way do not handle real-time interaction very well. It is hard to evolve a solution ahead of time that can cope effectively with all the possible environments that might arise in the future and with all the possible ways someone may interact with it. This paper proposes evolving feedforward neural networks online to create agents that improve their performance through real-time interaction. This approach is demonstrated in a game world where neural-network-controlled individuals play against humans. Through evolution, these individuals learn to react to varying opponents while appropriately taking into account conflicting goals. After initial evaluation offline, the population is allowed to evolve online, and its performance improves considerably. The population not only adapts to novel situations brought about by changing strategies in the opponent and the game layout, but it also improves its performance in situations that it has already seen in offline training. This paper will describe an implementation of online evolution and shows that it is a practical method that exceeds the performance of offline evolution alone. ",
"neighbors": [
22,
247,
1767,
1768,
2444
],
"mask": "Test"
},
{
"node_id": 2258,
"label": 2,
"text": "Title: LU TP 93-24 Predicting System Loads with Artificial Neural \nAbstract: Networks Methods and Results from Abstract: We devise a feed-forward Artificial Neural Network (ANN) procedure for predicting utility loads and present the resulting predictions for two test problems given by \"The Great Energy Predictor Shootout The First Building Data Analysis and Prediction Competition\" [1]. Key ingredients in our approach are a method (ffi -test) for determining relevant inputs and the Multilayer Perceptron. These methods are briefly reviewed together with comments on alternative schemes like fitting to polynomials and the use of recurrent networks. ",
"neighbors": [
427,
1887
],
"mask": "Train"
},
{
"node_id": 2259,
"label": 1,
"text": "Title: An Experimental Analysis of Schema Creation, Propagation and Disruption in Genetic Programming \nAbstract: In this paper we first review the main results in the theory of schemata in Genetic Programming (GP) and summarise a new GP schema theory which is based on a new definition of schema. Then we study the creation, propagation and disruption of this new form of schemata in real runs, for standard crossover, one-point crossover and selection only. Finally, we discuss these results in the light our GP schema theorem. ",
"neighbors": [
163,
1257,
1959,
2175,
2261,
2271
],
"mask": "Train"
},
{
"node_id": 2260,
"label": 2,
"text": "Title: Radial Basis Functions for Process Control \nAbstract: Radial basis function (RBFs) neural networks provide an attractive method for high dimensional nonparametric estimation for use in nonlinear control. They are faster to train than conventional feedforward networks with sigmoidal activation networks (\"backpropagation nets\"), and provide a model structure better suited for adaptive control. This article gives a brief survey of the use of RBFs and then introduces a new statistical interpretation of radial basis functions and a new method of estimating the parameters, using the EM algorithm. This new statistical interpretation allows us to provide confidence limits on predictions made using the networks. ",
"neighbors": [
611,
2501
],
"mask": "Test"
},
{
"node_id": 2261,
"label": 1,
"text": "Title: Genetic Programming with One-Point Crossover and Point Mutation \nAbstract: Technical Report: CSRP-97-13 April 1997 Abstract In recent theoretical and experimental work on schemata in genetic programming we have proposed a new simpler form of crossover in which the same crossover point is selected in both parent programs. We call this operator one-point crossover because of its similarity with the corresponding operator in genetic algorithms. One point crossover presents very interesting properties from the theory point of view. In this paper we describe this form of crossover as well as a new variant called strict one-point crossover highlighting their useful theoretical and practical features. We also present experimental evidence which shows that one-point crossover compares favourably with standard crossover.",
"neighbors": [
1719,
2087,
2206,
2259
],
"mask": "Test"
},
{
"node_id": 2262,
"label": 2,
"text": "Title: OPTIMAL ASYMPTOTIC IDENTIFICATION UNDER BOUNDED DISTURBANCES \nAbstract: This paper investigates the intrinsic limitation of worst-case identification of LTI systems using data corrupted by bounded disturbances, when the unknown plant is known to belong to a given model set. This is done by analyzing the optimal worst-case asymptotic error achievable by performing experiments using any bounded inputs and estimating the plant using any identification algorithm. First, it is shown that under some topological conditions on the model set, there is an identification algorithm which is asymptotically optimal for any input. Characterization of the optimal asymptotic error as a function of the inputs is also obtained. These results hold for any error metric and disturbance norm. Second, these general results are applied to three specific identification problems: identification of stable systems in the ` 1 norm, identification of stable rational systems in the H 1 norm, and identification of unstable rational systems in the gap metric. For each of these problems, the general characterization of optimal asymptotic error is used to find near-optimal inputs to minimize the error. ",
"neighbors": [
2236,
2435,
2542
],
"mask": "Train"
},
{
"node_id": 2263,
"label": 2,
"text": "Title: A Connectionist Architecture for Learning to Parse \nAbstract: We present a connectionist architecture and demonstrate that it can learn syntactic parsing from a corpus of parsed text. The architecture can represent syntactic constituents, and can learn generalizations over syntactic constituents, thereby addressing the sparse data problems of previous connectionist architectures. We apply these Simple Synchrony Networks to mapping sequences of word tags to parse trees. After training on parsed samples of the Brown Corpus, the networks achieve precision and recall on constituents that approaches that of statistical methods for this task. ",
"neighbors": [
2247,
2701
],
"mask": "Train"
},
{
"node_id": 2264,
"label": 1,
"text": "Title: Evolutionary Computation in Air Traffic Control Planning \nAbstract: Air Traffic Control is involved in the real-time planning of aircraft trajectories. This is a heavily constrained optimization problem. We concentrate on free-route planning, in which aircraft are not required to fly over way points. The choice of a proper representation for this real-world problem is non-trivial. We propose a two level representation: one level on which the evolutionary operators work, and a derived level on which we do calculations. Furthermore we show that a specific choice of the fitness function is important for finding good solutions to large problem instances. We use a hybrid approach in the sense that we use knowledge about air traffic control by using a number of heuristics. We have built a prototype of a planning tool, and this resulted in a flexible tool for generating a free-route planning of low cost, for a number of aircraft. ",
"neighbors": [
2519
],
"mask": "Train"
},
{
"node_id": 2265,
"label": 1,
"text": "Title: AN APPROACH TO A PROBLEM IN NETWORK DESIGN USING GENETIC ALGORITHMS \nAbstract: Air Traffic Control is involved in the real-time planning of aircraft trajectories. This is a heavily constrained optimization problem. We concentrate on free-route planning, in which aircraft are not required to fly over way points. The choice of a proper representation for this real-world problem is non-trivial. We propose a two level representation: one level on which the evolutionary operators work, and a derived level on which we do calculations. Furthermore we show that a specific choice of the fitness function is important for finding good solutions to large problem instances. We use a hybrid approach in the sense that we use knowledge about air traffic control by using a number of heuristics. We have built a prototype of a planning tool, and this resulted in a flexible tool for generating a free-route planning of low cost, for a number of aircraft. ",
"neighbors": [
163,
2347
],
"mask": "Train"
},
{
"node_id": 2266,
"label": 3,
"text": "Title: A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian\nAbstract: We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor. ",
"neighbors": [
74,
345,
2421
],
"mask": "Validation"
},
{
"node_id": 2267,
"label": 1,
"text": "Title: Evolving Optimal Neural Networks Using Genetic Algorithms with Occam's Razor \nAbstract: Genetic algorithms have been used for neural networks in two main ways: to optimize the network architecture and to train the weights of a fixed architecture. While most previous work focuses on only one of these two options, this paper investigates an alternative evolutionary approach called Breeder Genetic Programming (BGP) in which the architecture and the weights are optimized simultaneously. The genotype of each network is represented as a tree whose depth and width are dynamically adapted to the particular application by specifically defined genetic operators. The weights are trained by a next-ascent hillclimbing search. A new fitness function is proposed that quantifies the principle of Occam's razor. It makes an optimal trade-off between the error fitting ability and the parsimony of the network. Simulation results on two benchmark problems of differing complexity suggest that the method finds minimal size networks on clean data. The experiments on noisy data show that using Occam's razor not only improves the generalization performance, it also accel erates the convergence speed of evolution. fl Published in Complex Systems, 7(3): 199-220, 1993",
"neighbors": [
2196
],
"mask": "Test"
},
{
"node_id": 2268,
"label": 2,
"text": "Title: SPERT: A VLIW/SIMD Microprocessor for Artificial Neural Network Computations \nAbstract: SPERT (Synthetic PERceptron Testbed) is a fully programmable single chip microprocessor designed for efficient execution of artificial neural network algorithms. The first implementation will be in a 1.2 m CMOS technology with a 50MHz clock rate, and a prototype system is being designed to occupy a double SBus slot within a Sun Sparcstation. SPERT will sustain over 300 fi 10 6 connections per second during pattern classification, and around 100 fi 10 6 connection updates per second while running the popular error backpropagation training algorithm. This represents a speedup of around two orders of magnitude over a Sparcstation-2 for algorithms of interest. An earlier system produced by our group, the Ring Array Processor (RAP), used commercial DSP chips. Compared with a RAP multiprocessor of similar performance, SPERT represents over an order of magnitude reduction in cost for problems where fixed-point arithmetic is satisfactory. fl International Computer Science Institute, 1947 Center Street, Berkeley, CA 94704",
"neighbors": [
1753,
2275,
2445
],
"mask": "Train"
},
{
"node_id": 2269,
"label": 1,
"text": "Title: Some Steps Towards a Form of Parallel Distributed Genetic Programming \nAbstract: Genetic Programming is a method of program discovery consisting of a special kind of genetic algorithm capable of operating on nonlinear chromosomes (parse trees) representing programs and an interpreter which can run the programs being optimised. This paper describes PDGP (Parallel Distributed Genetic Programming), a new form of genetic programming which is suitable for the development of fine-grained parallel programs. PDGP is based on a graph-like representation for parallel programs which is manipulated by crossover and mutation operators which guarantee the syntactic correctness of the offspring. The paper describes these operators and reports some preliminary results obtained with this paradigm. ",
"neighbors": [
2277
],
"mask": "Train"
},
{
"node_id": 2270,
"label": 2,
"text": "Title: Using generative models for handwritten digit recognition \nAbstract: Genetic Programming is a method of program discovery consisting of a special kind of genetic algorithm capable of operating on nonlinear chromosomes (parse trees) representing programs and an interpreter which can run the programs being optimised. This paper describes PDGP (Parallel Distributed Genetic Programming), a new form of genetic programming which is suitable for the development of fine-grained parallel programs. PDGP is based on a graph-like representation for parallel programs which is manipulated by crossover and mutation operators which guarantee the syntactic correctness of the offspring. The paper describes these operators and reports some preliminary results obtained with this paradigm. ",
"neighbors": [
480,
667,
2043
],
"mask": "Train"
},
{
"node_id": 2271,
"label": 1,
"text": "Title: How Fitness Structure Affects Subsolution Acquisition in Genetic Programming \nAbstract: We define fitness structure in genetic programming to be the mapping between the subprograms of a program and their respective fitness values. This paper shows how various fitness structures of a problem with independent subsolutions relate to the acquisition of sub-solutions. The rate of subsolution acquisition is found to be directly correlated with fitness structure whether that structure is uniform, linear or exponential. An understanding of fitness structure provides partial insight into the complicated relationship between fitness function and the outcome of genetic programming's search.",
"neighbors": [
2238,
2249,
2250,
2252,
2259
],
"mask": "Validation"
},
{
"node_id": 2272,
"label": 2,
"text": "Title: Rapid learning of binding-match and binding-error detector circuits via long-term potentiation \nAbstract: It is argued that the memorization of events and situations (episodic memory) requires the rapid formation of neural circuits responsive to binding errors and binding matches. While the formation of circuits responsive to binding matches can be modeled by associative learning mechanisms, the rapid formation of circuits responsive to binding errors is difficult to explain given their seemingly paradoxical behavior; such a circuit must be formed in response to the occurrence of a binding (i.e., a particular pattern in the input), but subsequent to its formation, it must not fire anymore in response to the occurrence of the very binding (i.e., pattern) that led to its formation. A plausible account of the formation of such circuits has not been offered. A computational model is described that demonstrates how a transient pattern of activity representing an event can lead to the rapid formation of circuits for detecting bindings and binding errors as a result of long-term potentiation within structures whose architecture and circuitry are similar to those of the hippocampal formation, a neural structure known to be critical to episodic memory. The model exhibits a high memory capacity and is robust against limited amounts of diffuse cell loss. The model also offers an alternate interpretation of the functional role of region CA3 in the formation of episodic memories, and predicts the nature of memory impairment that would result from damage to various regions of the hippocampal formation. ",
"neighbors": [
1176,
1866
],
"mask": "Train"
},
{
"node_id": 2273,
"label": 6,
"text": "Title: Learning Harmonic Progression Using Markov Models EECS545 Project \nAbstract: It is argued that the memorization of events and situations (episodic memory) requires the rapid formation of neural circuits responsive to binding errors and binding matches. While the formation of circuits responsive to binding matches can be modeled by associative learning mechanisms, the rapid formation of circuits responsive to binding errors is difficult to explain given their seemingly paradoxical behavior; such a circuit must be formed in response to the occurrence of a binding (i.e., a particular pattern in the input), but subsequent to its formation, it must not fire anymore in response to the occurrence of the very binding (i.e., pattern) that led to its formation. A plausible account of the formation of such circuits has not been offered. A computational model is described that demonstrates how a transient pattern of activity representing an event can lead to the rapid formation of circuits for detecting bindings and binding errors as a result of long-term potentiation within structures whose architecture and circuitry are similar to those of the hippocampal formation, a neural structure known to be critical to episodic memory. The model exhibits a high memory capacity and is robust against limited amounts of diffuse cell loss. The model also offers an alternate interpretation of the functional role of region CA3 in the formation of episodic memories, and predicts the nature of memory impairment that would result from damage to various regions of the hippocampal formation. ",
"neighbors": [
2360
],
"mask": "Train"
},
{
"node_id": 2274,
"label": 1,
"text": "Title: Specialization in Populations of Artificial Neural Networks \nAbstract: Specialization in populations of artificial neural networks is studied. Organisms with both fixed and evolvable fitness formulae are placed in isolated and shared environments, and the emerged behaviors are compared. An evolvable fitness formula specifies, that the evaluation measure is let free to evolve, and we obtain co-evolution of the expressed behavior and the individual evolvable fitness formula. In an isolated environment a generalist behavior emerges when organisms have a fixed fitness formula, and a specialist behavior emerges when organisms have individual evolvable fitness formulae. A population diversification analysis shows, that almost all organisms in a population in an isolated environment converge towards the same behavioral strategy, while we find, that competition can act to provide population diversification in populations of organisms in a shared environment.",
"neighbors": [
163,
2237
],
"mask": "Test"
},
{
"node_id": 2275,
"label": 2,
"text": "Title: Connectionist Layered Object-Oriented Network Simulator (CLONES): User's Manual minimize the learning curve for using CLONES,\nAbstract: CLONES is a object-oriented library for constructing, training and utilizing layered connectionist networks. The CLONES library contains all the object classes needed to write a simulator with a small amount of added source code (examples are included). The size of experimental ANN programs is greatly reduced by using an object-oriented library; at the same time these programs are easier to read, write and evolve. The library includes database, network behavior and training procedures that can be customized by the user. It is designed to run efficiently on data parallel computers (such as the RAP [6] and SPERT [1]) as well as uniprocessor workstations. While efficiency and portability to parallel computers are the primary goals, there are several secondary design goals: 3. allow heterogeneous algorithms and training procedures to be interconnected and trained together. Within these constraints we attempt to maximize the variety of artificial neural net work algorithms that can be supported. ",
"neighbors": [
1120,
2191,
2268,
2445,
2522
],
"mask": "Train"
},
{
"node_id": 2276,
"label": 0,
"text": "Title: Finding Analogues for Innovative Design \nAbstract: Knowledge Systems Laboratory March 1995 Report No. KSL 95-32 ",
"neighbors": [
30,
486,
1864
],
"mask": "Test"
},
{
"node_id": 2277,
"label": 1,
"text": "Title: Discovery of Symbolic, Neuro-Symbolic and Neural Networks with Parallel Distributed Genetic Programming \nAbstract: Technical Report: CSRP-96-14 August 1996 Abstract Genetic Programming is a method of program discovery consisting of a special kind of genetic algorithm capable of operating on parse trees representing programs and an interpreter which can run the programs being optimised. This paper describes Parallel Distributed Genetic Programming (PDGP), a new form of genetic programming which is suitable for the development of parallel programs in which symbolic and neural processing elements can be combined a in free and natural way. PDGP is based on a graph-like representation for parallel programs which is manipulated by crossover and mutation operators which guarantee the syntactic correctness of the offspring. The paper describes these operators and reports some results obtained with the exclusive-or problem. ",
"neighbors": [
1277,
1931,
2252,
2269,
2624
],
"mask": "Train"
},
{
"node_id": 2278,
"label": 2,
"text": "Title: Routing in Optical Multistage Interconnection Networks: a Neural Network Solution \nAbstract: There has been much interest in using optics to implement computer interconnection networks. However, there has been little discussion of any routing methodologies besides those already used in electronics. In this paper, a neural network routing methodology is proposed that can generate control bits for a broad range of optical multistage interconnection networks (OMINs). Though we present no optical implementation of this methodology, we illustrate its control for an optical interconnection network. These OMINs can be used as communication media for distributed computing systems. The routing methodology makes use of an Artificial Neural Network (ANN) that functions as a parallel computer for generating the routes. The neural network routing scheme can be applied to electrical as well as optical interconnection networks. However, since the ANN can be implemented using optics, this routing approach is especially appealing for an optical computing environment. Although the ANN does not always generate the best solution, the parallel nature of the ANN computation may make this routing scheme faster than conventional routing approaches, especially for OMINs that have an irregular structure. Furthermore, the ANN router is fault-tolerant. Results are shown for generating routes in a 16 fi 16, 3-stage OMIN.",
"neighbors": [
2284
],
"mask": "Train"
},
{
"node_id": 2279,
"label": 2,
"text": "Title: Quicknet on MultiSpert: Fast Parallel Neural Network Training \nAbstract: The MultiSpert parallel system is a straight-forward extension of the Spert workstation accelerator, which is predominantly used in speech recognition research at ICSI. In order to deliver high performance for Artificial Neural Network training without requiring changes to the user interfaces, the exisiting Quicknet ANN library was modified to run on MultiSpert. In this report, we present the algorithms used in the parallelization of the Quicknet code and analyse their communication and computation requirements. The resulting performance model yields a better understanding of system speed-ups and potential bottlenecks. Experimental results from actual training runs validate the model and demonstrate the achieved performance levels. ",
"neighbors": [
1806,
2579
],
"mask": "Train"
},
{
"node_id": 2280,
"label": 1,
"text": "Title: A GENETIC ALGORITHM FOR FRAGMENT ALLOCATION IN A DISTRIBUTED DATABASE SYSTEM \nAbstract: In this paper we explore the distributed database allocation problem, which is intractable. We also discuss genetic algorithms and how they have been used successfully to solve combinatorial problems. Our experimental results show the GA to be far superior to the greedy heuristic in obtaining optimal and near optimal fragment placements for the allocation problem with various data sets.",
"neighbors": [
145,
163,
2286
],
"mask": "Test"
},
{
"node_id": 2281,
"label": 2,
"text": "Title: GENE REGULATION AND BIOLOGICAL DEVELOPMENT IN NEURAL NETWORKS: AN EXPLORATORY MODEL \nAbstract: In this paper we explore the distributed database allocation problem, which is intractable. We also discuss genetic algorithms and how they have been used successfully to solve combinatorial problems. Our experimental results show the GA to be far superior to the greedy heuristic in obtaining optimal and near optimal fragment placements for the allocation problem with various data sets.",
"neighbors": [
1134,
2429
],
"mask": "Train"
},
{
"node_id": 2282,
"label": 5,
"text": "Title: The ILP description learning problem: Towards a general model-level definition of data mining in ILP \nAbstract: stefan.wrobel@gmd.de, saso.dzeroski@gmd.de Proc. FGML-95, Annual Workshop of the GI Special Interest Group Machine Learning (GI FG 1.1.3), ed. K. Morik and J. Herrmann, Research Report 580, Univ.Dortmund, 1995. Abstract The task of discovering interesting regularities in (large) sets of data (data mining, knowledge discovery) has recently met with increased interest in Machine Learning in general and in Inductive Logic Programming (ILP) in particular. However, while there is a widely accepted definition for the task of concept learning from examples in ILP, definitions for the data mining task have been proposed only recently. In this paper, we examine these so-called \"non-monotonic semantics\" definitions and show that non-monotonicity is only an incidental property of the data mining learning task, and that this task makes perfect sense without such an assumption. We therefore introduce and define a generalized definition of the data mining task called the ILP description learning problem and discuss its properties and relation to the traditional concept learning (prediction) learning problem. Since our characterization is entirely on the level of models, the definition applies independently of the chosen hypothesis language.",
"neighbors": [
1686,
2217,
2426
],
"mask": "Test"
},
{
"node_id": 2283,
"label": 2,
"text": "Title: Predictive Control of Opto-Electronic Reconfigurable Interconnection Networks Using Neural Networks \nAbstract: Opto-electronic reconfigurable interconnection networks are limited by significant control latency when used in large multiprocessor systems. This latency is the time required to analyze the current traffic and reconfigure the network to establish the required paths. The goal of latency hiding is to minimize the effect of this control overhead. In this paper, we introduce a technique that performs latency hiding by learning the patterns of communication traffic and using that information to anticipate the need for communication paths. Hence, the network provides the required communication paths before a request for a path is made. In this study, the communication patterns (memory accesses) of a parallel program are used as input to a time delay neural network (TDNN) to perform online training and prediction. These predicted communication patterns are used by the interconnection network controller that provides routes for the memory requests. Based on our experiments, the neural network was able to learn highly repetitive communication patterns, and was thus able to predict the allocation of communication paths, resulting in a reduction of communication latency. Communication latency is a significant issue in the design of lar ge scale multiprocessor systems. Point-to-point interconnection networks, which directly connect all processors and/or memories, provide minimum communication latency but suffer from high cost and limited scalability. A plethora of electr onic singlestage and multistage networks have been pr oposed, designed and built [Siegel90, Leighton93]. An alternative is the use of opto-electr onic reconfigurable interconnection networks which offer a limited number of high bandwidth communication channels configured on demand, to satisfy the r equired communication traffic [CLMQ94b]. A network controller determines the network configuration based on processor requests. Once the controller provides the optical communication paths requested, the communication proceeds at high speeds. Hence, the end-to-end latency incurred by such networks can be characterized by three components: control time, which is the time needed to determine the new network configuration and to physically establish the paths; launch time, the time to transmit the data into the network; and y time, the time needed for the message to travel through the network to its final destination. For high bandwidth short distance networks, the control time dominates the overall ",
"neighbors": [
2284
],
"mask": "Validation"
},
{
"node_id": 2284,
"label": 2,
"text": "Title: Performance of On-Line Learning Methods in Predicting Multiprocessor Memory Access Patterns \nAbstract: Technical Report UMIACS-TR-96-59 and CS-TR-3676 Institute for Advanced Computer Studies University of Maryland College Park, MD 20742 Abstract Shared memory multiprocessors require reconfigurable interconnection networks (INs) for scalability. These INs are reconfigured by an IN control unit. However, these INs are often plagued by undesirable reconfiguration time that is primarily due to control latency, the amount of time delay that the control unit takes to decide on a desired new IN configuration. To reduce control latency, a trainable prediction unit (PU) was devised and added to the IN controller. The PUs job is to anticipate and reduce control configuration time, the major component of the control latency. Three different on-line prediction techniques were tested to learn and predict repetitive memory access patterns for three typical parallel processing applications, the 2-D relaxation algorithm, matrix multiply and Fast Fourier Transform. The predictions were then used by a routing control algorithm to reduce control latency by configuring the IN to provide needed memory access paths before they were requested. Three prediction techniques were used and tested: 1). a Markov predictor, 2). a linear predictor and 3). a time delay neural network (TDNN) predictor. As expected, different predictors performed best on different applications, however, the TDNN produced the best overall results. ",
"neighbors": [
74,
1293,
2278,
2283
],
"mask": "Validation"
},
{
"node_id": 2285,
"label": 3,
"text": "Title: Simulation Based Bayesian Nonparametric Regression Methods \nAbstract: Technical Report UMIACS-TR-96-59 and CS-TR-3676 Institute for Advanced Computer Studies University of Maryland College Park, MD 20742 Abstract Shared memory multiprocessors require reconfigurable interconnection networks (INs) for scalability. These INs are reconfigured by an IN control unit. However, these INs are often plagued by undesirable reconfiguration time that is primarily due to control latency, the amount of time delay that the control unit takes to decide on a desired new IN configuration. To reduce control latency, a trainable prediction unit (PU) was devised and added to the IN controller. The PUs job is to anticipate and reduce control configuration time, the major component of the control latency. Three different on-line prediction techniques were tested to learn and predict repetitive memory access patterns for three typical parallel processing applications, the 2-D relaxation algorithm, matrix multiply and Fast Fourier Transform. The predictions were then used by a routing control algorithm to reduce control latency by configuring the IN to provide needed memory access paths before they were requested. Three prediction techniques were used and tested: 1). a Markov predictor, 2). a linear predictor and 3). a time delay neural network (TDNN) predictor. As expected, different predictors performed best on different applications, however, the TDNN produced the best overall results. ",
"neighbors": [
2138,
2311
],
"mask": "Train"
},
{
"node_id": 2286,
"label": 1,
"text": "Title: A Genetic Algorithm for File and Task Placement in a Distributed System \nAbstract: In this paper we explore the distributed file and task placement problem, which is intractable. We also discuss genetic algorithms and how they have been used successfully to solve combinatorial problems. Our experimental results show the GA to be far superior to the greedy heuristic in obtaining optimal and near optimal file and task placements for the problem with various data sets. ",
"neighbors": [
145,
163,
2280
],
"mask": "Train"
},
{
"node_id": 2287,
"label": 3,
"text": "Title: Consistency of Posterior Distributions for Neural Networks \nAbstract: In this paper we show that the posterior distribution for feedforward neural networks is asymptotically consistent. This paper extends earlier results on universal approximation properties of neural networks to the Bayesian setting. The proof of consistency embeds the problem in a density estimation problem, then uses bounds on the bracketing entropy to show that the posterior is consistent over Hellinger neighborhoods. It then relates this result back to the regression setting. We show consistency in both the setting of the number of hidden nodes growing with the sample size, and in the case where the number of hidden nodes is treated as a parameter. Thus we provide a theoretical justification for using neural networks for nonparametric regression in a Bayesian framework. ",
"neighbors": [
560,
2315
],
"mask": "Validation"
},
{
"node_id": 2288,
"label": 3,
"text": "Title: Anytime Influence Diagrams \nAbstract: In this paper we show that the posterior distribution for feedforward neural networks is asymptotically consistent. This paper extends earlier results on universal approximation properties of neural networks to the Bayesian setting. The proof of consistency embeds the problem in a density estimation problem, then uses bounds on the bracketing entropy to show that the posterior is consistent over Hellinger neighborhoods. It then relates this result back to the regression setting. We show consistency in both the setting of the number of hidden nodes growing with the sample size, and in the case where the number of hidden nodes is treated as a parameter. Thus we provide a theoretical justification for using neural networks for nonparametric regression in a Bayesian framework. ",
"neighbors": [
1759,
2697
],
"mask": "Validation"
},
{
"node_id": 2289,
"label": 0,
"text": "Title: An Interactive Planning Architecture The Forest Fire Fighting case \nAbstract: This paper describes an interactive planning system that was developed inside an Intelligent Decision Support System aimed at supporting an operator when planning the initial attack to forest fires. The planning architecture rests on the integration of case-based reasoning techniques with constraint reasoning techniques exploited, mainly, for performing temporal reasoning on temporal metric information. Temporal reasoning plays a central role in supporting interactive functions that are provided to the user when performing two basic steps of the planning process: plan adaptation and resource scheduling. A first prototype was integrated with a situation assessment and a resource allocation manager subsystem and is currently being tested.",
"neighbors": [
1804,
1805
],
"mask": "Train"
},
{
"node_id": 2290,
"label": 5,
"text": "Title: A Comparison of Pruning Methods for Relational Concept Learning \nAbstract: Pre-Pruning and Post-Pruning are two standard methods of dealing with noise in concept learning. Pre-Pruning methods are very efficient, while Post-Pruning methods typically are more accurate, but much slower, because they have to generate an overly specific concept description first. We have experimented with a variety of pruning methods, including two new methods that try to combine and integrate pre- and post-pruning in order to achieve both accuracy and efficiency. This is verified with test series in a chess position classification task.",
"neighbors": [
344,
378,
585,
1275,
2213,
2291
],
"mask": "Train"
},
{
"node_id": 2291,
"label": 5,
"text": "Title: Top-Down Pruning in Relational Learning \nAbstract: Pruning is an effective method for dealing with noise in Machine Learning. Recently pruning algorithms, in particular Reduced Error Pruning, have also attracted interest in the field of Inductive Logic Programming. However, it has been shown that these methods can be very inefficient, because most of the time is wasted for generating clauses that explain noisy examples and subsequently pruning these clauses. We introduce a new method which searches for good theories in a top-down fashion to get a better starting point for the pruning algorithm. Experiments show that this approach can significantly lower the complexity of the task without losing predictive accuracy. ",
"neighbors": [
344,
378,
585,
1275,
2290
],
"mask": "Train"
},
{
"node_id": 2292,
"label": 3,
"text": "Title: Logarithmic-Time Updates and Queries in Probabilistic Networks \nAbstract: Traditional databases commonly support efficient query and update procedures that operate in time which is sublinear in the size of the database. Our goal in this paper is to take a first step toward dynamic reasoning in probabilistic databases with comparable efficiency. We propose a dynamic data structure that supports efficient algorithms for updating and querying singly connected Bayesian networks. In the conventional algorithm, new evidence is absorbed in time O(1) and queries are processed in time O(N ), where N is the size of the network. We propose an algorithm which, after a preprocessing phase, allows us to answer queries in time O(log N ) at the expense of O(log N ) time per evidence absorption. The usefulness of sub-linear processing time manifests itself in applications requiring (near) real-time response over large probabilistic databases. We briefly discuss a potential application of dynamic probabilistic reasoning in computational biology.",
"neighbors": [
1111,
1899,
2140
],
"mask": "Test"
},
{
"node_id": 2293,
"label": 3,
"text": "Title: Localized Partial Evaluation of Belief Networks \nAbstract: in the network. Often, however, an application will not need information about every node in the network nor will it need exact probabilities. We present the localized partial evaluation (LPE) propagation algorithm, which computes interval bounds on the marginal probability of a specified query node by examining a subset of the nodes in the entire network. Conceptually, LPE ignores parts of the network that are \"too far away\" from the queried node to have much impact on its value. LPE has the \"anytime\" property of being able to produce better solutions (tighter intervals) given more time to consider more of the network.",
"neighbors": [
1937
],
"mask": "Validation"
},
{
"node_id": 2294,
"label": 0,
"text": "Title: Cooperative Bayesian and Case-Based Reasoning for Solving Multiagent Planning Tasks \nAbstract: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner.",
"neighbors": [
66,
649,
1140,
2380,
2529
],
"mask": "Train"
},
{
"node_id": 2295,
"label": 1,
"text": "Title: Diplomarbeit A Genetic Algorithm for the Topological Optimization of Neural Networks \nAbstract: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner.",
"neighbors": [
163,
427,
881,
2667
],
"mask": "Train"
},
{
"node_id": 2296,
"label": 1,
"text": "Title: TECHNIQUES FOR REDUCING THE DISRUPTION OF SUPERIOR BUILDING BLOCKS IN GENETIC ALGORITHMS \nAbstract: We describe an integrated problem solving architecture named INBANCA in which Bayesian networks and case-based reasoning (CBR) work cooperatively on multiagent planning tasks. This includes two-team dynamic tasks, and this paper concentrates on simulated soccer as an example. Bayesian networks are used to characterize action selection whereas a case-based approach is used to determine how to implement actions. This paper has two contributions. First, we survey integrations of case-based and Bayesian approaches from the perspective of a popular CBR task decomposition framework, thus explaining what types of integrations have been attempted. This allows us to explain the unique aspects of our proposed integration. Second, we demonstrate how Bayesian nets can be used to provide environmental context, and thus feature selection information, for the case-based reasoner.",
"neighbors": [
145,
163,
2248,
2251
],
"mask": "Test"
},
{
"node_id": 2297,
"label": 6,
"text": "Title: Efficient Construction of Networks for Learned Representations with General to Specific Relationships \nAbstract: Machine learning systems often represent concepts or rules as sets of attribute-value pairs. Many learning algorithms generalize or specialize these concept representations by removing or adding pairs. Thus concepts are created that have general to specific relationships. This paper presents algorithms to connect concepts into a network based on their general to specific relationships. Since any concept can access related concepts quickly, the resulting structure allows increased efficiency in learning and reasoning. The time complexity of one set of learning models improves from O(n log n) to O(log n) (where n is the number of nodes) when using the general to specific structure. ",
"neighbors": [
2304
],
"mask": "Validation"
},
{
"node_id": 2298,
"label": 1,
"text": "Title: Convergence Analysis of Canonical Genetic Algorithms \nAbstract: This paper analyzes the convergence properties of the canonical genetic algorithm (CGA) with mutation, crossover and proportional reproduction applied to static optimization problems. It is proved by means of homogeneous finite Markov chain analysis that a CGA will never converge to the global optimum regardless of the initialization, crossover operator and objective function. But variants of CGAs that always maintain the best solution in the population, either before or after selection, are shown to converge to the global optimum due to the irreducibility property of the underlying original nonconvergent CGA. These results are discussed with respect to the schema theorem.",
"neighbors": [
163,
2518
],
"mask": "Train"
},
{
"node_id": 2299,
"label": 0,
"text": "Title: Case Retrieval Nets Applied to Large Case Bases \nAbstract: This article presents some experimental results obtained from applying the Case Retrieval Net approach to large case bases. The obtained results suggest that CRNs can successfully handle case bases larger than considered in other reports.",
"neighbors": [
75,
1854,
1855
],
"mask": "Train"
},
{
"node_id": 2300,
"label": 5,
"text": "Title: Applying a Machine Learning Workbench: Experience with Agricultural Databases \nAbstract: This paper reviews our experience with the application of machine learning techniques to agricultural databases. We have designed and implemented a machine learning workbench, WEKA, which permits rapid experimentation on a given dataset using a variety of machine learning schemes, and has several facilities for interactive investigation of the data: preprocessing attributes, evaluating and comparing the results of different schemes, and designing comparative experiments to be run offline. We discuss the partnership between agricultural scientist and machine learning researcher that our experience has shown to be vital to success. We review in some detail a particular agricultural application concerned with the culling of dairy herds. ",
"neighbors": [
479,
1337,
2636
],
"mask": "Validation"
},
{
"node_id": 2301,
"label": 3,
"text": "Title: editors. Representing Preferences as Ceteris Paribus Comparatives \nAbstract: Decision-theoretic preferences specify the relative desirability of all possible outcomes of alternative plans. In order to express general patterns of preference holding in a domain, we require a language that can refer directly to preferences over classes of outcomes as well as individuals. We present the basic concepts of a theory of meaning for such generic compar-atives to facilitate their incremental capture and exploitation in automated reasoning systems. Our semantics lifts comparisons of individuals to comparisons of classes \"other things being equal\" by means of contextual equivalences, equivalence relations among individuals that vary with the context of application. We discuss implications of the theory for represent ing preference information.",
"neighbors": [
1901,
1907,
2588
],
"mask": "Validation"
},
{
"node_id": 2302,
"label": 1,
"text": "Title: Genes, Phenes and the Baldwin Effect: Learning and Evolution in a Simulated Population \nAbstract: The Baldwin Effect, first proposed in the late nineteenth century, suggests that the course of evolutionary change can be influenced by individually learned behavior. The existence of this effect is still a hotly debated topic. In this paper clear evidence is presented that learning-based plasticity at the phenotypic level can and does produce directed changes at the genotypic level. This research confirms earlier experimental work done by others, notably Hinton & Nowlan (1987). Further, the amount of plasticity of the learned behavior is shown to be crucial to the size of the Baldwin Effect: either too little or too much and the effect disappears or is significantly reduced. Finally, for learnable traits, the case is made that over many generations it will become easier for the population as a whole to learn these traits (i.e. the phenotypic plasticity of these traits will increase). In this gradual transition from a genetically driven population to one driven by learning, the importance of the Baldwin Effect decreases. ",
"neighbors": [
403,
1353,
1880,
2104,
2111
],
"mask": "Train"
},
{
"node_id": 2303,
"label": 0,
"text": "Title: Case-based reactive navigation: A case-based method for on-line selection and adaptation of reactive control parameters\nAbstract: This article presents a new line of research investigating on-line learning mechanisms for autonomous intelligent agents. We discuss a case-based method for dynamic selection and modification of behavior assemblages for a navigational system. The case-based reasoning module is designed as an addition to a traditional reactive control system, and provides more flexible performance in novel environments without extensive high-level reasoning that would otherwise slow the system down. The method is implemented in the ACBARR (A Case-BAsed Reactive Robotic) system, and evaluated through empirical simulation of the system on several different environments, including \"box canyon\" environments known to be problematic for reactive control systems in general. fl Technical Report GIT-CC-92/57, College of Computing, Georgia Institute of Technology, Atlanta, Geor gia, 1992. ",
"neighbors": [
858,
1951,
2035
],
"mask": "Train"
},
{
"node_id": 2304,
"label": 2,
"text": "Title: GENERALIZATION BY CONTROLLED EXPANSION OF EXAMPLES \nAbstract: SG (Specific to General) is a learning system that derives general rules from specific examples. SG learns incrementally with good speed and generalization. The SG network is built of many simple nodes that adapt to the problem being learned. Learning is done without requiring user adjustment of sensitive parameters and noise is tolerated with graceful degradation in performance. Nodes learn important features in the input space and then monitor the ability of the features to predict output values. Learning is O(n log n) for each example, where n is the number of nodes in the network, and the number of inputs and output values are treated as constants. An enhanced network topology reduces time complexity to O(log n). Empirical results show that the model gives good generalization and that learning converges in a small number of training passes. ",
"neighbors": [
908,
2297
],
"mask": "Test"
},
{
"node_id": 2305,
"label": 4,
"text": "Title: Analytical Mean Squared Error Curves for Temporal Difference Learning \nAbstract: We provide analytical expressions governing changes to the bias and variance of the lookup table estimators provided by various Monte Carlo and temporal difference value estimation algorithms with o*ine updates over trials in absorbing Markov reward processes. We have used these expressions to develop software that serves as an analysis tool: given a complete description of a Markov reward process, it rapidly yields an exact mean-square-error curve, the curve one would get from averaging together sample mean-square-error curves from an infinite number of learning trials on the given problem. We use our analysis tool to illustrate classes of mean-square-error curve behavior in a variety of example reward processes, and we show that although the various temporal difference algorithms are quite sensitive to the choice of step-size and eligibility-trace parameters, there are values of these parameters that make them similarly competent, and generally good. ",
"neighbors": [
321,
2150,
2183
],
"mask": "Train"
},
{
"node_id": 2306,
"label": 2,
"text": "Title: On the Applicability of Neural Network and Machine Learning Methodologies to Natural Language Processing \nAbstract: We examine the inductive inference of a complex grammar specifically, we consider the task of training a model to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government-and-Binding theory. We investigate the following models: feed-forward neural networks, Fransconi-Gori-Soda and Back-Tsoi locally recurrent networks, Elman, Narendra & Parthasarathy, and Williams & Zipser recurrent networks, Euclidean and edit-distance nearest-neighbors, simulated annealing, and decision trees. The feed-forward neural networks and non-neural network machine learning models are included primarily for comparison. We address the question: How can a neural network, with its distributed nature and gradient descent based iterative calculations, possess linguistic capability which is traditionally handled with symbolic computation and recursive processes? Initial simulations with all models were only partially successful by using a large temporal window as input. Models trained in this fashion did not learn the grammar to a significant degree. Attempts at training recurrent networks with small temporal input windows failed until we implemented several techniques aimed at improving the convergence of the gradient descent training algorithms. We discuss the theory and present an empirical study of a variety of models and learning algorithms which highlights behaviour not present when attempting to learn a simpler grammar. ",
"neighbors": [
427,
2049,
2594
],
"mask": "Validation"
},
{
"node_id": 2307,
"label": 2,
"text": "Title: Parallel Gradient Distribution in Unconstrained Optimization \nAbstract: A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of k parallel processors to use simultaneously a different algorithm, such as a descent, Newton, quasi-Newton or a conjugate gradient algorithm. Each processor can perform one or many steps of a serial algorithm on a portion of the gradient of the objective function assigned to it, independently of the other processors. Eventually a synchronization step is performed which, for differentiable convex functions, consists of taking a strong convex combination of the k points found by the k processors. For nonconvex, as well as convex, differentiable functions, the best point found by the k processors is taken, or any better point. The fundamental result that we establish is that any accumulation point of the parallel algorithm is stationary for the nonconvex case, and is a global solution for the convex case. Computational testing on the Thinking Machines CM-5 multiprocessor indicate a speedup of the order of the number of processors employed. ",
"neighbors": [
406,
1772,
1939
],
"mask": "Train"
},
{
"node_id": 2308,
"label": 0,
"text": "Title: Problem Formulation, Program Synthesis and Program Transformation Techniques for Simulation, Optimization and Constraint Satisfaction (Research Statement) \nAbstract: A parallel version is proposed for a fundamental theorem of serial unconstrained optimization. The parallel theorem allows each of k parallel processors to use simultaneously a different algorithm, such as a descent, Newton, quasi-Newton or a conjugate gradient algorithm. Each processor can perform one or many steps of a serial algorithm on a portion of the gradient of the objective function assigned to it, independently of the other processors. Eventually a synchronization step is performed which, for differentiable convex functions, consists of taking a strong convex combination of the k points found by the k processors. For nonconvex, as well as convex, differentiable functions, the best point found by the k processors is taken, or any better point. The fundamental result that we establish is that any accumulation point of the parallel algorithm is stationary for the nonconvex case, and is a global solution for the convex case. Computational testing on the Thinking Machines CM-5 multiprocessor indicate a speedup of the order of the number of processors employed. ",
"neighbors": [
240,
2652
],
"mask": "Train"
},
{
"node_id": 2309,
"label": 4,
"text": "Title: EVOLVING SENSORS IN ENVIRONMENTS OF CONTROLLED COMPLEXITY \nAbstract: 1 . Sensors represent a crucial link between the evolutionary forces shaping a species' relationship with its environment, and the individual's cognitive abilities to behave and learn. We report on experiments using a new class of \"latent energy environments\" (LEE) models to define environments of carefully controlled complexity which allow us to state bounds for random and optimal behaviors that are independent of strategies for achieving the behaviors. Using LEE's analytic basis for defining environments, we then use neural networks (NNets) to model individuals and a steady - state genetic algorithm to model an evolutionary process shaping the NNets, in particular their sensors. Our experiments consider two types of \"contact\" and \"ambient\" sensors, and variants where the NNets are not allowed to learn, learn via error correction from internal prediction, and via reinforcement learning. We find that predictive learning, even when using a larger repertoire of the more sophisticated ambient sensors, provides no advantage over NNets unable to learn. However, reinforcement learning using a small number of crude contact sensors does provide a significant advantage. Our analysis of these results points to a tradeoff between the genetic \"robustness\" of sensors and their informativeness to a learning system. ",
"neighbors": [
403,
538,
681,
1325,
1969
],
"mask": "Validation"
},
{
"node_id": 2310,
"label": 0,
"text": "Title: Machine Learning: An Annotated Bibliography for the 1995 AI Statistics Tutorial on Machine Learning (Version 1) \nAbstract: This is a brief annotated bibliography that I wanted to make available to the attendees of my Machine Learning tutorial at the 1995 AI & Statistics Workshop. These slides are available in my WWW pages under slides. Please contact me if you have any questions. Please also note the date (listed above) on which this was most recently updated. While I plan to make occasional updates to this file, it is bound to be outdated quickly. Also, I apologize for the lack of figures, but my time on this project is limited and the slides should compensate. Finally, this bibliography is, by definition, This book is now out of date. Both Pat Langley and Tom Mitchell are in the process of writing textbooks on this subject, but we're still waiting for them. Until then, I suggest looking at both the Readings and the recent ML conference proceedings (both International and European). There are also a few introductory papers on this subject, though I haven't gotten around to putting them in here yet. However, Pat Langley and Dennis Kibler (1988) have written a good paper on ML as an empirical science, and Pat has written several editorials of use to the ML author (Langley 1986; 1987; 1990). incomplete, and I've left out many other references that may be of some use.",
"neighbors": [
66,
318,
2583,
2607
],
"mask": "Train"
},
{
"node_id": 2311,
"label": 3,
"text": "Title: Bayesian MARS \nAbstract: A Bayesian approach to multivariate adaptive regression spline (MARS) fitting (Friedman, 1991) is proposed. This takes the form of a probability distribution over the space of possible MARS models which is explored using reversible jump Markov chain Monte Carlo methods (Green, 1995). The generated sample of MARS models produced is shown to have good predictive power when averaged and allows easy interpretation of the relative importance of predictors to the overall fit. ",
"neighbors": [
161,
2285,
2448
],
"mask": "Train"
},
{
"node_id": 2312,
"label": 5,
"text": "Title: Theory-Guided Induction of Logic Programs by Inference of Regular Languages recursive clauses. merlin on the\nAbstract: resent allowed sequences of resolution steps for the initial theory. There are, however, many characterizations of allowed sequences of resolution steps that cannot be expressed by a set of resolvents. One approach to this problem is presented, the system mer-lin, which is based on an earlier technique for learning finite-state automata that represent allowed sequences of resolution steps. merlin extends the previous technique in three ways: i) negative examples are considered in addition to positive examples, ii) a new strategy for performing generalization is used, and iii) a technique for converting the learned automaton to a logic program is included. Results from experiments are presented in which merlin outperforms both a system using the old strategy for performing generalization, and a traditional covering technique. The latter result can be explained by the limited expressiveness of hypotheses produced by covering and also by the fact that covering needs to produce the correct base clauses for a recursive definition before ",
"neighbors": [
521,
1082,
1259,
2587
],
"mask": "Train"
},
{
"node_id": 2313,
"label": 3,
"text": "Title: PERFECT SIMULATION OF CONDITIONALLY SPECIFIED MODELS \nAbstract: We discuss how the ideas of producing perfect simulations based on coupling from the past for finite state space models naturally extend to mul-tivariate distributions with infinite or uncountable state spaces such as auto-gamma, auto-Poisson and auto-negative-binomial models, using Gibbs sampling in combination with sandwiching methods originally introduced for perfect simulation of point processes. ",
"neighbors": [
2208
],
"mask": "Train"
},
{
"node_id": 2314,
"label": 2,
"text": "Title: GENERAL CLASSES OF CONTROL-LYAPUNOV FUNCTIONS \nAbstract: The main result of this paper establishes the equivalence between null asymptotic controllability of nonlinear finite-dimensional control systems and the existence of continuous control-Lyapunov functions (clf's) defined by means of generalized derivatives. In this manner, one obtains a complete characterization of asymptotic controllability, applying in principle to a far wider class of systems than Artstein's Theorem (which relates closed-loop feedback stabilization to the existence of smooth clf's). The proof relies on viability theory and optimal control techniques. 1. Introduction. In this paper, we study systems of the general form ",
"neighbors": [
2187
],
"mask": "Train"
},
{
"node_id": 2315,
"label": 6,
"text": "Title: Metric Entropy and Minimax Risk in Classification \nAbstract: We apply recent results on the minimax risk in density estimation to the related problem of pattern classification. The notion of loss we seek to minimize is an information theoretic measure of how well we can predict the classification of future examples, given the classification of previously seen examples. We give an asymptotic characterization of the minimax risk in terms of the metric entropy properties of the class of distributions that might be generating the examples. We then use these results to characterize the minimax risk in the special case of noisy two-valued classification problems in terms of the Assouad density and the ",
"neighbors": [
109,
2287
],
"mask": "Train"
},
{
"node_id": 2316,
"label": 1,
"text": "Title: Guided Crossover: A New Operator for Genetic Algorithm Based Optimization \nAbstract: Genetic algorithms (GAs) have been extensively used in different domains as a means of doing global optimization in a simple yet reliable manner. They have a much better chance of getting to global optima than gradient based methods which usually converge to local sub optima. However, GAs have a tendency of getting only moderately close to the optima in a small number of iterations. To get very close to the optima, the GA needs a very large number of iterations. Whereas gradient based optimizers usually get very close to local optima in a relatively small number of iterations. In this paper we describe a new crossover operator which is designed to endow the GA with gradient-like abilities without actually computing any gradients and without sacrificing global optimality. The operator works by using guidance from all members of the GA population to select a direction for exploration. Empirical results in two engineering design domains and across both binary and floating point representations demonstrate that the operator can significantly improve the steady state error of the GA optimizer. ",
"neighbors": [
163,
743,
744,
2030,
2659
],
"mask": "Validation"
},
{
"node_id": 2317,
"label": 1,
"text": "Title: Cellular Encoding Applied to Neurocontrol \nAbstract: Neural networks are trained for balancing 1 and 2 poles attached to a cart on a fixed track. For one variant of the single pole system, only pole angle and cart position variables are supplied as inputs; the network must learn to compute velocities. All of the problems are solved using a fixed architecture and using a new version of cellular encoding that evolves an application specific architecture with real-valued weights. The learning times and generalization capabilities are compared for neural networks developed using both methods. After a post processing simplification, topologies produced by cellular encoding were very simple and could be analyzed. Architectures with no hidden units were produced for the single pole and the two pole problem when velocity information is supplied as an input. Moreover, these linear solutions display good generalization. For all the control problems, cellular encoding can automatically generate architectures whose complexity and structure reflect the features of the problem to solve.",
"neighbors": [
1353,
2429,
2624
],
"mask": "Validation"
},
{
"node_id": 2318,
"label": 3,
"text": "Title: EXACT TRANSITION PROBABILITIES FOR THE INDEPENDENCE METROPOLIS SAMPLER \nAbstract: A recent result of Jun Liu's has shown how to compute explicitly the eigen-values and eigenvectors for the Markov chain derived from a special case of the Hastings sampling algorithm, known as the indepdendence Metropolis sampler. In this note, we show how to extend the result to obtain exact n-step transition probabilities for any n. This is done first for a chain on a finite state space, and then extended to a general (discrete or continuous) state space. The paper concludes with some implications for diagnostic tests of convergence of Markov chain samplers. ",
"neighbors": [
491,
1977,
2153
],
"mask": "Train"
},
{
"node_id": 2319,
"label": 0,
"text": "Title: Learning When Reformulation is Appropriate for Iterative Design \nAbstract: It is well known that search-space reformulation can improve the speed and reliability of numerical optimization in engineering design. We argue that the best choice of reformulation depends on the design goal, and present a technique for automatically constructing rules that map the design goal into a reformulation chosen from a space of possible reformulations. We tested our technique in the domain of racing-yacht-hull design, where each reformulation corresponds to incorporating constraints into the search space. We applied a standard inductive-learning algorithm, C4.5, to a set of training data describing which constraints are active in the optimal design for each goal encountered in a previous design session. We then used these rules to choose an appropriate reformulation for each of a set of test cases. Our experimental results show that using these reformulations improves both the speed and the reliability of design optimization, outperforming competing methods and approaching the best performance possible. ",
"neighbors": [
227,
2131,
2479
],
"mask": "Train"
},
{
"node_id": 2320,
"label": 6,
"text": "Title: Inserting the best known bounds for weighted bipar tite matching [11], with 1=2 p polynomial-time\nAbstract: we apply the reduction to their two core children, the total sum of their matching weights becomes O(n), and if for each comparison of a spine node and a critical node we apply the reduction to the core child of the spine node, the total sum of their matching weights becomes O(n). With regards to the O( 2 ) comparisons of two critical nodes, their sum cannot exceed O( 2 n) in total weight. Thus, since we have a total of O(n) edges involved in the matchings, in time O(n), we can reduce the total sum of the matching weights to O( 2 n). Theorem 6.6 Let M : IR 1 ! IR be a monotone function bounding the time complexity UWBM. Moreover, let M satisfy that M (x) = x 1+\" f(x), where \" 0 is a constant, f (x) = O(x o(1) ), f is monotone, and for some constants b 1 ; b 2 , 8x; y b 1 : f(xy) b 2 f (x)f (y). Then, with = \" p in time O(n 1+o(1) + M (n)). Proof: We spend O(n polylog n + time(UWBM(k 2 n))) on the matchings. So, by Theorem 5.10, we have that Comp-Core-Trees can be computed in time O(n polylog n + time(UWBM(k 2 n))). Applying Theo 4 = 16, we get Corollary 6.7 MAST is computable in time O(n 1:5 log n). [4] W.H.E. Day. Computational complexity of inferring phylogenies from dissimilarity matrices. Bulletin of Mathematical Biology, 49(4):461-467, 1987. [5] M. Farach, S. Kannan, and T. Warnow. A robust model for finding optimal evolutionary trees. Al-gorithmica, 1994. In press. See also STOC '93. ",
"neighbors": [
1853,
2511
],
"mask": "Train"
},
{
"node_id": 2321,
"label": 2,
"text": "Title: Asymptotic Controllability Implies Feedback Stabilization \nAbstract: | ",
"neighbors": [
2186,
2187,
2370
],
"mask": "Train"
},
{
"node_id": 2322,
"label": 2,
"text": "Title: PHONETIC CLASSIFICATION OF TIMIT SEGMENTS PREPROCESSED WITH LYON'S COCHLEAR MODEL USING A SUPERVISED/UNSUPERVISED HYBRID NEURAL NETWORK \nAbstract: We report results on vowel and stop consonant recognition with tokens extracted from the TIMIT database. Our current system differs from others doing similar tasks in that we do not use any specific time normalization techniques. We use a very detailed biologically motivated input representation of the speech tokens - Lyon's cochlear model as implemented by Slaney [20]. This detailed, high dimensional representation, known as a cochleagram, is classified by either a back-propagation or by a hybrid supervised/unsupervised neural network classifier. The hybrid network is composed of a biologically motivated unsupervised network and a supervised back-propagation network. This approach produces results comparable to those obtained by others without the addition of time normalization. ",
"neighbors": [
359,
2498,
2499,
2500
],
"mask": "Test"
},
{
"node_id": 2323,
"label": 3,
"text": "Title: PHONETIC CLASSIFICATION OF TIMIT SEGMENTS PREPROCESSED WITH LYON'S COCHLEAR MODEL USING A SUPERVISED/UNSUPERVISED HYBRID NEURAL NETWORK \nAbstract: MOU 130: Feasibility study of fully autonomous vehicles using decision-theoretic control Final Report ",
"neighbors": [
492,
788,
1268,
2419
],
"mask": "Train"
},
{
"node_id": 2324,
"label": 6,
"text": "Title: APPLICATION OF ESOP MINIMIZATION IN MACHINE LEARNING AND KNOWLEDGE DISCOVERY \nAbstract: This paper presents a new application of an Exclusive-Sum-Of-Products (ESOP) minimizer EXORCISM-MV-2: to Machine Learning, and particularly, in Pattern Theory. An analysis of various logic synthesis programs has been conducted at Wright Laboratory for machine learning applications. Creating a robust and efficient Boolean minimizer for machine learning that would minimize a decomposed function cardinality (DFC) measure of functions would help to solve practical problems in application areas that are of interest to the Pattern Theory Group especially those problems that require strongly unspecified multiple-valued-input functions with a large number of variables. For many functions, the complexity minimization of EXORCISM-MV-2 is better than that of Espresso. For small functions, they are worse than those of the Curtis-like Decomposer. However, EXORCISM is much faster, can run on problems with more variables, and significant DFC improvements have also been found. We analyze the cases when EXORCISM is worse than Espresso and propose new improvements for strongly unspecified functions. ",
"neighbors": [
1161,
2326
],
"mask": "Validation"
},
{
"node_id": 2325,
"label": 2,
"text": "Title: Incremental Polynomial Model-Controller Network: a self organising non-linear controller \nAbstract: The aim of this study is to present the \"Incremental Polynomial Model-Controller Network\" (IPMCN). This network is composed of controllers each one attached to a model used for its indirect design. At each instant the controller connected to the model performing the best is selected. An automatic network construction algorithm is discribed in this study. It makes the IPMCN a self-organising non-linear controller. However the emphasis is on the polynomial controllers that are the building blocks of the IPMCN. From an analysis of the properties of polynomial functions for system modelling it is shown that multiple low order odd polynomials are very suitable to model non-linear systems. A closed loop reference model method to design a controller from a odd polynomial model is then described. The properties of the IPMCN are illustrated according to a second order system having both system states y and _y involving non-linear behaviour. It shows that as a component of a network or alone, a low order odd polynomial controller performs much better than a linear adaptive controller. Moreover, the number of controllers is significantly reduced with the increase of the polynomial order of the controllers and an improvement of the control performance is proportional to the decrease of the number of controllers. In addition, the clustering free approach, applied for the selection of the controllers, makes the IPMCN insensitive to the number of quantities involving nonlinearity in the system. The use of local controllers capable of handling systems with complex dynamics will make this scheme one of the most effective approaches for the control of non-linear systems.",
"neighbors": [
611,
745,
2538
],
"mask": "Train"
},
{
"node_id": 2326,
"label": 6,
"text": "Title: Pattern Theoretic Feature Extraction and Constructive Induction \nAbstract: This paper offers a perspective on features and pattern finding in general. This perspective is based on a robust complexity measure called Decomposed Function Car-dinality. A function decomposition algorithm for minimizing this complexity measure and finding the associated features is outlined. Results from experiments with this algorithm are also summarized.",
"neighbors": [
317,
508,
2324
],
"mask": "Train"
},
{
"node_id": 2327,
"label": 6,
"text": "Title: A Comparison of New and Old Algorithms for A Mixture Estimation Problem \nAbstract: We investigate the problem of estimating the proportion vector which maximizes the likelihood of a given sample for a mixture of given densities. We adapt a framework developed for supervised learning and give simple derivations for many of the standard iterative algorithms like gradient projection and EM. In this framework, the distance between the new and old proportion vectors is used as a penalty term. The square distance leads to the gradient projection update, and the relative entropy to a new update which we call the exponentiated gradient update (EG ). Curiously, when a second order Taylor expansion of the relative entropy is used, we arrive at an update EM which, for = 1, gives the usual EM update. Experimentally, both the EM -update and the EG -update for > 1 outperform the EM algorithm and its variants. We also prove a polynomial bound on the rate of convergence of the EG algorithm. ",
"neighbors": [
76,
1924,
2015,
2034
],
"mask": "Train"
},
{
"node_id": 2328,
"label": 4,
"text": "Title: A Comparison of Direct and Model-Based Reinforcement Learning \nAbstract: This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We find that in this task model-based approaches support reinforcement learning from smaller amounts of training data and efficient handling of changing goals. ",
"neighbors": [
1782
],
"mask": "Validation"
},
{
"node_id": 2329,
"label": 6,
"text": "Title: Programming Research Group A LEARNABILITY MODEL FOR UNIVERSAL REPRESENTATIONS \nAbstract: This paper compares direct reinforcement learning (no explicit model) and model-based reinforcement learning on a simple task: pendulum swing up. We find that in this task model-based approaches support reinforcement learning from smaller amounts of training data and efficient handling of changing goals. ",
"neighbors": [
672,
1290,
1428,
1918,
2080,
2589
],
"mask": "Train"
},
{
"node_id": 2330,
"label": 1,
"text": "Title: A comparison of the fixed and floating building block representation in the genetic algorithm \nAbstract: This article compares the traditional, fixed problem representation style of a genetic algorithm (GA) with a new floating representation in which the building blocks of a problem are not fixed at specific locations on the individuals of the population. In addition, the effects of non-coding segments on both of these representations is studied. Non-coding segments are a computational model of non-coding DNA and floating building blocks mimic the location independence of genes. The fact that these structures are prevalent in natural genetic systems suggests that they may provide some advantages to the evolutionary process. Our results show that there is a significant difference in how GAs solve a problem in the fixed and floating representations. GAs are able to maintain a more diverse population with the floating representation. The combination of non-coding segments and floating building blocks appears to encourage a GA to take advantage of its parallel search and recombination abilities. ",
"neighbors": [
1631,
1696,
1769,
2598,
2604
],
"mask": "Validation"
},
{
"node_id": 2331,
"label": 6,
"text": "Title: Improved Hoeffding-Style Performance Guarantees for Accurate Classifiers \nAbstract: We extend Hoeffding bounds to develop superior probabilistic performance guarantees for accurate classifiers. The original Hoeffding bounds on classifier accuracy depend on the accuracy itself as a parameter. Since the accuracy is not known a priori, the parameter value that gives the weakest bounds is used. We present a method that loosely bounds the accuracy using the old method and uses the loose bound as an improved parameter value for tighter bounds. We show how to use the bounds in practice, and we generalize the bounds for individual classifiers to form uniform bounds over multiple classifiers. ",
"neighbors": [
571,
2694
],
"mask": "Train"
},
{
"node_id": 2332,
"label": 1,
"text": "Title: Improved Hoeffding-Style Performance Guarantees for Accurate Classifiers \nAbstract: Evolving Cooperative Groups: Preliminary Results Abstract Multi-agent systems require coordination of sources with distinct expertise to perform complex tasks effectively. In this paper, we use co-evolutionary approach using genetic algorithms to evolve multiple individuals who can effectively cooperate to solve a common problem. We concurrently run a GA for each individual in the group. In this paper, we experiment with a room painting domain which requires cooperation of two agents. We have used two mechanisms for evaluating an individual in one population: (a) pair it randomly with members from the other population, (b) pair it with members of the other population in a shared memory containing the best pairs found so far. Both the approaches are successful in generating optimal behavior patterns. However, our preliminary results exhibit a slight edge for the shared memory approach.",
"neighbors": [
1117,
2334
],
"mask": "Validation"
},
{
"node_id": 2333,
"label": 6,
"text": "Title: Recursive Automatic Algorithm Selection for Inductive Learning \nAbstract: COINS Technical Report 94-61 August 1994 ",
"neighbors": [
102,
318,
2135,
2583
],
"mask": "Train"
},
{
"node_id": 2334,
"label": 1,
"text": "Title: New Methods for Competitive Coevolution \nAbstract: We consider \"competitive coevolution,\" in which fitness is based on direct competition among individuals selected from two independently evolving populations of \"hosts\" and \"parasites.\" Competitive coevolution can lead to an \"arms race,\" in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. \"Competitive fitness sharing\" changes the way fitness is measured, \"shared sampling\" provides a method for selecting a strong, diverse set of parasites, and the \"hall of fame\" encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods, and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, and drift. ",
"neighbors": [
54,
209,
602,
1588,
1790,
1832,
1836,
1917,
2103,
2332,
2353
],
"mask": "Test"
},
{
"node_id": 2335,
"label": 2,
"text": "Title: Function Approximation with Neural Networks and Local Methods: Bias, Variance and Smoothness \nAbstract: We review the use of global and local methods for estimating a function mapping R m ) R n from samples of the function containing noise. The relationship between the methods is examined and an empirical comparison is performed using the multi-layer perceptron (MLP) global neural network model, the single nearest-neighbour model, a linear local approximation (LA) model, and the following commonly used datasets: the Mackey-Glass chaotic time series, the Sunspot time series, British English Vowel data, TIMIT speech phonemes, building energy prediction data, and the sonar dataset. We find that the simple local approximation models often outperform the MLP. No criterion such as classification/prediction, size of the training set, dimensionality of the training set, etc. can be used to distinguish whether the MLP or the local approximation method will be superior. However, we find that if we consider histograms of the k-NN density estimates for the training datasets then we can choose the best performing method a priori by selecting local approximation when the spread of the density histogram is large and choosing the MLP otherwise. This result correlates with the hypothesis that the global MLP model is less appropriate when the characteristics of the function to be approximated varies throughout the input space. We discuss the results, the smoothness assumption often made in function approximation, and the bias/variance dilemma. ",
"neighbors": [
74,
2378
],
"mask": "Test"
},
{
"node_id": 2336,
"label": 2,
"text": "Title: A Fast Kohonen Net Implementation for \nAbstract: We present an implementation of Kohonen Self-Organizing Feature Maps for the Spert-II vector microprocessor system. The implementation supports arbitrary neural map topologies and arbitrary neighborhood functions. For small networks, as used in real-world tasks, a single Spert-II board is measured to run Kohonen net classification at up to 208 million connections per second (MCPS). On a speech coding benchmark task, Spert-II performs on-line Kohonen net training at over 100 million connection updates per second (MCUPS). This represents almost a factor of 10 improvement compared to previously reported implementations. The asymptotic peak speed of the system is 213 MCPS and 213 MCUPS.",
"neighbors": [
745,
2579
],
"mask": "Train"
},
{
"node_id": 2337,
"label": 2,
"text": "Title: Learning to segment images using dynamic feature binding an isolated object in an image is\nAbstract: Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to which object they belong. Current computational systems that perform this operation are based on predefined grouping heuristics. We describe a system called MAGIC that learns how to group features based on a set of presegmented examples. In many cases, MAGIC discovers grouping heuristics similar to those previously proposed, but it also has the capability of finding nonintuitive structural regularities in images. Grouping is performed by a relaxation network that attempts to dynamically bind related features. Features transmit a complex-valued signal (amplitude and phase) to one another; binding can thus be represented by phase locking related features. MAGIC's training procedure is a generalization of recurrent back propagation to complex-valued units. ",
"neighbors": [
2218,
2606,
2610
],
"mask": "Train"
},
{
"node_id": 2338,
"label": 0,
"text": "Title: Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier \nAbstract: The simple Bayesian classifier (SBC) is commonly thought to assume that attributes are independent given the class, but this is apparently contradicted by the surprisingly good performance it exhibits in many domains that contain clear attribute dependences. No explanation for this has been proposed so far. In this paper we show that the SBC does not in fact assume attribute independence, and can be optimal even when this assumption is violated by a wide margin. The key to this finding lies in the distinction between classification and probability estimation: correct classification can be achieved even when the probability estimates used contain large errors. We show that the previously-assumed region of optimality of the SBC is a second-order infinitesimal fraction of the actual one. This is followed by the derivation of several necessary and several sufficient conditions for the optimality of the SBC. For example, the SBC is optimal for learning arbitrary conjunctions and disjunctions, even though they violate the independence assumption. The paper also reports empirical evidence of the SBC's competitive performance in domains containing substantial degrees of attribute dependence. ",
"neighbors": [
1024,
1478,
1986,
2127,
2443,
2677
],
"mask": "Test"
},
{
"node_id": 2339,
"label": 5,
"text": "Title: An intelligent search method using Inductive Logic Programming \nAbstract: We propose a method to use Inductive Logic Programming to give heuristic functions for searching goals to solve problems. The method takes solutions of a problem or a history of search and a set of background knowledge on the problem. In a large class of problems, a problem is described as a set of states and a set of operators, and is solved by finding a series of operators. A solution, a series of operators that brings an initial state to a final state, is transformed into positive and negative examples of a relation \"better-choice\", which describes that an operator is better than others in a state. We also give a way to use the \"better-choice\" relation as a heuristic function. The method can use any logic program as background knowledge to induce heuristics, and induced heuristics has high readability. The paper inspects the method by applying to a puzzle.",
"neighbors": [
344,
675,
2126
],
"mask": "Test"
},
{
"node_id": 2340,
"label": 2,
"text": "Title: Generalization to local remappings of the visuomotor coordinate transformation \nAbstract: We propose a method to use Inductive Logic Programming to give heuristic functions for searching goals to solve problems. The method takes solutions of a problem or a history of search and a set of background knowledge on the problem. In a large class of problems, a problem is described as a set of states and a set of operators, and is solved by finding a series of operators. A solution, a series of operators that brings an initial state to a final state, is transformed into positive and negative examples of a relation \"better-choice\", which describes that an operator is better than others in a state. We also give a way to use the \"better-choice\" relation as a heuristic function. The method can use any logic program as background knowledge to induce heuristics, and induced heuristics has high readability. The paper inspects the method by applying to a puzzle.",
"neighbors": [
611,
1935
],
"mask": "Train"
},
{
"node_id": 2341,
"label": 3,
"text": "Title: Dynamic Belief Networks for Discrete Monitoring \nAbstract: We describe the development of a monitoring system which uses sensor observation data about discrete events to construct dynamically a probabilistic model of the world. This model is a Bayesian network incorporating temporal aspects, which we call a Dynamic Belief Network; it is used to reason under uncertainty about both the causes and consequences of the events being monitored. The basic dynamic construction of the network is data-driven. However the model construction process combines sensor data about events with externally provided information about agents' behaviour, and knowledge already contained within the model, to control the size and complexity of the network. This means that both the network structure within a time interval, and the amount of history and detail maintained, can vary over time. We illustrate the system with the example domain of monitoring robot vehicles and people in a restricted dynamic environment using light-beam sensor data. In addition to presenting a generic network structure for monitoring domains, we describe the use of more complex network structures which address two specific monitoring problems, sensor validation and the Data Association Problem.",
"neighbors": [
559,
623,
788,
1172,
1757,
1842,
2425
],
"mask": "Train"
},
{
"node_id": 2342,
"label": 6,
"text": "Title: The Power of Decision Tables \nAbstract: We evaluate the power of decision tables as a hypothesis space for supervised learning algorithms. Decision tables are one of the simplest hypothesis spaces possible, and usually they are easy to understand. Experimental results show that on artificial and real-world domains containing only discrete features, IDTM, an algorithm inducing decision tables, can sometimes outperform state-of-the-art algorithms such as C4.5. Surprisingly, performance is quite good on some datasets with continuous features, indicating that many datasets used in machine learning either do not require these features, or that these features have few values. We also describe an incremental method for performing cross-validation that is applicable to incremental learning algorithms including IDTM. Using incremental cross-validation, it is possible to cross-validate a given dataset and IDTM in time that is linear in the number of instances, the number of features, and the number of label values. The time for incremental cross-validation is independent of the number of folds chosen, hence leave-one-out cross-validation and ten-fold cross-validation take the same time. ",
"neighbors": [
381,
497,
1270,
2151,
2197,
2577,
2593
],
"mask": "Validation"
},
{
"node_id": 2343,
"label": 3,
"text": "Title: Feature Subset Selection Using the Wrapper Method: Overfitting and Dynamic Search Space Topology \nAbstract: In the wrapper approach to feature subset selection, a search for an optimal set of features is made using the induction algorithm as a black box. The estimated future performance of the algorithm is the heuristic guiding the search. Statistical methods for feature subset selection including forward selection, backward elimination, and their stepwise variants can be viewed as simple hill-climbing techniques in the space of feature subsets. We utilize best-first search to find a good feature subset and discuss overfitting problems that may be associated with searching too many feature subsets. We introduce compound operators that dynamically change the topology of the search space to better utilize the information available from the evaluation of feature subsets. We show that compound operators unify previous approaches that deal with relevant and irrelevant features. The improved feature subset selection yields significant improvements for real-world datasets when using the ID3 and the Naive-Bayes induction algorithms. ",
"neighbors": [
208,
430,
1337,
1618,
2443
],
"mask": "Train"
},
{
"node_id": 2344,
"label": 2,
"text": "Title: A Neural Network Based Head Tracking System \nAbstract: We have constructed an inexpensive, video-based, motorized tracking system that learns to track a head. It uses real time graphical user inputs or an auxiliary infrared detector as supervisory signals to train a convolutional neural network. The inputs to the neural network consist of normalized luminance and chrominance images and motion information from frame differences. Subsampled images are also used to provide scale invariance. During the online training phase, the neural network rapidly adjusts the input weights depending upon the reliability of the different channels in the surrounding environment. This quick adaptation allows the system to robustly track a head even when other objects are moving within a cluttered background.",
"neighbors": [
2707
],
"mask": "Test"
},
{
"node_id": 2345,
"label": 6,
"text": "Title: The Hardness of Problems on Thin Colored Graphs \nAbstract: In this paper, we consider the complexity of a number of combinatorial problems; namely, Intervalizing Colored Graphs (DNA physical mapping), Triangulating Colored Graphs (perfect phylogeny), (Directed) (Modified) Colored Cutwidth, Feasible Register Assignment and Module Allocation for graphs of bounded treewidth. Each of these problems has as a characteristic a uniform upper bound on the tree or path width of the graphs in \"yes\"-instances. For all of these problems with the exceptions of feasible register assignment and module allocation, a vertex or edge coloring is given as part of the input. Our main results are that the parameterized variant of each of the considered problems is hard for the complexity classes W [t] for all t 2 Z + . We also show that Intervalizing Colored Graphs, Triangulating Colored Graphs, and ",
"neighbors": [
2005,
2418,
2511
],
"mask": "Train"
},
{
"node_id": 2346,
"label": 6,
"text": "Title: Parity: The Problem that Won't Go Away \nAbstract: It is well-known that certain learning methods (e.g., the perceptron learning algorithm) cannot acquire complete, parity mappings. But it is often overlooked that state-of-the-art learning methods such as C4.5 and backpropagation cannot generalise from incomplete parity mappings. The failure of such methods to generalise on parity mappings may be sometimes dismissed on the grounds that it is `impossible' to generalise over such mappings, or that parity problems are mathematical constructs having little to do with real-world learning. However, this paper argues that such a dismissal is unwarranted. It shows that parity mappings are hard to learn because they are statistically neutral and that statistical neutrality is a property which we should expect to encounter frequently in real-world contexts. It also shows that the generalization failure on parity mappings occurs even when large, minimally incomplete mappings are used for training purposes, i.e., when claims about the impossibility of generalization are particularly suspect.",
"neighbors": [
397,
1301,
1595,
1967
],
"mask": "Train"
},
{
"node_id": 2347,
"label": 1,
"text": "Title: Chapter 4 Empirical comparison of stochastic algorithms Empirical comparison of stochastic algorithms in a graph\nAbstract: There are several stochastic methods that can be used for solving NP-hard optimization problems approximatively. Examples of such algorithms include (in order of increasing computational complexity) stochastic greedy search methods, simulated annealing, and genetic algorithms. We investigate which of these methods is likely to give best performance in practice, with respect to the computational effort each requires. We study this problem empirically by selecting a set of stochastic algorithms with varying computational complexity, and by experimentally evaluating for each method how the goodness of the results achieved improves with increasing computational time. For the evaluation, we use a graph optimization problem, which is closely related to several real-world practical problems. To get a wider perspective of the goodness of the achieved results, the stochastic methods are also compared against special-case greedy heuristics. This investigation suggests that although genetic algorithms can provide good results, simpler stochastic algorithms can achieve similar performance more quickly. ",
"neighbors": [
1303,
2254,
2265
],
"mask": "Train"
},
{
"node_id": 2348,
"label": 3,
"text": "Title: Sequential Importance Sampling for Nonparametric Bayes Models: The Next Generation Running Title: SIS for Nonparametric Bayes \nAbstract: There are two generations of Gibbs sampling methods for semi-parametric models involving the Dirichlet process. The first generation suffered from a severe drawback; namely that the locations of the clusters, or groups of parameters, could essentially become fixed, moving only rarely. Two strategies that have been proposed to create the second generation of Gibbs samplers are integration and appending a second stage to the Gibbs sampler wherein the cluster locations are moved. We show that these same strategies are easily implemented for the sequential importance sampler, and that the first strategy dramatically improves results. As in the case of Gibbs sampling, these strategies are applicable to a much wider class of models. They are shown to provide more uniform importance sampling weights and lead to additional Rao-Blackwellization of estimators. Steve MacEachern is Associate Professor, Department of Statistics, Ohio State University, Merlise Clyde is Assistant Professor, Institute of Statistics and Decision Sciences, Duke University, and Jun Liu is Assistant Professor, Department of Statistics, Stanford University. The work of the second author was supported in part by the National Science Foundation grants DMS-9305699 and DMS-9626135, and that of the last author by the National Science Foundation grants DMS-9406044, DMS-9501570, and the Terman Fellowship. ",
"neighbors": [
1783,
2682
],
"mask": "Train"
},
{
"node_id": 2349,
"label": 2,
"text": "Title: No Free Lunch for Early Stopping \nAbstract: We show that, with a uniform prior on hypothesis functions having the same training error, early stopping at some fixed training error above the training error minimum results in an increase in the expected generalization error. We also show that regularization methods are equivalent to early stopping with certain non-uniform prior on the early stopping solutions.",
"neighbors": [
1929
],
"mask": "Train"
},
{
"node_id": 2350,
"label": 6,
"text": "Title: Exact Learning of -DNF Formulas with Malicious Membership Queries \nAbstract: We show that, with a uniform prior on hypothesis functions having the same training error, early stopping at some fixed training error above the training error minimum results in an increase in the expected generalization error. We also show that regularization methods are equivalent to early stopping with certain non-uniform prior on the early stopping solutions.",
"neighbors": [
1004,
2168
],
"mask": "Train"
},
{
"node_id": 2351,
"label": 2,
"text": "Title: ERROR STABILITY PROPERTIES OF GENERALIZED GRADIENT-TYPE ALGORITHMS \nAbstract: We present a unified framework for convergence analysis of the generalized subgradient-type algorithms in the presence of perturbations. One of the principal novel features of our analysis is that perturbations need not tend to zero in the limit. It is established that the iterates of the algorithms are attracted, in a certain sense, to an \"-stationary set of the problem, where \" depends on the magnitude of perturbations. Characterization of those attraction sets is given in the general (nonsmooth and nonconvex) case. The results are further strengthened for convex, weakly sharp and strongly convex problems. Our analysis extends and unifies previously known results on convergence and stability properties of gradient and subgradient methods, including their incremental, parallel and \"heavy ball\" modifications. fl The first author is supported in part by CNPq grant 300734/95-6. Research of the second author was supported in part by the International Science Foundation Grant NBY000, the International Science Foundation and Russian Goverment Grant NBY300 and the Russian Foundation for Fundamental Research Grant N 95-01-01448. y Instituto de Matematica Pura e Aplicada, Estrada Dona Castorina 110, Jardim Bot^anico, Rio de Janeiro, RJ, CEP 22460-320, Brazil. Email : solodov@impa.br. z Operations Research Department, Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow, Russia, 119899. ",
"neighbors": [
1772
],
"mask": "Train"
},
{
"node_id": 2352,
"label": 2,
"text": "Title: Power System Security Margin Prediction Using Radial Basis Function Networks \nAbstract: Dr. McCalley's research is partially supported through grants from National Science Foundation and Pacific Gas and Electric Company. Dr. Honavar's research is partially supported through grants from National Science Foundation and the John Deere Foundation. This paper will appear in: Proceedings of the 29th Annual North American Power Symposium, Oct. 13-14. 1997, Laramie, Wyoming. ",
"neighbors": [
611,
2055
],
"mask": "Validation"
},
{
"node_id": 2353,
"label": 1,
"text": "Title: Dynamics of Co-evolutionary Learning \nAbstract: Co-evolutionary learning, which involves the embedding of adaptive learning agents in a fitness environment which dynamically responds to their progress, is a potential solution for many technological chicken and egg problems, and is at the heart of several recent and surprising successes, such as Sim's artificial robot and Tesauro's backgammon player. We recently solved the two spirals problem, a difficult neural network benchmark classification problem, using the genetic programming primitives set up by [Koza, 1992]. Instead of using absolute fitness, we use a relative fitness [Angeline & Pollack, 1993] based on a competition for coverage of the data set. As the population reproduces, the fitness function driving the selection changes, and subproblem niches are opened, rather than crowded out. The solutions found by our method have a symbiotic structure which suggests that by holding niches open, crossover is better able to discover modular build ing blocks.",
"neighbors": [
415,
2334
],
"mask": "Test"
},
{
"node_id": 2354,
"label": 6,
"text": "Title: The Power of Team Exploration: Two Robots Can Learn Unlabeled Directed Graphs \nAbstract: We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots, which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by actively wandering through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk on a graph converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs. ",
"neighbors": [
400,
453,
555,
556,
2360,
2455
],
"mask": "Train"
},
{
"node_id": 2355,
"label": 2,
"text": "Title: CONVIS: Action Oriented Control and Visualization of Neural Networks Introduction and Technical Description \nAbstract: We show that two cooperating robots can learn exactly any strongly-connected directed graph with n indistinguishable nodes in expected time polynomial in n. We introduce a new type of homing sequence for two robots, which helps the robots recognize certain previously-seen nodes. We then present an algorithm in which the robots learn the graph and the homing sequence simultaneously by actively wandering through the graph. Unlike most previous learning results using homing sequences, our algorithm does not require a teacher to provide counterexamples. Furthermore, the algorithm can use efficiently any additional information available that distinguishes nodes. We also present an algorithm in which the robots learn by taking random walks. The rate at which a random walk on a graph converges to the stationary distribution is characterized by the conductance of the graph. Our random-walk algorithm learns in expected time polynomial in n and in the inverse of the conductance and is more efficient than the homing-sequence algorithm for high-conductance graphs. ",
"neighbors": [
217,
763,
1879
],
"mask": "Train"
},
{
"node_id": 2356,
"label": 6,
"text": "Title: Learning With Unreliable Boundary Queries \nAbstract: We introduce a model for learning from examples and membership queries in situations where the boundary between positive and negative examples is somewhat ill-defined. In our model, queries near the boundary of a target concept may receive incorrect or \"don't care\" responses, and the distribution of examples has zero probability mass on the boundary region. The motivation behind our model is that in many cases the boundary between positive and negative examples is complicated or \"fuzzy.\" However, one may still hope to learn successfully, because the typical examples that one sees do not come from that region. We present several positive results in this new model. We show how to learn the intersection of two arbitrary halfspaces when membership queries near the boundary may be answered incorrectly. Our algorithm is an extension of an algorithm of Baum [7, 6] that learns the intersection of two halfspaces whose bounding planes pass through the origin in the PAC-with-membership-queries model. We also describe algorithms for learning several subclasses of monotone DNF formulas.",
"neighbors": [
1105,
1705,
2246
],
"mask": "Train"
},
{
"node_id": 2357,
"label": 2,
"text": "Keyword: Running Title: Local Multivariate Binary Processors \nAbstract: We thank Sue Becker, Peter Hancock and Darragh Smyth for helpful comments on this work. The work of Dario Floreano and Bill Phillips was supported by a Network Grant from the Human Capital and Mobility Programme of the European Community. ",
"neighbors": [
1656,
2499
],
"mask": "Train"
},
{
"node_id": 2358,
"label": 2,
"text": "Title: Physiological Gain Leads to High ISI Variability in a Simple Model of a Cortical Regular\nAbstract: To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky and Koch, 1993), it is critical to examine the dynamics of their neuronal integration as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick et al., 1985). After setting RC parameters, the post-spike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency vs. injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory \"1= p N arguments hold and spiking is regular; after the \"memory\" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state, and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady state behavior is predominant and ISI's are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs. ",
"neighbors": [
2503
],
"mask": "Train"
},
{
"node_id": 2359,
"label": 0,
"text": "Title: Computer-Supported Argumentation for Cooperative Design on the World-Wide Web \nAbstract: This paper describes an argumentation system for cooperative design applications on the Web. The system provides experts involved in such procedures means of expressing and weighing their individual arguments and preferences, in order to argue for or against the selection of a certain choice. It supports defeasible and qualitative reasoning in the presence of ill-structured information. Argumentation is performed through a set of discourse acts which call a variety of procedures for the propagation of information in the corresponding discussion graph. The paper also reports on the integration of Case Based Reasoning techniques, used to resolve current design issues by considering previous similar situations, and the specitcation of similarity measures between the various argumentation items, the aim being to estimate the variations among opinions of the designers involved in cooperative design. ",
"neighbors": [
66,
2520
],
"mask": "Train"
},
{
"node_id": 2360,
"label": 6,
"text": "Title: Efficient Learning of Typical Finite Automata from Random Walks (Extended Abstract) \nAbstract: This paper describes new and efficient algorithms for learning deterministic finite automata. Our approach is primarily distinguished by two features: (1) the adoption of an average-case setting to model the \"typical\" labeling of a finite automaton, while retaining a worst-case model for the underlying graph of the automaton, along with (2) a learning model in which the learner is not provided with the means to experiment with the machine, but rather must learn solely by observing the automaton's output behavior on a random input sequence. The main contribution of this paper is in presenting the first efficient algorithms for learning non-trivial classes of automata in an entirely passive learning model. We adopt an on-line learning model in which the learner is asked to predict the output of the next state, given the next symbol of the random input sequence; the goal of the learner is to make as few prediction mistakes as possible. Assuming the learner has a means of resetting the target machine to a fixed start state, we first present an efficient algorithm that makes an expected polynomial number of mistakes in this model. Next, we show how this first algorithm can be used as a subroutine by a second algorithm that also makes a polynomial number of mistakes even in the absence of a reset. Along the way, we prove a number of combinatorial results for randomly labeled automata. We also show that the labeling of the states and the bits of the input sequence need not be truly random, but merely semi-random. Finally, we discuss an extension of our results to a model in which automata are used to represent distributions over binary strings. ",
"neighbors": [
400,
556,
574,
672,
1006,
1386,
2004,
2040,
2273,
2354
],
"mask": "Train"
},
{
"node_id": 2361,
"label": 1,
"text": "Title: Program Search with a Hierarchical Variable Length Representation: Genetic Programming, Simulated Annealing and Hill Climbing \nAbstract: This paper presents a comparison of Genetic Programming(GP) with Simulated Annealing (SA) and Stochastic Iterated Hill Climbing (SIHC) based on a suite of program discovery problems which have been previously tackled only with GP. All three search algorithms employ the hierarchical variable length representation for programs brought into recent prominence with the GP paradigm [8]. We feel it is not intuitively obvious that mutation-based adaptive search can handle program discovery yet, to date, for each GP problem we have tried, SA or SIHC also work.",
"neighbors": [
139,
163,
2175,
2216
],
"mask": "Train"
},
{
"node_id": 2362,
"label": 3,
"text": "Title: Outperforming the Gibbs sampler empirical estimator for nearest neighbor random fields \nAbstract: Given a Markov chain sampling scheme, does the standard empirical estimator make best use of the data? We show that this is not so and construct better estimators. We restrict attention to nearest neighbor random fields and to Gibbs samplers with deterministic sweep, but our approach applies to any sampler that uses reversible variable-at-a-time updating with deterministic sweep. The structure of the transition distribution of the sampler is exploited to construct further empirical estimators that are combined with the standard empirical estimator to reduce asymptotic variance. The extra computational cost is negligible. When the random field is spatially homogeneous, symmetrizations of our estimator lead to further variance reduction. The performance of the estimators is evaluated in a simulation study of the Ising model.",
"neighbors": [
1870,
2510
],
"mask": "Test"
},
{
"node_id": 2363,
"label": 1,
"text": "Title: Modeling the Evolution of Motivation \nAbstract: In order for learning to improve the adaptiveness of an animal's behavior and thus direct evolution in the way Baldwin suggested, the learning mechanism must incorporate an innate evaluation of how the animal's actions influence its reproductive fitness. For example, many circumstances that damage an animal, or otherwise reduce its fitness are painful and tend to be avoided. We refer to the mechanism by which an animal evaluates the fitness consequences of its actions as a \"motivation system,\" and argue that such a system must evolve along with the behaviors it evaluates. We describe simulations of the evolution of populations of agents instantiating a number of different architectures for generating action and learning, in worlds of differing complexity. We find that in some cases, members of the populations evolve motivation systems that are accurate enough to direct learning so as to increase the fitness of the actions the agents perform. Furthermore, the motivation systems tend to incorporate systematic distortions in their representations of the worlds they inhabit; these distortions can increase the adaptiveness of the behavior generated. ",
"neighbors": [
129,
163,
1719,
1969,
2165
],
"mask": "Test"
},
{
"node_id": 2364,
"label": 0,
"text": "Title: Automatic Phonetic Transcription of Words Based On Sparse Data \nAbstract: The relation between the orthography and the phonology of a language has traditionally been modelled by hand-crafted rule sets. Machine-learning (ML) approaches offer a means to gather this knowledge automatically. Problems arise when the training material is sparse. Generalising from sparse data is a well-known problem for many ML algorithms. We present experiments in which connectionist, instance-based, and decision-tree learning algorithms are applied to a small corpus of Scottish Gaelic. instance-based learning in the ib1-ig algorithm yields the best generalisation performance, and that most algorithms tested perform tolerably well. Given the availability of a lexicon, even if it is sparse, ML is a valuable and efficient tool for automatic phonetic transcription of written text.",
"neighbors": [
862,
1644,
1812
],
"mask": "Train"
},
{
"node_id": 2365,
"label": 5,
"text": "Title: A Reduced Multipipeline Machine Description that Preserves Scheduling Constraints \nAbstract: High performance compilers increasingly rely on accurate modeling of the machine resources to efficiently exploit the instruction level parallelism of an application. In this paper, we propose a reduced machine description that results in faster detection of resource contentions while preserving the scheduling constraints present in the original machine description. The proposed approach reduces a machine description in an automated, error-free, and efficient fashion. Moreover, it fully supports schedulers that backtrack and process operations in arbitrary order. Reduced descriptions for the DEC Alpha 21064, MIPS R3000/R3010, and Cydra 5 result in 4 to 7 times faster detection of resource contentions and require 22 to 90% of the memory storage used by the original machine descriptions. ",
"neighbors": [
2189,
2190,
2668
],
"mask": "Train"
},
{
"node_id": 2366,
"label": 3,
"text": "Title: Choice of Thresholds for Wavelet Shrinkage Estimate of the Spectrum fff j g are level-dependent\nAbstract: We study the problem of estimating the log spectrum of a stationary Gaussian time series by thresholding the empirical wavelet coefficients. We propose the use of thresholds t j;n depending on sample size n, wavelet basis and resolution level j. At fine resolution levels (j = 1; 2; :::), we propose The purpose of this thresholding level is to make the reconstructed log-spectrum as nearly noise-free as possible. In addition to being pleasant from a visual point of view, the noise-free character leads to attractive theoretical properties over a wide range of smoothness assumptions. Previous proposals set much smaller thresholds and did not enjoy these properties. t j;n = ff j log n;",
"neighbors": [
1910,
2081,
2458,
2506,
2575
],
"mask": "Train"
},
{
"node_id": 2367,
"label": 6,
"text": "Title: Data Mining using MLC A Machine Learning Library in C http://www.sgi.com/Technology/mlc \nAbstract: Data mining algorithms including machine learning, statistical analysis, and pattern recognition techniques can greatly improve our understanding of data warehouses that are now becoming more widespread. In this paper, we focus on classification algorithms and review the need for multiple classification algorithms. We describe a system called MLC ++ , which was designed to help choose the appropriate classification algorithm for a given dataset by making it easy to compare the utility of different algorithms on a specific dataset of interest. MLC ++ not only provides a workbench for such comparisons, but also provides a library of C ++ classes to aid in the development of new algorithms, especially hybrid algorithms and multi-strategy algorithms. Such algorithms are generally hard to code from scratch. We discuss design issues, interfaces to other programs, and visualization of the resulting classifiers. ",
"neighbors": [
1833,
2577
],
"mask": "Train"
},
{
"node_id": 2368,
"label": 4,
"text": "Title: Reinforcement Learning with Modular Neural Networks for Control \nAbstract: Reinforcement learning methods can be applied to control problems with the objective of optimizing the value of a function over time. They have been used to train single neural networks that learn solutions to whole tasks. Jacobs and Jordan [5] have shown that a set of expert networks combined via a gating network can more quickly learn tasks that can be decomposed. Even the decomposition can be learned. Inspired by Boyan's work of modular neural networks for learning with temporal-difference methods [4], we modify the reinforcement learning algorithm called Q-Learning to train a modular neural network to solve a control problem. The resulting algorithm is demonstrated on the classical pole-balancing problem. The advantage of such a method is that it makes it possible to deal with complex dynamic control problem effectively by using task decomposition and competitive learning. ",
"neighbors": [
85,
465,
2642
],
"mask": "Train"
},
{
"node_id": 2369,
"label": 0,
"text": "Title: Case-Based Sonogram Classification \nAbstract: This report replicates and extends results reported by Naval Air Warfare Center (NAWC) personnel on the automatic classification of sonar images. They used novel case-based reasoning systems in their empirical studies, but did not obtain comparative analyses using standard classification algorithms. Therefore, the quality of the NAWC results were unknown. We replicated the NAWC studies and also tested several other classifiers (i.e., both case-based and otherwise) from the machine learning literature. These comparisons and their ramifications are detailed in this paper. Next, we investigated Fala and Walker's two suggestions for future work (i.e., on combining their similarity functions and on an alternative case representation). Finally, we describe several ways to incorporate additional domain-specific knowledge when applying case-based classifiers to similar tasks. ",
"neighbors": [
256,
426,
2607
],
"mask": "Validation"
},
{
"node_id": 2370,
"label": 2,
"text": "Title: CONTROL-LYAPUNOV FUNCTIONS FOR TIME-VARYING SET STABILIZATION \nAbstract: This paper shows that, for time varying systems, global asymptotic controllability to a given closed subset of the state space is equivalent to the existence of a continuous control-Lyapunov function with respect to the set. ",
"neighbors": [
2321
],
"mask": "Train"
},
{
"node_id": 2371,
"label": 0,
"text": "Title: Learning Adaptation Strategies by Introspective Reasoning about Memory Search \nAbstract: In case-based reasoning systems, the case adaptation process is traditionally controlled by static libraries of hand-coded adaptation rules. This paper proposes a method for learning adaptation knowledge in the form of adaptation strategies of the type developed and hand-coded by Kass [90] . Adaptation strategies differ from standard adaptation rules in that they encode general memory search procedures for finding the information needed during case adaptation; this paper focuses on the issues involved in learning memory search procedures to form the basis of new adaptation strategies. It proposes a method that starts with a small library of abstract adaptation rules and uses introspective reasoning about the system's memory organization to generate the memory search plans needed to apply those rules. The search plans are then packaged with the original abstract rules to form new adaptation strategies for future use. This process allows a CBR system not only to learn about its domain, by storing the results of case adaptation, but also to learn how to apply the cases in its memory more effectively. ",
"neighbors": [
583,
1126,
2489
],
"mask": "Test"
},
{
"node_id": 2372,
"label": 0,
"text": "Title: Goal-Driven Learning: Fundamental Issues (A Symposium Report) \nAbstract: In Artificial Intelligence, Psychology, and Education, a growing body of research supports the view that learning is a goal-directed process. Psychological experiments show that people with different goals process information differently; studies in education show that goals have strong effects on what students learn; and functional arguments from machine learning support the necessity of goal-based focusing of learner effort. At the Fourteenth Annual Conference of the Cognitive Science Society, a symposium brought together researchers in AI, psychology, and education to discuss goal-driven learning. This article presents the fundamental points illuminated by the symposium, placing them in the context of open questions and current research di rections in goal-driven learning. fl Appears in AI Magazine, 14(4):67-72, 1993",
"neighbors": [
1126,
2398,
2489
],
"mask": "Train"
},
{
"node_id": 2373,
"label": 2,
"text": "Title: Evaluating Neural Network Predictors by Bootstrapping \nAbstract: We present a new method, inspired by the bootstrap, whose goal it is to determine the quality and reliability of a neural network predictor. Our method leads to more robust forecasting along with a large amount of statistical information on forecast performance that we exploit. We exhibit the method in the context of multi-variate time series prediction on financial data from the New York Stock Exchange. It turns out that the variation due to different resamplings (i.e., splits between training, cross-validation, and test sets) is significantly larger than the variation due to different network conditions (such as architecture and initial weights). Furthermore, this method allows us to forecast a probability distribution, as opposed to the traditional case of just a single value at each time step. We demonstrate this on a strictly held-out test set that includes the 1987 stock market crash. We also compare the performance of the class of neural networks to identically bootstrapped linear models.",
"neighbors": [
916,
1366,
2374,
2413,
2414,
2507
],
"mask": "Train"
},
{
"node_id": 2374,
"label": 2,
"text": "Title: Predictions with Confidence Intervals (Local Error Bars) \nAbstract: We present a new method for obtaining local error bars, i.e., estimates of the confidence in the predicted value that depend on the input. We approach this problem of nonlinear regression in a maximum likelihood framework. We demonstrate our technique first on computer generated data with locally varying, normally distributed target noise. We then apply it to the laser data from the Santa Fe Time Series Competition. Finally, we extend the technique to estimate error bars for iterated predictions, and apply it to the exact competition task where it gives the best performance to date.",
"neighbors": [
916,
1366,
2373,
2507,
2513
],
"mask": "Validation"
},
{
"node_id": 2375,
"label": 3,
"text": "Title: Minimax Bayes, asymptotic minimax and sparse wavelet priors \nAbstract: Pinsker(1980) gave a precise asymptotic evaluation of the minimax mean squared error of estimation of a signal in Gaussian noise when the signal is known a priori to lie in a compact ellipsoid in Hilbert space. This `Minimax Bayes' method can be applied to a variety of global non-parametric estimation settings with parameter spaces far from ellipsoidal. For example it leads to a theory of exact asymptotic minimax estimation over norm balls in Besov and Triebel spaces using simple co-ordinatewise estimators and wavelet bases. This paper outlines some features of the method common to several applications. In particular, we derive new results on the exact asymptotic minimax risk over weak ` p balls in R n as n ! 1, and also for a class of `local' estimators on the Triebel scale. By its very nature, the method reveals the structure of asymptotically least favorable distributions. Thus we may simulate `least favorable' sample paths. We illustrate this for estimation of a signal in Gaussian white noise over norm balls in certain Besov spaces. In wavelet bases, when p < 2, the least favorable priors are sparse, and the resulting sample paths strikingly different from those observed in Pinsker's ellipsoidal setting (p = 2). Acknowledgements. I am grateful for many conversations with David Donoho and Carl Taswell, and to a referee for helpful comments. This work was supported in part by NSF grants DMS 84-51750, 9209130, and NIH PHS grant GM21215-12. ",
"neighbors": [
1910,
2416,
2661
],
"mask": "Train"
},
{
"node_id": 2376,
"label": 2,
"text": "Title: Multimodality Exploration in Training an Unsupervised Projection Pursuit Neural Network \nAbstract: Graphical inspection of multimodality is demonstrated using unsupervised lateral-inhibition neural networks. Three projection pursuit indices are compared on low dimensional simulated and real-world data: principal components [22], Legendre poly nomial [6] and projection pursuit network [16]. ",
"neighbors": [
359,
2422,
2499,
2500
],
"mask": "Train"
},
{
"node_id": 2377,
"label": 3,
"text": "Title: Adaptive proposal distribution for random walk Metropolis algorithm \nAbstract: The choice of a suitable MCMC method and further the choice of a proposal distribution is known to be crucial for the convergence of the Markov chain. However, in many cases the choice of an effective proposal distribution is difficult. As a remedy we suggest a method called Adaptive Proposal (AP). Although the stationary distribution of the AP algorithm is slightly biased, it appears to provide an efficient tool for, e.g., reasonably low dimensional problems, as typically encountered in non-linear regression problems in natural sciences. As a realistic example we include a successful application of the AP algorithm in parameter estimation for the satellite instrument 'GO-MOS'. In this paper we also present a comprehensive test procedure and systematic performance criteria for comparing Adaptive Proposal algorithm with more traditional Metropolis algorithms. ",
"neighbors": [
468,
491,
2025
],
"mask": "Train"
},
{
"node_id": 2378,
"label": 2,
"text": "Title: Priors, Stabilizers and Basis Functions: from regularization to radial, tensor and additive splines \nAbstract: We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular we had discussed how standard smoothness functionals lead to a subclass of regularization networks, the well-known Radial Basis Functions approximation schemes. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same extension that leads from Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions and some forms of Projection Pursuit Regression. We propose to use the term Generalized Regularization Networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we show the relation between activation functions of the Gaussian and sigmoidal type by considering the simple case of the kernel G(x) = jxj. In summary, different multilayer networks with one hidden layer, which we collectively call Generalized Regularization Networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are a) Radial Basis Functions that generalize into Hyper Basis Functions, b) some tensor product splines, and c) additive splines that generalize into schemes of the type of ridge approximation, hinge functions and one-hidden-layer perceptrons. This paper describes research done within the Center for Biological and Computational Learning in the Department of Brain and Cognitive Sciences and at the Artificial Intelligence Laboratory. This research is sponsored by grants from the Office of Naval Research under contracts N00014-91-J-1270 and N00014-92-J-1879; by a grant from the National Science Foundation under contract ASC-9217041 (which includes funds from DARPA provided under the HPCC program); and by a grant from the National Institutes of Health under contract NIH 2-S07-RR07047. Additional support is provided by the North Atlantic Treaty Organization, ATR Audio and Visual Perception Research Laboratories, Mitsubishi Electric Corporation, Sumitomo Metal Industries, and Siemens AG. Support for the A.I. Laboratory's artificial intelligence research is provided by ONR contract N00014-91-J-4038. Tomaso Poggio is supported by the Uncas and Helen Whitaker Chair at the Whitaker College, Massachusetts Institute of Technology. c fl Massachusetts Institute of Technology, 1993",
"neighbors": [
611,
1668,
2050,
2335
],
"mask": "Validation"
},
{
"node_id": 2379,
"label": 1,
"text": "Title: The Evolution of Size in Variable Length Representations \nAbstract: In many cases programs length's increase (known as bloat, fluff and increasing structural complexity) during artificial evolution. We show bloat is not specific to genetic programming and suggest it is inherent in search techniques with discrete variable length representations using simple static evaluation functions. We investigate the bloating characteristics of three non-population and one population based search techniques using a novel mutation operator. An artificial ant following the Santa Fe trail problem is solved by simulated annealing, hill climbing, strict hill climbing and population based search using two variants of the the new subtree based mutation operator. As predicted bloat is observed when using unbiased mutation and is absent in simulated annealing and both hill climbers when using the length neutral mutation however bloat occurs with both mutations when using a population. We conclude that there are two causes of bloat 1) search operators with no length bias tend to sample bigger trees and 2) competition within populations favours longer programs as they can usually reproduce more accurately. ",
"neighbors": [
1984,
2206
],
"mask": "Validation"
},
{
"node_id": 2380,
"label": 3,
"text": "Title: Massively Parallel Case-Based Reasoning with Probabilistic Similarity Metrics \nAbstract: We propose a probabilistic case-space metric for the case matching and case adaptation tasks. Central to our approach is a probability propagation algorithm adopted from Bayesian reasoning systems, which allows our case-based reasoning system to perform theoretically sound probabilistic reasoning. The same probability propagation mechanism actually offers a uniform solution to both the case matching and case adaptation problems. We also show how the algorithm can be implemented as a connectionist network, where efficient massively parallel case retrieval is an inherent property of the system. We argue that using this kind of an approach, the difficult problem of case indexing can be completely avoided. Pp. 144-154 in Topics in Case-Based Reasoning, edited by Stefan Wess, Klaus-Dieter Althoff and Michael M. Richter. Volume 837, Lecture ",
"neighbors": [
215,
288,
485,
1838,
2294,
2514,
2561
],
"mask": "Train"
},
{
"node_id": 2381,
"label": 2,
"text": "Title: Evolving Artificial Neural Networks using the Baldwin Effect \nAbstract: This paper describes how through simple means a genetic search towards optimal neural network architectures can be improved, both in the convergence speed as in the quality of the final result. This result can be theoretically explained with the Baldwin effect, which is implemented here not just by the learning process of the network alone, but also by changing the network architecture as part of the learning procedure. This can be seen as a combination of two different techniques, both help ing and improving on simple genetic search.",
"neighbors": [
687,
1606,
2667
],
"mask": "Train"
},
{
"node_id": 2382,
"label": 2,
"text": "Title: Trees and Splines in Survival Analysis \nAbstract: Technical Report No. 275 Revised March 30, 1995 University of Washington Department of Statistics Seattle, Washington 98195 Abstract During the past few years several nonparametric alternatives to the Cox proportional hazards model have appeared in the literature. These methods extend techniques that are well known from regression analysis to the analysis of censored survival data. In this paper we discuss methods based on (partition) trees and (polynomial) splines, analyze two datasets using both Survival Trees[1] and HARE[2], and compare the strengths and weaknesses of the two methods. One of the strengths of HARE is that its model fitting procedure has an implicit check for proportionality of the underlying hazards model. It also provides an explicit model for the conditional hazards function, which makes it very convenient to obtain graphical summaries. On the other hand, the tree-based methods automatically partition a dataset into groups of cases that are similar in survival history. Results obtained by survival trees and HARE are often complimentary. Trees and splines in survival analysis should provide the data analyst with two useful tools when analyzing survival data.",
"neighbors": [
2013
],
"mask": "Train"
},
{
"node_id": 2383,
"label": 2,
"text": "Title: Alternative Discrete-Time Operators and Their Application to Nonlinear Models \nAbstract: Technical Report No. 275 Revised March 30, 1995 University of Washington Department of Statistics Seattle, Washington 98195 Abstract During the past few years several nonparametric alternatives to the Cox proportional hazards model have appeared in the literature. These methods extend techniques that are well known from regression analysis to the analysis of censored survival data. In this paper we discuss methods based on (partition) trees and (polynomial) splines, analyze two datasets using both Survival Trees[1] and HARE[2], and compare the strengths and weaknesses of the two methods. One of the strengths of HARE is that its model fitting procedure has an implicit check for proportionality of the underlying hazards model. It also provides an explicit model for the conditional hazards function, which makes it very convenient to obtain graphical summaries. On the other hand, the tree-based methods automatically partition a dataset into groups of cases that are similar in survival history. Results obtained by survival trees and HARE are often complimentary. Trees and splines in survival analysis should provide the data analyst with two useful tools when analyzing survival data.",
"neighbors": [
427,
1820
],
"mask": "Test"
},
{
"node_id": 2384,
"label": 3,
"text": "Title: ESTIMATING FUNCTIONS OF PROBABILITY DISTRIBUTIONS FROM A FINITE SET OF SAMPLES Part II: Bayes Estimators\nAbstract: This paper is the second in a series of two on the problem of estimating a function of a probability distribution from a finite set of samples of that distribution. In the first paper 1 , the Bayes estimator for a function of a probability distribution was introduced, the optimal properties of the Bayes estimator were discussed, and the Bayes and frequency-counts estimators for the Sh-annon entropy were derived and graphically contrasted. In the current paper the analysis of the first paper is extended by the derivation of Bayes estimators for several other functions of interest in statistics and information theory. These functions are (powers of) the mutual information, chi-squared for tests of independence, variance, covariance, and average. Finding Bayes estimators for several of these functions requires extensions to the analytical techniques developed in the first paper, and these extensions form the main body of this paper. This paper extends the analysis in other ways as well, for example by enlarging the class of potential priors beyond the uniform prior assumed in the first paper. In particular, the use of the entropic and Dirichlet priors is considered. ",
"neighbors": [
2460
],
"mask": "Train"
},
{
"node_id": 2385,
"label": 2,
"text": "Title: Receptive Fields for Vision: from Hyperacuity to Object Recognition \nAbstract: Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. fl Weizmann Institute CS-TR 95-29, 1995; to appear in Vision, R. J. Watt, ed., MIT Press, 1996.",
"neighbors": [
611,
2499,
2676
],
"mask": "Train"
},
{
"node_id": 2386,
"label": 2,
"text": "Title: Optimising Local Hebbian Learning: use the ffi-rule \nAbstract: Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. fl Weizmann Institute CS-TR 95-29, 1995; to appear in Vision, R. J. Watt, ed., MIT Press, 1996.",
"neighbors": [
2387
],
"mask": "Validation"
},
{
"node_id": 2387,
"label": 2,
"text": "Title: CNN: a Neural Architecture that Learns Multiple Transformations of Spatial Representations \nAbstract: Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. fl Weizmann Institute CS-TR 95-29, 1995; to appear in Vision, R. J. Watt, ed., MIT Press, 1996.",
"neighbors": [
2386
],
"mask": "Test"
},
{
"node_id": 2388,
"label": 2,
"text": "Title: Combining Neural Network Forecasts on Wavelet-Transformed Time Series \nAbstract: Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition. fl Weizmann Institute CS-TR 95-29, 1995; to appear in Vision, R. J. Watt, ed., MIT Press, 1996.",
"neighbors": [
427,
2575
],
"mask": "Test"
},
{
"node_id": 2389,
"label": 3,
"text": "Title: On Computing the Largest Fraction of Missing Information for the EM Algorithm and the Worst\nAbstract: We address the problem of computing the largest fraction of missing information for the EM algorithm and the worst linear function for data augmentation. These are the largest eigenvalue and its associated eigenvector for the Jacobian of the EM operator at a maximum likelihood estimate, which are important for assessing convergence in iterative simulation. An estimate of the largest fraction of missing information is available from the EM iterates; this is often adequate since only a few figures of accuracy are needed. In some instances the EM iteration also gives an estimate of the worst linear function. We show that the power method for eigencomputation can be used to compute efficient and accurate estimates of both quantities. Unlike eigenvalue decomposition, the power method computes only the largest eigenvalue and eigenvector of a matrix, it can take advantage of a good eigenvector estimate as an initial value and it can be terminated after only a few figures of accuracy are obtained. Moreover, the matrix products needed in the power method can be computed by extrapolation, obviating the need to form the Jacobian of the EM operator. We give results of simultation studies on multivariate normal data showing this approach becomes more efficient as the data dimension increases than methods that use a finite-difference approximation to the Jacobian, which is the only general-purpose alternative available. fl Funded by National Institutes of Health Small Business Innovation Reseach Grant 5R44CA65147-03, and by Office of Naval Research contracts N00014-96-1-0192 and N00014-96-1-0330. We are indebted to Tim Hesterberg, Jim Schimert, Doug Clarkson, Anne Greenbaum, and Adrian Raftery for comments and discussion that helped advance this research and improve this paper. ",
"neighbors": [
345,
2421
],
"mask": "Test"
},
{
"node_id": 2390,
"label": 2,
"text": "Title: A HIERARCHICAL COMMUNITY OF EXPERTS \nAbstract: We describe a directed acyclic graphical model that contains a hierarchy of linear units and a mechanism for dynamically selecting an appropriate subset of these units to model each observation. The non-linear selection mechanism is a hierarchy of binary units each of which gates the output of one of the linear units. There are no connections from linear units to binary units, so the generative model can be viewed as a logistic belief net (Neal 1992) which selects a skeleton linear model from among the available linear units. We show that Gibbs sampling can be used to learn the parameters of the linear and binary units even when the sampling is so brief that the Markov chain is far from equilibrium. ",
"neighbors": [
36,
74,
76,
2227
],
"mask": "Validation"
},
{
"node_id": 2391,
"label": 6,
"text": "Title: A Note on Learning from Multiple-Instance Examples \nAbstract: We describe a simple reduction from the problem of PAC-learning from multiple-instance examples to that of PAC-learning with one-sided random classification noise. Thus, all concept classes learnable with one-sided noise, which includes all concepts learnable in the usual 2-sided random noise model plus others such as the parity function, are learnable from multiple-instance examples. We also describe a more efficient (and somewhat technically more involved) reduction to the Statistical-Query model that results in a polynomial-time algorithm for learning axis-parallel rectangles with sample complexity ~ O(d 2 r=* 2 ), saving roughly a factor of r over the results of Auer et al. (1997). ",
"neighbors": [
507,
2427,
2548
],
"mask": "Validation"
},
{
"node_id": 2392,
"label": 1,
"text": "Title: Hierarchical Learning with Procedural Abstraction Mechanisms \nAbstract: We describe a simple reduction from the problem of PAC-learning from multiple-instance examples to that of PAC-learning with one-sided random classification noise. Thus, all concept classes learnable with one-sided noise, which includes all concepts learnable in the usual 2-sided random noise model plus others such as the parity function, are learnable from multiple-instance examples. We also describe a more efficient (and somewhat technically more involved) reduction to the Statistical-Query model that results in a polynomial-time algorithm for learning axis-parallel rectangles with sample complexity ~ O(d 2 r=* 2 ), saving roughly a factor of r over the results of Auer et al. (1997). ",
"neighbors": [
1925
],
"mask": "Train"
},
{
"node_id": 2393,
"label": 2,
"text": "Title: Coordination and Control Structures and Processes: Possibilities for Connectionist Networks (CN) \nAbstract: The absence of powerful control structures and processes that synchronize, coordinate, switch between, choose among, regulate, direct, modulate interactions between, and combine distinct yet interdependent modules of large connectionist networks (CN) is probably one of the most important reasons why such networks have not yet succeeded at handling difficult tasks (e.g. complex object recognition and description, complex problem-solving, planning). In this paper we examine how CN built from large numbers of relatively simple neuron-like units can be given the ability to handle problems that in typical multi-computer networks and artificial intelligence programs along with all other types of programs are always handled using extremely elaborate and precisely worked out central control (coordination, synchronization, switching, etc.). We point out the several mechanisms for central control of this un-brain-like sort that CN already have built into them albeit in hidden, often overlooked, ways. We examine the kinds of control mechanisms found in computers, programs, fetal development, cellular function and the immune system, evolution, social organizations, and especially brains, that might be of use in CN. Particularly intriguing suggestions are found in the pacemakers, oscillators, and other local sources of the brain's complex partial synchronies; the diffuse, global effects of slow electrical waves and neurohormones; the developmental program that guides fetal development; communication and coordination within and among living cells; the working of the immune system; the evolutionary processes that operate on large populations of organisms; and the great variety of partially competing partially cooperating controls found in small groups, organizations, and larger societies. All these systems are rich in control but typically control that emerges from complex interactions of many local and diffuse sources. We explore how several different kinds of plausible control mechanisms might be incorporated into CN, and assess their potential benefits with respect to their cost. ",
"neighbors": [
496,
663,
1813,
1896,
1952,
2029
],
"mask": "Train"
},
{
"node_id": 2394,
"label": 2,
"text": "Title: A Comparison of Dynamic Reposing and Tangent Distance for Drug Activity Prediction \nAbstract: In drug activity prediction (as in handwritten character recognition), the features extracted to describe a training example depend on the pose (location, orientation, etc.) of the example. In handwritten character recognition, one of the best techniques for addressing this problem is the tangent distance method of Simard, LeCun and Denker (1993). Jain, et al. (1993a; 1993b) introduce a new technique|dynamic reposing|that also addresses this problem. Dynamic reposing iteratively learns a neural network and then reposes the examples in an effort to maximize the predicted output values. New models are trained and new poses computed until models and poses converge. This paper compares dynamic reposing to the tangent distance method on the task of predicting the biological activity of musk compounds. In a 20-fold cross-validation, ",
"neighbors": [
2427
],
"mask": "Train"
},
{
"node_id": 2395,
"label": 0,
"text": "Title: Improving Competence by Integrating Case-Based Reasoning and Heuristic Search \nAbstract: We analyse the behaviour of a Propose & Revise architecture in the VT elevator design problem and we show that this problem solving method cannot solve all possible cases covered by the available domain knowledge. We investigate this problem and we show that this limitation is caused by the restricted search regime employed by the method and that the competence of the method cannot be improved by acquiring additional domain knowledge. We therefore propose an alternative design problem solver, which integrates case-based reasoning and heuristic search techniques and overcomes the competence-related limitations exhibited by the Propose & Revise architecture, while maintaining the same level of efficiency. We describe four algorithms for case-based design, which exploit both general properties of parametric design tasks and application specific heuristic knowledge.",
"neighbors": [
2665
],
"mask": "Train"
},
{
"node_id": 2396,
"label": 1,
"text": "Title: Properties of Genetic Representations of Neural Architectures \nAbstract: Genetic algorithms and related evolutionary techniques offer a promising approach for automatically exploring the design space of neural architectures for artificial intelligence and cognitive modeling. Central to this process of evolutionary design of neural architectures (EDNA) is the choice of the representation scheme that is used to encode a neural architecture in the form of a gene string (genotype) and to decode a genotype into the corresponding neural architecture (phenotype). The representation scheme used not only constrains the class of neural architectures that are representable (evolvable) in the system, but also determines the efficiency and the time-space complexity of the evolutionary design procedure as a whole. This paper identifies and discusses a set of properties that can be used to characterize different representations used in EDNA and to design or select representations with the necessary properties for particular classes of applications.",
"neighbors": [
163,
503,
1583,
1952,
2563
],
"mask": "Test"
},
{
"node_id": 2397,
"label": 2,
"text": "Title: CuPit-2 A Parallel Language for Neural Algorithms: Language Reference and Tutorial \nAbstract: and load balancing even for irregular neural networks. The idea to achieve these goals lies in the programming model: CuPit-2 programs are object-centered, with connections and nodes of a graph (which is the neural network) being the objects. Algorithms are based on parallel local computations in the nodes and connections and communication along the connections (plus broadcast and reduction operations). This report describes the design considerations and the resulting language definition and discusses in detail a tutorial example program. This CuPit-2 language manual and tutorial is an updated version of the original CuPit language manual [Pre94]. The new language CuPit-2 differs from the original CuPit in several ways. All language changes from CuPit to CuPit-2 are listed in the appendix. ",
"neighbors": [
2203,
2405
],
"mask": "Train"
},
{
"node_id": 2398,
"label": 0,
"text": "Title: Issues in Goal-Driven Explanation \nAbstract: When a reasoner explains surprising events for its internal use, a key motivation for explaining is to perform learning that will facilitate the achievement of its goals. Human explainers use a range of strategies to build explanations, including both internal reasoning and external information search, and goal-based considerations have a profound effect on their choices of when and how to pursue explanations. However, standard AI models of explanation rely on goal-neutral use of a single fixed strategy|generally backwards chaining|to build their explanations. This paper argues that explanation should be modeled as a goal-driven learning process for gathering and transforming information, and discusses the issues involved in developing an active multi-strategy process for goal-driven explanation. ",
"neighbors": [
583,
1498,
2184,
2372,
2399
],
"mask": "Test"
},
{
"node_id": 2399,
"label": 0,
"text": "Title: Abduction, Experience, and Goals: A Model of Everyday Abductive Explanation* \nAbstract: When a reasoner explains surprising events for its internal use, a key motivation for explaining is to perform learning that will facilitate the achievement of its goals. Human explainers use a range of strategies to build explanations, including both internal reasoning and external information search, and goal-based considerations have a profound effect on their choices of when and how to pursue explanations. However, standard AI models of explanation rely on goal-neutral use of a single fixed strategy|generally backwards chaining|to build their explanations. This paper argues that explanation should be modeled as a goal-driven learning process for gathering and transforming information, and discusses the issues involved in developing an active multi-strategy process for goal-driven explanation. ",
"neighbors": [
136,
2398,
2626
],
"mask": "Train"
},
{
"node_id": 2400,
"label": 2,
"text": "Title: A Neural Network Model of Visual Tilt Aftereffects \nAbstract: RF-LISSOM, a self-organizing model of laterally connected orientation maps in the primary visual cortex, was used to study the psychological phenomenon known as the tilt aftereffect. The same self-organizing processes that are responsible for the long-term development of the map and its lateral connections are shown to result in tilt aftereffects over short time scales in the adult. The model allows observing large numbers of neurons and connections simultaneously, making it possible to relate higher-level phenomena to low-level events, which is difficult to do experimentally. The results give computational support for the idea that direct tilt aftereffects arise from adaptive lateral interactions between feature detectors, as has long been surmised. They also suggest that indirect effects could result from the conservation of synaptic resources during this process. The model thus provides a unified computational explanation of self-organization and both direct and indirect tilt aftereffects in the primary visual cortex. ",
"neighbors": [
122,
124,
127,
1916
],
"mask": "Train"
},
{
"node_id": 2401,
"label": 3,
"text": "Title: Factor Graphs and Algorithms \nAbstract: A factor graph is a bipartite graph that expresses how a global function of several variables factors into a product of local functions. Factor graphs subsume many other graphical models, including Bayesian networks, Markov random fields, and Tanner graphs. We describe a general algorithm for computing \"marginals\" of the global function by distributed message-passing in the corresponding factor graph. A wide variety of algorithms developed in the artificial intelligence, statistics, signal processing, and digital communications communities can be derived as specific instances of this general algorithm, including Pearl's \"belief propagation\" and \"belief revision\" algorithms, the fast Fourier transform, the Viterbi algorithm, the forward/backward algorithm, and the iterative \"turbo\" decoding algorithm. ",
"neighbors": [
1988
],
"mask": "Test"
},
{
"node_id": 2402,
"label": 1,
"text": "Title: Evolution of a Time-Optimal Fly-To Controller Circuit using Genetic Programming \nAbstract: ",
"neighbors": [
1921,
1931
],
"mask": "Train"
},
{
"node_id": 2403,
"label": 0,
"text": "Title: Reasoning with Portions of Precedents \nAbstract: This paper argues that the task of matching in case-based reasoning can often be improved by comparing new cases to portions of precedents. An example is presented that illustrates how combining portions of multiple precedents can permit new cases to be resolved that would be indeterminate if new cases could only be compared to entire precedents. A system that uses of portions of precedents for legal analysis in the domain of Texas worker's compensation law, GREBE, is described, and examples of GREBE's analysis that combine reasoning steps from multiple precedents are presented. ",
"neighbors": [
649,
2581
],
"mask": "Train"
},
{
"node_id": 2404,
"label": 3,
"text": "Title: A Model for Projection and Action \nAbstract: In designing autonomous agents that deal competently with issues involving time and space, there is a tradeoff to be made between guaranteed response-time reactions on the one hand, and flexibility and expressiveness on the other. We propose a model of action with probabilistic reasoning and decision analytic evaluation for use in a layered control architecture. Our model is well suited to tasks that require reasoning about the interaction of behaviors and events in a fixed temporal horizon. Decisions are continuously reevaluated, so that there is no problem with plans becoming obsolete as new information becomes available. In this paper, we are particularly interested in the tradeoffs required to guarantee a fixed reponse time in reasoning about nondeterministic cause-and- effect relationships. By exploiting approximate decision making processes, we are able to trade accuracy in our predictions for speed in decision making in order to improve expected per ",
"neighbors": [
1459,
2221
],
"mask": "Train"
},
{
"node_id": 2405,
"label": 2,
"text": "Title: A Parallel Programming Model for Irregular Dynamic Neural Networks a programming model that allows to\nAbstract: A compiler for CuPit has been built for the MasPar MP-1/MP-2 using compilation techniques that can also be applied to most other parallel machines. The paper shortly presents the main ideas of the techniques used and results obtained by the various optimizations. ",
"neighbors": [
881,
1119,
2203,
2397
],
"mask": "Validation"
},
{
"node_id": 2406,
"label": 4,
"text": "Title: Approximating Value Trees in Structured Dynamic Programming \nAbstract: We propose and examine a method of approximate dynamic programming for Markov decision processes based on structured problem representations. We assume an MDP is represented using a dynamic Bayesian network, and construct value functions using decision trees as our function representation. The size of the representation is kept within acceptable limits by pruning these value trees so that leaves represent possible ranges of values, thus approximating the value functions produced during optimization. We propose a method for detecting convergence, prove errors bounds on the resulting approximately optimal value functions and policies, and describe some preliminary experi mental results. ",
"neighbors": [
2078
],
"mask": "Validation"
},
{
"node_id": 2407,
"label": 1,
"text": "Title: Evolving Turing-Complete Programs for a Register Machine with Self-modifying Code \nAbstract: The majority of commercial computers today are register machines of von Neumann type. We have developed a method to evolve Turing-complete programs for a register machine. The described implementation enables the use of most program constructs, such as arithmetic operators, large indexed memory, automatic decomposition into subfunctions and subroutines (ADFs), conditional constructs i.e. if-then-else, jumps, loop structures, recursion, protected functions, string and list functions. Any C-function can be compiled and linked into the function set of the system. The use of register machine language allows us to work at the lowest level of binary machine code without any interpreting steps. In a von Neumann machine, programs and data reside in the same memory and the genetic operators can thus directly manipulate the binary machine code in memory. The genetic operators themselves are written in C-language but they modify individuals in binary representation. The result is an execution speed enhancement of up to 100 times compared to an interpreting C-language implementation, and up to 2000 times compared to a LISP implementation. The use of binary machine code demands a very compact coding of about one byte per node in the individual. The resulting evolved programs are disassembled into C-modules and can be incorporated into a conventional software development environment. The low memory requirements and the significant speed enhancement of this technique could be of use when applying genetic programming to new application areas, platforms and research domains. ",
"neighbors": [
1631,
2704
],
"mask": "Test"
},
{
"node_id": 2408,
"label": 4,
"text": "Title: Exploratory Learning in the Game of GO \nAbstract: This paper considers the importance of exploration to game-playing programs which learn by playing against opponents. The central question is whether a learning program should play the move which offers the best chance of winning the present game, or if it should play the move which has the best chance of providing useful information for future games. An approach to addressing this question is developed using probability theory, and then implemented in two different learning methods. Initial experiments in the game of Go suggest that a program which takes exploration into account can learn better against a knowledgeable opponent than a program which does not. ",
"neighbors": [
523,
1975,
2145
],
"mask": "Train"
},
{
"node_id": 2409,
"label": 2,
"text": "Title: Framework for Combining Symbolic and Neural Learning rule extraction from neural networks the KBANN algorithm\nAbstract: Technical Report 1123, Computer Sciences Department, University of Wisconsin - Madison, Nov. 1992 ABSTRACT This article describes an approach to combining symbolic and connectionist approaches to machine learning. A three-stage framework is presented and the research of several groups is reviewed with respect to this framework. The first stage involves the insertion of symbolic knowledge into neural networks, the second addresses the refinement of this prior knowledge in its neural representation, while the third concerns the extraction of the refined symbolic knowledge. Experimental results and open research issues are discussed. A shorter version of this paper will appear in Machine Learning. ",
"neighbors": [
174,
477,
638,
1644,
1869,
2027,
2543,
2672
],
"mask": "Train"
},
{
"node_id": 2410,
"label": 2,
"text": "Title: Subsymbolic Case-Role Analysis of Sentences with Embedded Clauses \nAbstract: A distributed neural network model called SPEC for processing sentences with recursive relative clauses is described. The model is based on separating the tasks of segmenting the input word sequence into clauses, forming the case-role representations, and keeping track of the recursive embeddings into different modules. The system needs to be trained only with the basic sentence constructs, and it generalizes not only to new instances of familiar relative clause structures, but to novel structures as well. SPEC exhibits plausible memory degradation as the depth of the center embeddings increases, its memory is primed by earlier constituents, and its performance is aided by semantic constraints between the constituents. The ability to process structure is largely due to a central executive network that monitors and controls the execution of the entire system. This way, in contrast to earlier subsymbolic systems, parsing is modeled as a controlled high-level process rather than one based on automatic reflex responses. ",
"neighbors": [
204,
1811,
2049
],
"mask": "Validation"
},
{
"node_id": 2411,
"label": 4,
"text": "Title: Predictive Q-Routing: A Memory-based Reinforcement Learning Approach to Adaptive Traffic Control \nAbstract: In this paper, we propose a memory-based Q-learning algorithm called predictive Q-routing (PQ-routing) for adaptive traffic control. We attempt to address two problems encountered in Q-routing (Boyan & Littman, 1994), namely, the inability to fine-tune routing policies under low network load and the inability to learn new optimal policies under decreasing load conditions. Unlike other memory-based reinforcement learning algorithms in which memory is used to keep past experiences to increase learning speed, PQ-routing keeps the best experiences learned and reuses them by predicting the traffic trend. The effectiveness of PQ-routing has been verified under various network topologies and traffic conditions. Simulation results show that PQ-routing is superior to ",
"neighbors": [
2666
],
"mask": "Train"
},
{
"node_id": 2412,
"label": 4,
"text": "Title: On-Line Adaptation of a Signal Predistorter through Dual Reinforcement Learning \nAbstract: Several researchers have demonstrated how neural networks can be trained to compensate for nonlinear signal distortion in e.g. digital satellite communications systems. These networks, however, require that both the original signal and its distorted version are known. Therefore, they have to be trained off-line, and they cannot adapt to changing channel characteristics. In this paper, a novel dual reinforcement learning approach is proposed that can adapt on-line while the system is performing. Assuming that the channel characteristics are the same in both directions, two predistorters at each end of the communication channel co-adapt using the output of the other predistorter to determine their own reinforcement. Using the common Volterra Series model to simulate the channel, the system is shown to successfully learn to compensate for distortions up to 30%, which is significantly higher than what might be expected in an actual channel.",
"neighbors": [
427,
1758,
2255
],
"mask": "Train"
},
{
"node_id": 2413,
"label": 2,
"text": "Title: On-Line Adaptation of a Signal Predistorter through Dual Reinforcement Learning \nAbstract: Most connectionist modeling assumes noise-free inputs. This assumption is often violated. This paper introduces the idea of clearning, of simultaneously cleaning the data and learning the underlying structure. The cleaning step can be viewed as top-down processing (where the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). Clearning is used in conjunction with standard pruning. This paper discusses the statistical foundation of clearning, gives an interpretation in terms of a mechanical model, describes how to obtain both point predictions and conditional densities for the output, and shows how the resulting model can be used to discover properties of the data otherwise not accessible (such as the signal-to-noise ratio of the inputs). This paper uses clearning to predict foreign exchange rates, a noisy time series problem with well-known benchmark performances. On the out-of-sample 1993-1994 test period, clearning obtains an annualized return on investment above 30%, significantly better than an otherwise identical network. The final ultra-sparse network with 36 remaining non-zero input-to-hidden weights (of the 1035 initial weights between 69 inputs and 15 hidden units) is very robust against overfitting. This small network also lends itself to interpretation.",
"neighbors": [
668,
1366,
1718,
2239,
2373
],
"mask": "Train"
},
{
"node_id": 2414,
"label": 2,
"text": "Title: On-Line Adaptation of a Signal Predistorter through Dual Reinforcement Learning \nAbstract: Most connectionist modeling assumes noise-free inputs. This assumption is often violated. This paper introduces the idea of clearning, of simultaneously cleaning the data and learning the underlying structure. The cleaning step can be viewed as top-down processing (where the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). Clearning is used in conjunction with standard pruning. This paper discusses the statistical foundation of clearning, gives an interpretation in terms of a mechanical model, describes how to obtain both point predictions and conditional densities for the output, and shows how the resulting model can be used to discover properties of the data otherwise not accessible (such as the signal-to-noise ratio of the inputs). This paper uses clearning to predict foreign exchange rates, a noisy time series problem with well-known benchmark performances. On the out-of-sample 1993-1994 test period, clearning obtains an annualized return on investment above 30%, significantly better than an otherwise identical network. The final ultra-sparse network with 36 remaining non-zero input-to-hidden weights (of the 1035 initial weights between 69 inputs and 15 hidden units) is very robust against overfitting. This small network also lends itself to interpretation.",
"neighbors": [
668,
1366,
1718,
2239,
2373
],
"mask": "Test"
},
{
"node_id": 2415,
"label": 0,
"text": "Title: LEARNING MORE FROM LESS DATA: EXPERIMENTS WITH LIFELONG ROBOT LEARNING \nAbstract: Most connectionist modeling assumes noise-free inputs. This assumption is often violated. This paper introduces the idea of clearning, of simultaneously cleaning the data and learning the underlying structure. The cleaning step can be viewed as top-down processing (where the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). Clearning is used in conjunction with standard pruning. This paper discusses the statistical foundation of clearning, gives an interpretation in terms of a mechanical model, describes how to obtain both point predictions and conditional densities for the output, and shows how the resulting model can be used to discover properties of the data otherwise not accessible (such as the signal-to-noise ratio of the inputs). This paper uses clearning to predict foreign exchange rates, a noisy time series problem with well-known benchmark performances. On the out-of-sample 1993-1994 test period, clearning obtains an annualized return on investment above 30%, significantly better than an otherwise identical network. The final ultra-sparse network with 36 remaining non-zero input-to-hidden weights (of the 1035 initial weights between 69 inputs and 15 hidden units) is very robust against overfitting. This small network also lends itself to interpretation.",
"neighbors": [
1112,
2530
],
"mask": "Train"
},
{
"node_id": 2416,
"label": 3,
"text": "Title: Wavelet Thresholding via a Bayesian Approach \nAbstract: We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in non-parametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansion common to most applications. For the prior specified, the posterior median yields a thresholding procedure. Our prior model for the underlying function can be adjusted to give functions falling in any specific Besov space. We establish a relation between the hyperparameters of the prior model and the parameters of those Besov spaces within which realizations from the prior will fall. Such a relation gives insight into the meaning of the Besov space parameters. Moreover, the established relation makes it possible in principle to incorporate prior knowledge about the function's regularity properties into the prior model for its wavelet coefficients. However, prior knowledge about a function's regularity properties might be hard to elicit; with this in mind, we propose a standard choise of prior hyperparameters that works well in our examples. Several simulated examples are used to illustrate our method, and comparisons are made with other thresholding methods. We also present an application to a data set collected in an anaesthesiological study. ",
"neighbors": [
1910,
2375,
2506
],
"mask": "Train"
},
{
"node_id": 2417,
"label": 3,
"text": "Title: Choice of Basis for Laplace Approximation \nAbstract: Maximum a posteriori optimization of parameters and the Laplace approximation for the marginal likelihood are both basis-dependent methods. This note compares two choices of basis for models parameterized by probabilities, showing that it is possible to improve on the traditional choice, the probability simplex, by transforming to the `softmax' basis. ",
"neighbors": [
157,
2532
],
"mask": "Train"
},
{
"node_id": 2418,
"label": 6,
"text": "Title: A Fast Algorithm for the Computation and Enumeration of Perfect Phylogenies \nAbstract: The Perfect Phylogeny Problem is a classical problem in computational evolutionary biology, in which a set of species/taxa is described by a set of qualitative characters. In recent years, the problem has been shown to be NP-Complete in general, while the different fixed parameter versions can each be solved in polynomial time. In particular, Agarwala and Fernandez-Baca have developed an O(2 3r (nk 3 +k 4 )) algorithm for the perfect phylogeny problem for n species defined by k r-state characters. Since commonly the character data is drawn from alignments of molecular sequences, k is the length of the sequences and can thus be very large (in the hundreds or thousands). Thus, it is imperative to develop algorithms which run efficiently for large values of k. In this paper we make additional observations about the structure of the problem and produce an algorithm for the problem that runs in time O(2 2r k 2 n). We also show how it is possible to efficiently build a structure that implicitly represents the set of all perfect phylogenies, and to randomly sample from that set.",
"neighbors": [
2141,
2345,
2511
],
"mask": "Test"
},
{
"node_id": 2419,
"label": 3,
"text": "Title: Adaptive probabilistic networks \nAbstract: Belief networks (or probabilistic networks) and neural networks are two forms of network representations that have been used in the development of intelligent systems in the field of artificial intelligence. Belief networks provide a concise representation of general probability distributions over a set of random variables, and facilitate exact calculation of the impact of evidence on propositions of interest. Neural networks, which represent parameterized algebraic combinations of nonlinear activation functions, have found widespread use as models of real neural systems and as function approximators because of their amenability to simple training algorithms. Furthermore, the simple, local nature of most neural network training algorithms provides a certain biological plausibility and allows for a massively parallel implementation. In this paper, we show that similar local learning algorithms can be derived for belief networks, and that these learning algorithms can operate using only information that is directly available from the normal, inferential processes of the networks. This removes the main obstacle preventing belief networks from competing with neural networks on the above-mentioned tasks. The precise, local, probabilistic interpretation of belief networks also allows them to be partially or wholly constructed by humans; allows the results of learning to be easily understood; and allows them to contribute to rational decision-making in a well-defined way. ",
"neighbors": [
492,
1268,
2323
],
"mask": "Validation"
},
{
"node_id": 2420,
"label": 3,
"text": "Title: A Parallel Learning Algorithm for Bayesian Inference Networks \nAbstract: We present a new parallel algorithm for learning Bayesian inference networks from data. Our learning algorithm exploits both properties of the MDL-based score metric, and a distributed, asynchronous, adaptive search technique called nagging. Nagging is intrinsically fault tolerant, has dynamic load balancing features, and scales well. We demonstrate the viability, effectiveness, and scalability of our approach empirically with several experiments using on the order of 20 machines. More specifically, we show that our distributed algorithm can provide optimal solutions for larger problems as well as good solutions for Bayesian networks of up to 150 variables. ",
"neighbors": [
1527,
2169
],
"mask": "Train"
},
{
"node_id": 2421,
"label": 3,
"text": "Title: On Convergence of the EM Algorithm and the Gibbs Sampler SUMMARY \nAbstract: In this article we investigate the relationship between the two popular algorithms, the EM algorithm and the Gibbs sampler. We show that the approximate rate of convergence of the Gibbs sampler by Gaussian approximation is equal to that of the corresponding EM type algorithm. This helps in implementing either of the algorithms as improvement strategies for one algorithm can be directly transported to the other. In particular, by running the EM algorithm we know approximately how many iterations are needed for convergence of the Gibbs sampler. We also obtain a result that under conditions, the EM algorithm used for finding the maximum likelihood estimates can be slower to converge than the corresponding Gibbs sampler for Bayesian inference which uses proper prior distributions. We illustrate our results in a number of realistic examples all based on the generalized linear mixed models. ",
"neighbors": [
74,
115,
263,
345,
1829,
1856,
1868,
1906,
2266,
2389,
2590,
2654
],
"mask": "Validation"
},
{
"node_id": 2422,
"label": 2,
"text": "Title: Classification of Underwater Mammals using Feature Extraction Based on Time-Frequency Analysis and BCM Theory \nAbstract: Underwater mammal sound classification is demonstrated using a novel application of wavelet time/frequency decomposition and feature extraction using a BCM unsupervised network. Different feature extraction methods and different wavelet representations are studied. The system achieves outstanding classification performance even when tested with mammal sounds recorded at very different locations (from those used for training). The improved results suggest that nonlinear feature extraction from wavelet representations outperforms different linear choices of basis functions. ",
"neighbors": [
359,
2376,
2499,
2500
],
"mask": "Train"
},
{
"node_id": 2423,
"label": 6,
"text": "Title: Error-Correcting Output Codes: A General Method for Improving Multiclass Inductive Learning Programs \nAbstract: Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k \"classes\"). The definition is acquired by studying large collections of training examples of the form hx i ; f(x i )i. Existing approaches to this problem include (a) direct application of multiclass algorithms such as the decision-tree algorithms ID3 and CART, (b) application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and (c) application of binary concept learning algorithms with distributed output codes such as those employed by Sejnowski and Rosenberg in the NETtalk system. This paper compares these three approaches to a new technique in which BCH error-correcting codes are employed as a distributed output representation. We show that these output representations improve the performance of ID3 on the NETtalk task and of backpropagation on an isolated-letter speech-recognition task. These results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems. ",
"neighbors": [
550,
853,
862,
881,
1161,
1191,
1601,
1644,
1732,
2225,
2627,
2657
],
"mask": "Test"
},
{
"node_id": 2424,
"label": 5,
"text": "Title: Which Hypotheses Can Be Found with Inverse Entailment? -Extended Abstract \nAbstract: In this paper we give a completeness theorem of an inductive inference rule inverse entailment proposed by Muggleton. Our main result is that a hypothesis clause H can be derived from an example E under a background theory B with inverse entailment iff H subsumes E relative to B in Plotkin's sense. The theory B can be any clausal theory, and the example E can be any clause which is neither a tautology nor implied by B. The derived hypothesis H is a clause which is not always definite. In order to prove the result we give declarative semantics for arbitrary consistent clausal theories, and show that SB-resolution, which was originally introduced by Plotkin, is complete procedural semantics. The completeness is shown as an extension of the completeness theorem of SLD-resolution. We also show that every hypothesis H derived with saturant generalization, proposed by Rouveirol, must subsume E w.r.t. B in Buntine's sense. Moreover we show that saturant generalization can be obtained from inverse entailment by giving some restriction to its usage.",
"neighbors": [
1428,
2589
],
"mask": "Train"
},
{
"node_id": 2425,
"label": 3,
"text": "Title: Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks \nAbstract: We present an algorithm for arc reversal in Bayesian networks with tree-structured conditional probability tables, and consider some of its advantages, especially for the simulation of dynamic probabilistic networks. In particular, the method allows one to produce CPTs for nodes involved in the reversal that exploit regularities in the conditional distributions. We argue that this approach alleviates some of the overhead associated with arc reversal, plays an important role in evidence integration and can be used to restrict sampling of variables in DPNs. We also provide an algorithm that detects the dynamic irrelevance of state variables in forward simulation. This algorithm exploits the structured CPTs in a reversed network to determine, in a time-independent fashion, the conditions under which a variable does or does not need to be sampled.",
"neighbors": [
62,
423,
788,
2341,
2474
],
"mask": "Test"
},
{
"node_id": 2426,
"label": 5,
"text": "Title: Inductive Constraint Logic \nAbstract: A novel approach to learning first order logic formulae from positive and negative examples is presented. Whereas present inductive logic programming systems employ examples as true and false ground facts (or clauses), we view examples as interpretations which are true or false for the target theory. This viewpoint allows to reconcile the inductive logic programming paradigm with classical attribute value learning in the sense that the latter is a special case of the former. Because of this property, we are able to adapt AQ and CN2 type algorithms in order to enable learning of full first order formulae. However, whereas classical learning techniques have concentrated on concept representations in disjunctive normal form, we will use a clausal representation, which corresponds to a conjuctive normal form where each conjunct forms a constraint on positive examples. This representation duality reverses also the role of positive and negative examples, both in the heuristics and in the algorithm. The resulting theory is incorporated in a system named ICL (Inductive Constraint Logic).",
"neighbors": [
344,
638,
1007,
1686,
1919,
2126,
2282,
2431,
2493
],
"mask": "Train"
},
{
"node_id": 2427,
"label": 2,
"text": "Title: Solving the Multiple-Instance Problem with Axis-Parallel Rectangles \nAbstract: The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple-instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89% correct predictions on a musk-odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms. ",
"neighbors": [
318,
507,
1888,
2391,
2394,
2548,
2591
],
"mask": "Train"
},
{
"node_id": 2428,
"label": 3,
"text": "Title: Bumptrees for Efficient Function, Constraint, and Classification Learning \nAbstract: A new class of data structures called bumptrees is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined. ",
"neighbors": [
44,
116,
225,
760,
1860,
2021,
2042
],
"mask": "Train"
},
{
"node_id": 2429,
"label": 1,
"text": "Title: Automatic Definition of Modular Neural Networks \nAbstract: A new class of data structures called bumptrees is described. These structures are useful for efficiently implementing a number of neural network related operations. An empirical comparison with radial basis functions is presented on a robot arm mapping learning task. Applications to density estimation, classification, and constraint representation and learning are also outlined. ",
"neighbors": [
1143,
2152,
2281,
2317,
2624,
2667
],
"mask": "Validation"
},
{
"node_id": 2430,
"label": 4,
"text": "Title: Category: Control, Navigation and Planning Preference: Oral presentation Exploiting Model Uncertainty Estimates for Safe Dynamic\nAbstract: Model learning combined with dynamic programming has been shown to be effective for learning control of continuous state dynamic systems. The simplest method assumes the learned model is correct and applies dynamic programming to it, but many approximators provide uncertainty estimates on the fit. How can they be exploited? This paper addresses the case where the system must be prevented from having catastrophic failures during learning. We propose a new algorithm adapted from the dual control literature and use Bayesian locally weighted regression models with stochastic dynamic programming. A common reinforcement learning assumption is that aggressive exploration should be encouraged. This paper addresses the converse case in which the system has to reign in exploration. The algorithm is illustrated on a 4 dimensional simulated control problem.",
"neighbors": [
294,
682,
1860,
2647
],
"mask": "Train"
},
{
"node_id": 2431,
"label": 5,
"text": "Title: Multi-class problems and discretization in ICL Extended abstract \nAbstract: Handling multi-class problems and real numbers is important in practical applications of machine learning to KDD problems. While attribute-value learners address these problems as a rule, very few ILP systems do so. The few ILP systems that handle real numbers mostly do so by trying out all real values that are applicable, thus running into efficiency or overfitting problems. This paper discusses some recent extensions of ICL that address these problems. ICL, which stands for Inductive Constraint Logic, is an ILP system that learns first order logic formulae from positive and negative examples. The main charateristic of ICL is its view on examples. These are seen as interpretations which are true or false for the clausal target theory (in CNF). We first argue that ICL can be used for learning a theory in a disjunctive normal form (DNF). With this in mind, a possible solution for handling more than two classes is given (based on some ideas from CN2). Finally, we show how to tackle problems with continuous values by adapting discretization techniques from attribute value learners. ",
"neighbors": [
426,
1919,
2426,
2591
],
"mask": "Test"
},
{
"node_id": 2432,
"label": 4,
"text": "Title: Learning the Peg-into-Hole Assembly Operation with a Connectionist Reinforcement Technique \nAbstract: The paper presents a learning controller that is capable of increasing insertion speed during consecutive peg-into-hole operations, without increasing the contact force level. Our aim is to find a better relationship between measured forces and the controlled velocity, without using a complicated (human generated) model. We followed a connectionist approach. Two learning phases are distinguished. First the learning controller is trained (or initialised) in a supervised way by a suboptimal task frame controller. Then a reinforcement learning phase follows. The controller consists of two networks: (1) the policy network and (2) the exploration network. On-line robotic exploration plays a crucial role in obtaining a better policy. Optionally, this architecture can be extended with a third network: the reinforcement network. The learning controller is implemented on a CAD-based contact force simulator. In contrast with most other related work, the experiments are simulated in 3D with 6 degrees of freedom. Performance of a peg-into-hole task is measured in insertion time and average/maximum force level. The fact that a better performance can be obtained in this way, demonstrates the importance of model-free learning techniques for repetitive robotic assembly tasks. The paper presents the approach and simulation results. Keywords: robotic assembly, peg-into-hole, artificial neural networks, reinforcement learning.",
"neighbors": [
1755
],
"mask": "Test"
},
{
"node_id": 2433,
"label": 2,
"text": "Title: Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot \nAbstract: The paper presents a learning controller that is capable of increasing insertion speed during consecutive peg-into-hole operations, without increasing the contact force level. Our aim is to find a better relationship between measured forces and the controlled velocity, without using a complicated (human generated) model. We followed a connectionist approach. Two learning phases are distinguished. First the learning controller is trained (or initialised) in a supervised way by a suboptimal task frame controller. Then a reinforcement learning phase follows. The controller consists of two networks: (1) the policy network and (2) the exploration network. On-line robotic exploration plays a crucial role in obtaining a better policy. Optionally, this architecture can be extended with a third network: the reinforcement network. The learning controller is implemented on a CAD-based contact force simulator. In contrast with most other related work, the experiments are simulated in 3D with 6 degrees of freedom. Performance of a peg-into-hole task is measured in insertion time and average/maximum force level. The fact that a better performance can be obtained in this way, demonstrates the importance of model-free learning techniques for repetitive robotic assembly tasks. The paper presents the approach and simulation results. Keywords: robotic assembly, peg-into-hole, artificial neural networks, reinforcement learning.",
"neighbors": [
2437
],
"mask": "Train"
},
{
"node_id": 2434,
"label": 3,
"text": "Title: Causal Inference from Indirect Experiments \nAbstract: Indirect experiments are studies in which randomized control is replaced by randomized encouragement, that is, subjects are encouraged, rather than forced to receive treatment programs. The purpose of this paper is to bring to the attention of experimental researchers simple mathematical results that enable us to assess, from indirect experiments, the strength with which causal influences operate among variables of interest. The results reveal that despite the laxity of the encouraging instrument, indirect experimentation can yield significant and sometimes accurate information on the impact of a program on the population as a whole, as well as on the particular individuals who participated in the program. ",
"neighbors": [
105,
1326,
1747,
2069,
2160
],
"mask": "Validation"
},
{
"node_id": 2435,
"label": 2,
"text": "Title: Identification in H 1 with Nonuniformly Spaced Frequency Response Measurements \nAbstract: In this paper, the problem of \"system identification in H 1 \" is investigated in the case when the given frequency response data is not necessarily on a uniformly spaced grid of frequencies. A large class of robustly convergent identification algorithms are derived. A particular algorithm is further examined and explicit worst case error bounds (in the H 1 norm) are derived for both discrete-time and continuous-time systems. Examples are provided to illustrate the application of the algorithms.",
"neighbors": [
2236,
2262
],
"mask": "Test"
},
{
"node_id": 2436,
"label": 5,
"text": "Title: Space-Time Scheduling of Instruction-Level Parallelism on a Raw Machine \nAbstract: Advances in VLSI technology will enable chips with over a billion transistors within the next decade. Unfortunately, the centralized-resource architectures of modern microprocessors are ill-suited to exploit such advances. Achieving a high level of parallelism at a reasonable clock speed requires distributing the processor resources a trend already visible in the dual-register-file architecture of the Alpha 21264. A Raw microprocessor takes an extreme position in this space by distributing all its resources such as instruction streams, register files, memory ports, and ALUs over a pipelined two-dimensional interconnect, and exposing them fully to the compiler. Compilation for instruction-level parallelism (ILP) on such distributed-resource machines requires both spatial instruction scheduling and traditional temporal instruction scheduling. This paper describes the techniques used by the Raw compiler to handle these issues. Preliminary results from a SUIF-based compiler for sequential programs written in C and Fortran indicate that the Raw approach to exploiting ILP can achieve speedups scalable with the number of processors for applications with such parallelism. The Raw architecture attempts to provide performance that is at least comparable to that provided by scaling an existing architecture, but that can achieve orders of magnitude improvement in performance for applications with a large amount of parallelism. This paper offers some positive results in this direction. ",
"neighbors": [
2649
],
"mask": "Train"
},
{
"node_id": 2437,
"label": 4,
"text": "Title: Embodiment and Manipulation Learning Process for a Humanoid Hand \nAbstract: Advances in VLSI technology will enable chips with over a billion transistors within the next decade. Unfortunately, the centralized-resource architectures of modern microprocessors are ill-suited to exploit such advances. Achieving a high level of parallelism at a reasonable clock speed requires distributing the processor resources a trend already visible in the dual-register-file architecture of the Alpha 21264. A Raw microprocessor takes an extreme position in this space by distributing all its resources such as instruction streams, register files, memory ports, and ALUs over a pipelined two-dimensional interconnect, and exposing them fully to the compiler. Compilation for instruction-level parallelism (ILP) on such distributed-resource machines requires both spatial instruction scheduling and traditional temporal instruction scheduling. This paper describes the techniques used by the Raw compiler to handle these issues. Preliminary results from a SUIF-based compiler for sequential programs written in C and Fortran indicate that the Raw approach to exploiting ILP can achieve speedups scalable with the number of processors for applications with such parallelism. The Raw architecture attempts to provide performance that is at least comparable to that provided by scaling an existing architecture, but that can achieve orders of magnitude improvement in performance for applications with a large amount of parallelism. This paper offers some positive results in this direction. ",
"neighbors": [
427,
745,
2433
],
"mask": "Train"
},
{
"node_id": 2438,
"label": 2,
"text": "Title: Integrating Initialization Bias and Search Bias in Neural Network Learning \nAbstract: The use of previously learned knowledge during learning has been shown to reduce the number of examples required for good generalization, and to increase robustness to noise in the examples. In reviewing various means of using learned knowledge from a domain to guide further learning in the same domain, two underlying classes are discerned. Methods which use previous knowledge to initialize a learner (as an initialization bias), and those that use previous knowledge to constrain a learner (as a search bias). We show such methods in fact exploit the same domain knowledge differently, and can complement each other. This is shown by presenting a combined approach which both initializes and constrains a learner. This combined approach is seen to outperform the individual methods under the conditions that accurate previously learned domain knowledge is available, and that there are irrelevant features in the domain representation. ",
"neighbors": [
92,
2091
],
"mask": "Train"
},
{
"node_id": 2439,
"label": 2,
"text": "Title: Analog Neural Nets with Gaussian or other Common Noise Distributions cannot Recognize Arbitrary Regular Languages \nAbstract: We consider recurrent analog neural nets where the output of each gate is subject to Gaussian noise, or any other common noise distribution that is nonzero on a large set. We show that many regular languages cannot be recognized by networks of this type, and we give a precise characterization of those languages which can be recognized. This result implies severe constraints on possibilities for constructing recurrent analog neural nets that are robust against realistic types of analog noise. On the other hand we present a method for constructing feedforward analog neural nets that are robust with regard to analog noise of this type.",
"neighbors": [
407,
1875
],
"mask": "Test"
},
{
"node_id": 2440,
"label": 2,
"text": "Title: Modifying Network Architectures for Certainty-Factor Rule-Base Revision \nAbstract: This paper describes Rapture | a system for revising probabilistic rule bases that converts symbolic rules into a connectionist network, which is then trained via connectionist techniques. It uses a modified version of backpropagation to refine the certainty factors of the rule base, and uses ID3's information-gain heuristic (Quinlan, 1986) to add new rules. Work is currently under way for finding improved techniques for modifying network architectures that include adding hidden units using the UPSTART algorithm (Frean, 1990). A case is made via comparison with fully connected connectionist techniques for keeping the rule base as close to the original as possible, adding new input units only as needed.",
"neighbors": [
2543
],
"mask": "Validation"
},
{
"node_id": 2441,
"label": 5,
"text": "Title: Distance Induction in First Order Logic used for classification via a k-nearest-neighbor process. Experiments on\nAbstract: This paper tackles the supervised induction of a distance from examples described as Horn clauses or constrained clauses. In opposition to syntax-driven approaches, this approach is discrimination-driven: it proceeds by defining a small set of complex discriminant hypotheses. These hypotheses serve as new concepts, used to redescribe the initial examples. Further, this redescription can be embedded into the space of natural integers, and a distance between examples thus naturally follows. ",
"neighbors": [
66,
344,
2585
],
"mask": "Train"
},
{
"node_id": 2442,
"label": 4,
"text": "Title: Using Temporal-Difference Reinforcement Learning to Improve Decision-Theoretic Utilities for Diagnosis \nAbstract: Probability theory represents and manipulates uncertainties, but cannot tell us how to behave. For that we need utility theory which assigns values to the usefulness of different states, and decision theory which concerns optimal rational decisions. There are many methods for probability modeling, but few for learning utility and decision models. We use reinforcement learning to find the optimal sequence of questions in a diagnosis situation while maintaining a high accuracy. Automated diagnosis on a heart-disease domain is used to demonstrate that temporal-difference learning can improve diagnosis. On the Cleveland heart-disease database our results are better than those reported from all previous methods. ",
"neighbors": [
71,
523,
565,
929,
2118
],
"mask": "Train"
},
{
"node_id": 2443,
"label": 3,
"text": "Title: Issues in the Integration of Data Mining and Data Visualization Visualizing the Simple Bayesian Classifier \nAbstract: The simple Bayesian classifier (SBC), sometimes called Naive-Bayes, is built based on a conditional independence model of each attribute given the class. The model was previously shown to be surprisingly robust to obvious violations of this independence assumption, yielding accurate classification models even when there are clear conditional dependencies. The SBC can serve as an excellent tool for initial exploratory data analysis when coupled with a visualizer that makes its structure comprehensible. We describe such a visual representation of the SBC model that has been successfully implemented. We describe the requirements we had for such a visualization and the design decisions we made to satisfy them. ",
"neighbors": [
1339,
2338,
2343
],
"mask": "Train"
},
{
"node_id": 2444,
"label": 4,
"text": "Title: Symbiotic Evolution of Neural Networks in Sequential Decision Tasks \nAbstract: The simple Bayesian classifier (SBC), sometimes called Naive-Bayes, is built based on a conditional independence model of each attribute given the class. The model was previously shown to be surprisingly robust to obvious violations of this independence assumption, yielding accurate classification models even when there are clear conditional dependencies. The SBC can serve as an excellent tool for initial exploratory data analysis when coupled with a visualizer that makes its structure comprehensible. We describe such a visual representation of the SBC model that has been successfully implemented. We describe the requirements we had for such a visualization and the design decisions we made to satisfy them. ",
"neighbors": [
500,
2257
],
"mask": "Train"
},
{
"node_id": 2445,
"label": 2,
"text": "Title: Simulation of Reduced Precision Arithmetic for Digital Neural Networks Using the RAP Machine \nAbstract: This paper describes some of our recent work in the development of computer architectures for efficient execution of artificial neural network algorithms. Our earlier system, the Ring Array Processor (RAP), was a multiprocessor based on commercial DSPs with a low-latency ring interconnection scheme. We have used the RAP to simulate variable precision arithmetic and guide us in the design of higher performance neurocomputers based on custom VLSI. The RAP system played a critical role in this study, enabling us to experiment with much larger networks than would otherwise be possible. Our study shows that back-propagation training algorithms only require moderate precision. Specifically, 16b weight values and 8b output values are sufficient to achieve training and classification results comparable to 32b floating point. Although these results were gathered for frame classification in continuous speech, we expect that they will extend to many other connectionist calculations. We have used these results as part of the design of a programmable single chip microprocessor, SPERT. The reduced precision arithmetic permits the use of multiple units per processor. Also, reduced precision operands make more efficient use of valuable processor-memory bandwidth. For our moderate-precision fixed-point arithmetic applications, SPERT represents more than an order of magnitude reduction in cost over systems based on DSP chips. ",
"neighbors": [
2268,
2275
],
"mask": "Validation"
},
{
"node_id": 2446,
"label": 1,
"text": "Title: Simulation of Reduced Precision Arithmetic for Digital Neural Networks Using the RAP Machine \nAbstract: 1] R.K. Belew, J. McInerney, and N. Schraudolph, Evolving networks: using the genetic algorithm with connectionist learning, in Artificial Life II, SFI Studies in the Science of Complexity, C.G. Langton, C. Taylor, J.D. Farmer, S. Rasmussen Eds., vol. 10, Addison-Wesley, 1991. [2] M. McInerney, and A.P. Dhawan, Use of genetic algorithms with back propagation in training of feed-forward neural networks, in IEEE International Conference on Neural Networks, vol. 1, pp. 203-208, 1993. [3] F.Z. Brill, D.E. Brown, and W.N. Martin, Fast genetic selection of features for neural network classifiers, IEEE Transactions on Neural Networks, vol. 3, no. 2, pp. 324-328, 1992. [4] F. Dellaert, and J. Vandewalle, Automatic design of cellular neural networks by means of genetic algorithms: finding a feature detector, in The Third IEEE International Workshop on Cellular Neural Networks and Their Applications, IEEE, New Jersey, pp. 189-194, 1994. [5] D.E. Moriarty, and R. Miikkulainen, Efficient reinforcement learning through symbiotic evolution, Machine Learning, vol. 22, pp. 11-33, 1996. [6] L. Davis, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991. [7] D. Whitely, The GENITOR algorithm and selective pressure, in Proceedings of the Third Interanational Conference on Genetic Algorithms, J.D. Schaffer Ed., Morgan Kauffman, San Mateo, CA, 1989, pp. 116-121. [8] van Camp, D., T. Plate and G.E. Hinton (1992). The Xerion Neural Network Simulator and Documentation. Department of Computer Science, University of Toronto, Toronto. ",
"neighbors": [
129,
247,
2451
],
"mask": "Test"
},
{
"node_id": 2447,
"label": 0,
"text": "Title: New Roles for Machine Learning in Design for Design of Educational Computing New roles for\nAbstract: Research on machine learning in design has concentrated on the use and development of techniques that can solve simple well-defined problems. Invariably, this effort, while important at the early stages of the development of the field, cannot scale up to address real design problems since all existing techniques are based on simplifying assumptions that do not hold for real design. In particular they do not address the dependence on context and multiple, often conflicting, interests that are constitutive of design. This paper analyzes the present situation and criticizes a number of prevailing views. Subsequently, the paper offers an alternative approach whose goal is to advance the use of machine learning in design practice. The approach is partially integrated into a modeling system called n-dim. The use of machine learning in n-dim is presented and open research issues are outlined. ",
"neighbors": [
378,
2010
],
"mask": "Train"
},
{
"node_id": 2448,
"label": 2,
"text": "Title: Automatic Smoothing Spline Projection Pursuit Automatic Smoothing Spline Projection Pursuit. \nAbstract: kj = 1. The standard PPR algorithm of Friedman and Stuet-zle (1981) estimates the smooth functions f j using the supersmoother nonparametric scatterplot smoother. Friedman's algorithm constructs a model with M max linear combinations, then prunes back to a simpler model of size M M max , where M and M max are specified by the user. This paper discusses an alternative algorithm in which the smooth functions are estimated using smoothing splines. The direction coefficients ff j , the amount of smoothing in each direction, and the number of terms M and M max are determined to optimize a single generalized cross-validation measure. ",
"neighbors": [
427,
519,
2311,
2526
],
"mask": "Train"
},
{
"node_id": 2449,
"label": 5,
"text": "Title: Learning by Refining Algorithm Sketches \nAbstract: In this paper we suggest a mechanism that improves significantly the performance of a top-down inductive logic programming (ILP) learning system. This improvement is achieved at the cost of giving to the system extra information that is not difficult to formulate. This information appears in the form of an algorithm sketch: an incomplete and somewhat vague representation of the computation related to a particular example. We describe which sketches are admissible, give details of the learning algorithm that exploits the information contained in the sketch. The experiments carried out with the implemented system (SKIL) have demonstrated the usefulness of the method and its potential in future applications. ",
"neighbors": [
1881,
2158,
2450
],
"mask": "Train"
},
{
"node_id": 2450,
"label": 5,
"text": "Title: Architecture for Iterative Learning of Recursive Definitions \nAbstract: In this paper we are concerned with the problem of inducing recursive Horn clauses from small sets of training examples. The method of iterative bootstrap induction is presented. In the first step, the system generates simple clauses, which can be regarded as properties of the required definition. Properties represent generalizations of the positive examples, simulating the effect of having larger number of examples. Properties are used subsequently to induce the required recursive definitions. This paper describes the method together with a series of experiments. The results support the thesis that iterative bootstrap induction is indeed an effective technique that could be of general use in ILP. ",
"neighbors": [
1498,
1881,
2449
],
"mask": "Train"
},
{
"node_id": 2451,
"label": 1,
"text": "Title: Automatic Design of Cellular Neural Networks by means of Genetic Algorithms: Finding a Feature Detector\nAbstract: This paper aims to examine the use of genetic algorithms to optimize subsystems of cellular neural network architectures. The application at hand is character recognition: the aim is to evolve an optimal feature detector in order to aid a conventional classifier network to generalize across different fonts. To this end, a performance function and a genetic encoding for a feature detector are presented. An experiment is described where an optimal feature detector is indeed found by the genetic algorithm. We are interested in the application of cellular neural networks in computer vision. Genetic algorithms (GA's) [1-3] can serve to optimize the design of cellular neural networks. Although the design of the global architecture of the system could still be done by human insight, we propose that specific sub-modules of the system are best optimized using one or other optimization method. GAs are a good candidate to fulfill this optimization role, as they are well suited to problems where the objective function is a complex function of many parameters. The specific problem we want to investigate is one of character recognition. More specifically, we would like to use the GA to find optimal feature detectors to be used in the recognition of digits . ",
"neighbors": [
129,
163,
1973,
2446
],
"mask": "Train"
},
{
"node_id": 2452,
"label": 2,
"text": "Title: A Bootstrap Evaluation of the Effect of Data Splitting on Financial Time Series \nAbstract: This article exposes problems of the commonly used technique of splitting the available data into training, validation, and test sets that are held fixed, warns about drawing too strong conclusions from such static splits, and shows potential pitfalls of ignoring variability across splits. Using a bootstrap or resampling method, we compare the uncertainty in the solution stemming from the data splitting with neural network specific uncertainties (parameter initialization, choice of number of hidden units, etc.). We present two results on data from the New York Stock Exchange. First, the variation due to different resamplings is significantly larger than the variation due to different network conditions. This result implies that it is important to not over-interpret a model (or an ensemble of models) estimated on one specific split of the data. Second, on each split, the neural network solution with early stopping is very close to a linear model; no significant nonlinearities are extracted. ",
"neighbors": [
1315,
2595
],
"mask": "Train"
},
{
"node_id": 2453,
"label": 4,
"text": "Title: Packet Routing and Reinforcement Learning: Estimating Shortest Paths in Dynamic Graphs \nAbstract: This article exposes problems of the commonly used technique of splitting the available data into training, validation, and test sets that are held fixed, warns about drawing too strong conclusions from such static splits, and shows potential pitfalls of ignoring variability across splits. Using a bootstrap or resampling method, we compare the uncertainty in the solution stemming from the data splitting with neural network specific uncertainties (parameter initialization, choice of number of hidden units, etc.). We present two results on data from the New York Stock Exchange. First, the variation due to different resamplings is significantly larger than the variation due to different network conditions. This result implies that it is important to not over-interpret a model (or an ensemble of models) estimated on one specific split of the data. Second, on each split, the neural network solution with early stopping is very close to a linear model; no significant nonlinearities are extracted. ",
"neighbors": [
427,
2666
],
"mask": "Validation"
},
{
"node_id": 2454,
"label": 2,
"text": "Title: Early Stopping but when? \nAbstract: Validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped before convergence to avoid the overfitting (\"early stopping\"). The exact criterion used for validation-based early stopping, however, is usually chosen in an ad-hoc fashion or training is stopped interactively. This trick describes how to select a stopping criterion in a systematic fashion; it is a trick for either speeding learning procedures or improving generalization, whichever is more important in the particular situation. An empirical investigation on multi-layer perceptrons shows that there exists a tradeoff between training time and generalization: From the given mix of 1296 training runs using different 12 problems and 24 different network architectures I conclude slower stopping criteria allow for small improvements in generalization (here: about 4% on average), but cost much more training time (here: about factor 4 longer on average).",
"neighbors": [
881,
1058,
1342,
2129
],
"mask": "Train"
},
{
"node_id": 2455,
"label": 6,
"text": "Title: Learning From a Population of Hypotheses \nAbstract: We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents. ",
"neighbors": [
453,
456,
2354
],
"mask": "Train"
},
{
"node_id": 2456,
"label": 3,
"text": "Title: Markov Chain Monte Carlo in Practice: A Roundtable Discussion \nAbstract: Markov chain Monte Carlo (MCMC) methods make possible the use of flexible Bayesian models that would otherwise be computationally infeasible. In recent years, a great variety of such applications have been described in the literature. Applied statisticians who are new to these methods may have several questions and concerns, however: How much effort and expertise are needed to design and use a Markov chain sampler? How much confidence can one have in the answers that MCMC produces? How does the use of MCMC affect the rest of the model-building process? At the Joint Statistical Meetings in August, 1996, a panel of experienced MCMC users discussed these and other issues, as well as various \"tricks of the trade\". This paper is an edited recreation of that discussion. Its purpose is to offer advice and guidance to novice users of MCMC and to not-so-novice users as well. Topics include building confidence in simulation results, methods for speeding convergence, assessing standard errors, identification of models for which good MCMC algorithms exist, and the current state of software development. ",
"neighbors": [
41,
48,
1029,
1742
],
"mask": "Train"
},
{
"node_id": 2457,
"label": 2,
"text": "Title: In Proceedings of the 1997 Sian Kaan International Workshop on Neural Networks and Neurocontrol. Real-Valued\nAbstract: 2 Neural Network & Machine Learning Laboratory Computer Science Department Brigham Young University Provo, UT 84602, USA Email: martinez@cs.byu.edu WWW: http://axon.cs.byu.edu Abstract. Many neural network models must be trained by finding a set of real-valued weights that yield high accuracy on a training set. Other learning models require weights on input attributes that yield high leave-one-out classification accuracy in order to avoid problems associated with irrelevant attributes and high dimensionality. In addition, there are a variety of general problems for which a set of real values must be found which maximize some evaluation function. This paper presents an algorithm for doing a schemata search over a real-valued weight space to find a set of weights (or other real values) that yield high values for a given evaluation function. The algorithm, called the Real-Valued Schemata Search (RVSS), uses the BRACE statistical technique [Moore & Lee, 1993] to determine when to narrow the search space. This paper details the RVSS approach and gives initial empirical results. ",
"neighbors": [
690,
1980
],
"mask": "Train"
},
{
"node_id": 2458,
"label": 3,
"text": "Title: ESTIMATING THE SQUARE ROOT OF A DENSITY VIA COMPACTLY SUPPORTED WAVELETS \nAbstract: A large body of nonparametric statistical literature is devoted to density estimation. Overviews are given in Silverman (1986) and Izenman (1991). This paper addresses the problem of univariate density estimation in a novel way. Our approach falls in the class of so called projection estimators, introduced by Cencov (1962). The orthonor-mal basis used is a basis of compactly supported wavelets from Daubechies' family. Kerkyacharian and Picard (1992, 1993), Donoho et al. (1996), and Delyon and Judit-sky (1993), among others, applied wavelets in density estimation. The local nature of wavelet functions makes the wavelet estimator superior to projection estimators that use classical orthonormal bases (Fourier, Hermite, etc.) Instead of estimating the unknown density directly, we estimate the square root of the density, which enables us to control the positiveness and the L 1 norm of the density estimate. However, in that approach one needs a pre-estimator of the density to calculate sample wavelet coefficients. We describe VISUSTOP, a data-driven procedure for determining the maximum number of levels in the wavelet density estimator. Coefficients in the selected levels are thresholded to make the estimator parsimonious. ",
"neighbors": [
2242,
2366,
2506
],
"mask": "Validation"
},
{
"node_id": 2459,
"label": 2,
"text": "Title: Control of Selective Visual Attention: Modeling the \"Where\" Pathway \nAbstract: Intermediate and higher vision processes require selection of a subset of the available sensory information before further processing. Usually, this selection is implemented in the form of a spatially circumscribed region of the visual field, the so-called \"focus of attention\" which scans the visual scene dependent on the input and on the attentional state of the subject. We here present a model for the control of the focus of attention in primates, based on a saliency map. This mechanism is not only expected to model the functionality of biological vision but also to be essential for the understanding of complex scenes in machine vision.",
"neighbors": [
553,
2606
],
"mask": "Train"
},
{
"node_id": 2460,
"label": 3,
"text": "Title: Mutual Information as a Bayesian Measure of Independence \nAbstract: 0.0 Abstract. The problem of hypothesis testing is examined from both the historical and the Bayesian points of view in the case that sampling is from an underlying joint probability distribution and the hypotheses tested for are those of independence and dependence of the underlying distribution. Exact results for the Bayesian method are provided. Asymptotic Bayesian results and historical method quantities are compared, and historical method quantities are interpreted in terms of clearly defined Bayesian quantities. The asymptotic Bayesian test relies upon a statistic that is predominantly mutual information. Problems of hypothesis testing arise ubiquitously in situations where observed data is produced by an unknown process and the question is asked From what process did this observed data arise? Historically, the hypothesis testing problem is approached from the point of view of sampling, whereby several fixed hypotheses to be tested for are given, and all measures of the test and its quality are found directly from the likelihood, i.e. by what amounts to sampling the likelihood [2] [3]. (To be specific, a hypothesis is a set of possible parameter vectors, each parameter vector completely specifying a sampling distribution. A simple hypothesis is a hypothesis set that contains one parameter vector. A composite hypothesis occurs when the (nonempty) hypothesis set is not a single parameter vector.) Generally, the test procedure chooses as true the hypothesis that gives the largest test value, although the notion of procedure is not specific and may refer to any method for choosing the hypothesis given the test values. Since it is of interest to quantify the quality of the test, a level of significance is generated, this level being the probability that, under the chosen hypothesis and test procedure, an incorrect hypothesis choice is made. The significance is generated using the sampling distribution, or likelihood. For simple hypotheses the level of significance is found using the single parameter value of the hypothesis. When a test is applied in the case of a composite hypothesis, a size for the test is found that is given by the supremum probability (ranging over the parameter vectors in the hypothesis set) that under the chosen ",
"neighbors": [
2384
],
"mask": "Train"
},
{
"node_id": 2461,
"label": 3,
"text": "Title: A guide to the literature on learning probabilistic networks from data \nAbstract: This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples. ",
"neighbors": [
1076,
1078,
2492
],
"mask": "Validation"
},
{
"node_id": 2462,
"label": 3,
"text": "Title: Building Classifiers using Bayesian Networks \nAbstract: Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state of the art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we examine and evaluate approaches for inducing classifiers from data, based on recent results in the theory of learning Bayesian networks. Bayesian networks are factored representations of probability distributions that generalize the naive Bayes classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness which are characteristic of naive Bayes. We experimentally tested these approaches using benchmark problems from the U. C. Irvine repository, and compared them against C4.5, naive Bayes, and wrapper-based feature selection methods. ",
"neighbors": [
401,
1986
],
"mask": "Validation"
},
{
"node_id": 2463,
"label": 3,
"text": "Title: Learning Belief Networks in the Presence of Missing Values and Hidden Variables \nAbstract: In recent years there has been a flurry of works on learning probabilistic belief networks. Current state of the art methods have been shown to be successful for two learning scenarios: learning both network structure and parameters from complete data, and learning parameters for a fixed network from incomplete datathat is, in the presence of missing values or hidden variables. However, no method has yet been demonstrated to effectively learn network structure from incomplete data. In this paper, we propose a new method for learning network structure from incomplete data. This method is based on an extension of the Expectation-Maximization (EM) algorithm for model selection problems that performs search for the best structure inside the EM procedure. We prove the convergence of this algorithm, and adapt it for learning belief networks. We then describe how to learn networks in two scenarios: when the data contains missing values, and in the presence of hidden variables. We provide experimental results that show the effectiveness of our procedure in both scenarios.",
"neighbors": [
558,
1934
],
"mask": "Train"
},
{
"node_id": 2464,
"label": 3,
"text": "Title: Static Data Association with a Terrain-Based Prior Density \nAbstract: In recent years there has been a flurry of works on learning probabilistic belief networks. Current state of the art methods have been shown to be successful for two learning scenarios: learning both network structure and parameters from complete data, and learning parameters for a fixed network from incomplete datathat is, in the presence of missing values or hidden variables. However, no method has yet been demonstrated to effectively learn network structure from incomplete data. In this paper, we propose a new method for learning network structure from incomplete data. This method is based on an extension of the Expectation-Maximization (EM) algorithm for model selection problems that performs search for the best structure inside the EM procedure. We prove the convergence of this algorithm, and adapt it for learning belief networks. We then describe how to learn networks in two scenarios: when the data contains missing values, and in the presence of hidden variables. We provide experimental results that show the effectiveness of our procedure in both scenarios.",
"neighbors": [
1817
],
"mask": "Test"
},
{
"node_id": 2465,
"label": 0,
"text": "Title: A systematic description of greedy optimisation algorithms for cost sensitive generalisation \nAbstract: This paper defines a class of problems involving combinations of induction and (cost) optimisation. A framework is presented that systematically describes problems that involve construction of decision trees or rules, optimising accuracy as well as measurement- and misclassification costs. It does not present any new algorithms but shows how this framework can be used to configure greedy algorithms for constructing such trees or rules. The framework covers a number of existing algorithms. Moreover, the framework can also be used to define algorithm configurations with new functionalities, as expressed in their evaluation functions.",
"neighbors": [
228,
638,
2057
],
"mask": "Train"
},
{
"node_id": 2466,
"label": 0,
"text": "Title: FLARE: Induction with Prior Knowledge \nAbstract: This paper defines a class of problems involving combinations of induction and (cost) optimisation. A framework is presented that systematically describes problems that involve construction of decision trees or rules, optimising accuracy as well as measurement- and misclassification costs. It does not present any new algorithms but shows how this framework can be used to configure greedy algorithms for constructing such trees or rules. The framework covers a number of existing algorithms. Moreover, the framework can also be used to define algorithm configurations with new functionalities, as expressed in their evaluation functions.",
"neighbors": [
831,
1830,
2240
],
"mask": "Validation"
},
{
"node_id": 2467,
"label": 6,
"text": "Title: Learning to Reason with a Restricted View \nAbstract: The Learning to Reason framework combines the study of Learning and Reasoning into a single task. Within it, learning is done specifically for the purpose of reasoning with the learned knowledge. Computational considerations show that this is a useful paradigm; in some cases learning and reasoning problems that are intractable when studied separately become tractable when performed as a task of Learning to Reason. In this paper we study Learning to Reason problems where the interaction with the world supplies the learner only partial information in the form of partial assignments. Several natural interpretations of partial assignments are considered and learning and reasoning algorithms using these are developed. The results presented exhibit a tradeoff between learnability, the strength of the oracles used in the interface, and the range of reasoning queries the learner is guaranteed to answer correctly.",
"neighbors": [
323,
2155,
2468
],
"mask": "Train"
},
{
"node_id": 2468,
"label": 3,
"text": "Title: On the Hardness of Approximate Reasoning \nAbstract: Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider various methods used in approximate reasoning such as computing degree of belief and Bayesian belief networks, as well as reasoning techniques such as constraint satisfaction and knowledge compilation, that use approximation to avoid computational difficulties, and reduce them to model-counting problems over a propositional domain. We prove that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, and even when the size of clauses and number of occurrences of the variables are extremely limited. This should be contrasted with the case of deductive reasoning, where Horn theories and theories with binary clauses are distinguished by the existence of linear time satisfiability algorithms. What is even more surprising is that, as we show, even approximating the number of satisfying assignments (i.e., \"approximating\" approximate reasoning), is intractable for most of these restricted theories. We also identify some restricted classes of propositional formulae for which efficient algorithms for counting satisfying assignments can be given. fl Preliminary version of this paper appeared in the Proceedings of the 13th International Joint Conference on Artificial Intelligence, IJCAI93. y Supported by NSF grants CCR-89-02500 and CCR-92-00884 and by DARPA AFOSR-F4962-92-J-0466. ",
"neighbors": [
2467
],
"mask": "Validation"
},
{
"node_id": 2469,
"label": 2,
"text": "Title: Extended Kalman filter in recurrent neural network training and pruning \nAbstract: Recently, extended Kalman filter (EKF) based training has been demonstrated to be effective in neural network training. However, its conjunction with pruning methods such as weight decay and optimal brain damage (OBD) has not yet been studied. In this paper, we will elucidate the method of EKF training and propose a pruning method which is based on the results obtained by EKF training. These combined training pruning method is applied to a time series prediction problem.",
"neighbors": [
1789
],
"mask": "Train"
},
{
"node_id": 2470,
"label": 1,
"text": "Title: Induction and Recapitulation of Deep Musical Structure \nAbstract: We describe recent extensions to our framework for the automatic generation of music-making programs. We have previously used genetic programming techniques to produce music-making programs that satisfy user-provided critical criteria. In this paper we describe new work on the use of connectionist techniques to automatically induce musical structure from a corpus. We show how the resulting neural networks can be used as critics that drive our genetic programming system. We argue that this framework can potentially support the induction and recapitulation of deep structural features of music. We present some initial results produced using neural and hybrid symbolic/neural critics, and we discuss directions for future work.",
"neighbors": [
1230,
1277,
2101,
2643,
2646
],
"mask": "Test"
},
{
"node_id": 2471,
"label": 0,
"text": "Title: ILA: Combining Inductive Learning with Prior Knowledge and Reasoning \nAbstract: We describe recent extensions to our framework for the automatic generation of music-making programs. We have previously used genetic programming techniques to produce music-making programs that satisfy user-provided critical criteria. In this paper we describe new work on the use of connectionist techniques to automatically induce musical structure from a corpus. We show how the resulting neural networks can be used as critics that drive our genetic programming system. We argue that this framework can potentially support the induction and recapitulation of deep structural features of music. We present some initial results produced using neural and hybrid symbolic/neural critics, and we discuss directions for future work.",
"neighbors": [
2245
],
"mask": "Train"
},
{
"node_id": 2472,
"label": 4,
"text": "Title: Toward an Ideal Trainer* \nAbstract: This paper appeared in 1994 in Machine Learning, 15 (3): 251-277. Abstract This paper demonstrates how the nature of the opposition during training affects learning to play two-person, perfect information board games. It considers different kinds of competitive training, the impact of trainer error, appropriate metrics for post-training performance measurement, and the ways those metrics can be applied. The results suggest that teaching a program by leading it repeatedly through the same restricted paths, albeit high quality ones, is overly narrow preparation for the variations that appear in real-world experience. The results also demonstrate that variety introduced into training by random choice is unreliable preparation, and that a program that directs its own training may overlook important situations. The results argue for a broad variety of training experience with play at many levels. This variety may either be inherent in the game or introduced deliberately into the training. Lesson and practice training, a blend of expert guidance and knowledge-based, self-directed elaboration, is shown to be particularly effective for learning during competition.",
"neighbors": [
565,
2476
],
"mask": "Test"
},
{
"node_id": 2473,
"label": 0,
"text": "Title: A Heuristic Approach to the Discovery of Macro-operators. Machine Learning, 3, 285-317. L e a\nAbstract: The negative effect is naturally more significant in the more complex domain. The graph for the simple domain crosses the 0 line earlier than the complex domain. That means that learning starts to be useful with weight greater than 0.6 for the simple domain and 0.7 for the complex domain. As we relax the optimality requirement more s i g n i f i c a n t l y ( w i t h a W = 0.8), macro usage in the more complex domain becomes more advantageous. The purpose of the research described in this paper is to identify the parameters that effects deductive learning and to perform experiments systematically in order to understand the nature of those effects. The goal of this paper is to demonstrate the methodology of performing parametric experimental study of deductive learning. The example here include the study of two parameters: the point on the satisficing-optimizing scale that is used during the search carried out during problem solving time and during learning time. We showed that A*, which looks for optimal solutions, cannot benefit from macro learning but as the strategy comes closer to best-first (satisficing search), the utility of macros increases. We also demonstrated that deductive learners that learn offline by solving training problems are sensitive to the type of search used during the learning. We showed that in general optimizing search is best for learning. It generates macros that increase the quality solutions regardless of the search method used during problem solving. It also improves the efficiency for problem solvers that require a high level of optimality. The only drawback in using optimizing search is the increase in learning resources spent. We are aware of the fact that the results described here are not very surprising. The goal of the parametric study is not necessarily to find exciting results, but to obtain results, sometimes even previously known, in a controlled experimental environment. The work described here is only part of our research plan. We are currently in the process of extensive experimentation with all the parameters described here and also with others. We also intend to test the validity of the conclusions reached during the study by repeating some of the tests in several of the commonly known search problems. We hope that such systematic experimentation will help the research community to better understand the process of deductive learning and will serve as a demonstration of the experimental methodology that should be used in machine learning research. ",
"neighbors": [
434,
551,
1192,
2551
],
"mask": "Train"
},
{
"node_id": 2474,
"label": 3,
"text": "Title: The Frame Problem and Bayesian Network Action Representations \nAbstract: We examine a number of techniques for representing actions with stochastic effects using Bayesian networks and influence diagrams. We compare these techniques according to ease of specification and size of the representation required for the complete specification of the dynamics of a particular system, paying particular attention the role of persistence relationships. We precisely characterize two components of the frame problem for Bayes nets and stochastic actions, propose several ways to deal with these problems, and compare our solutions with Re-iter's solution to the frame problem for the situation calculus. The result is a set of techniques that permit both ease of specification and compact representation of probabilistic system dynamics that is of comparable size (and timbre) to Reiter's representation (i.e., with no explicit frame axioms).",
"neighbors": [
62,
2078,
2425
],
"mask": "Train"
},
{
"node_id": 2475,
"label": 6,
"text": "Title: Learning polynomials with queries: The highly noisy case task for the case when F \nAbstract: Given a function f mapping n-variate inputs from a finite field F into F , we consider the task of reconstructing a list of all n-variate degree d polynomials which agree with f on a tiny but non-negligible fraction, ffi, of the input space. We give a randomized algorithm for solving this task which accesses f as a black box and runs in time polynomial in 1 d=jF j). For the special case when d = 1, we solve this problem for all * def jF j > 0. In this case the running time of our algorithm is bounded by a polynomial in 1 * ; n and exponential in d. Our algorithm generalizes a previously known algorithm, due to Goldreich and Levin, that solves this",
"neighbors": [
574,
591,
640,
1363,
2246
],
"mask": "Validation"
},
{
"node_id": 2476,
"label": 6,
"text": "Title: Why Experimentation can be better than \"Perfect Guidance\" \nAbstract: Many problems correspond to the classical control task of determining the appropriate control action to take, given some (sequence of) observations. One standard approach to learning these control rules, called behavior cloning, involves watching a perfect operator operate a plant, and then trying to emulate its behavior. In the experimental learning approach, by contrast, the learner first guesses an initial operation-to-action policy and tries it out. If this policy performs sub-optimally, the learner can modify it to produce a new policy, and recur. This paper discusses the relative effectiveness of these two approaches, especially in the presence of perceptual aliasing, showing in particular that the experimental learner can often learn more effectively than the cloning one. ",
"neighbors": [
294,
2472
],
"mask": "Validation"
},
{
"node_id": 2477,
"label": 2,
"text": "Title: Neural Models for Part-Whole Hierarchies \nAbstract: We present a connectionist method for representing images that explicitly addresses their hierarchical nature. It blends data from neu-roscience about whole-object viewpoint sensitive cells in inferotem-poral cortex 8 and attentional basis-field modulation in V4 3 with ideas about hierarchical descriptions based on microfeatures. 5, 11 The resulting model makes critical use of bottom-up and top-down pathways for analysis and synthesis. 6 We illustrate the model with a simple example of representing information about faces.",
"neighbors": [
2678
],
"mask": "Train"
},
{
"node_id": 2478,
"label": 1,
"text": "Title: Culture Enhances the Evolvability of Cognition \nAbstract: This paper discusses the role of culture in the evolution of cognitive systems. We define culture as any information transmitted between individuals and between generations by non-genetic means. Experiments are presented that use genetic programming systems that include special mechanisms for cultural transmission of information. These systems evolve computer programs that perform cognitive tasks including mathematical function mapping and action selection in a virtual world. The data show that the presence of culture-supporting mechanisms can have a clear beneficial impact on the evolvability of correct programs. The implications that these results may have for cognitive science are briefly discussed. ",
"neighbors": [
2220,
2226
],
"mask": "Test"
},
{
"node_id": 2479,
"label": 1,
"text": "Title: A Transformation System for Interactive Reformulation of Design Optimization Strategies \nAbstract: Numerical design optimization algorithms are highly sensitive to the particular formulation of the optimization problems they are given. The formulation of the search space, the objective function and the constraints will generally have a large impact on the duration of the optimization process as well as the quality of the resulting design. Furthermore, the best formulation will vary from one application domain to another, and from one problem to another within a given application domain. Unfortunately, a design engineer may not know the best formulation in advance of attempting to set up and run a design optimization process. In order to attack this problem, we have developed a software environment that supports interactive formulation, testing and reformulation of design optimization strategies. Our system represents optimization strategies in terms of second-order dataflow graphs. Reformulations of strategies are implemented as transformations between dataflow graphs. The system permits the user to interactively generate and search a space of design optimization strategies, and experimentally evaluate their performance on test problems, in order to find a strategy that is suitable for his application domain. The system has been implemented in a domain independent fashion, and is being tested in the domain of racing yacht design. ",
"neighbors": [
2128,
2319
],
"mask": "Train"
},
{
"node_id": 2480,
"label": 4,
"text": "Title: Planning by Incremental Dynamic Programming \nAbstract: This paper presents the basic results and ideas of dynamic programming as they relate most directly to the concerns of planning in AI. These form the theoretical basis for the incremental planning methods used in the integrated architecture Dyna. These incremental planning methods are based on continually updating an evaluation function and the situation-action mapping of a reactive system. Actions are generated by the reactive system and thus involve minimal delay, while the incremental planning process guarantees that the actions and evaluation function will eventually be optimal|no matter how extensive a search is required. These methods are well suited to stochastic tasks and to tasks in which a complete and accurate model is not available. For tasks too large to implement the situation-action mapping as a table, supervised-learning methods must be used, and their capabilities remain a significant limitation of the approach.",
"neighbors": [
523,
565,
566,
653,
2485
],
"mask": "Train"
},
{
"node_id": 2481,
"label": 6,
"text": "Title: The Design and Evaluation of a Rule Induction Algorithm \nAbstract: technical report BYU-CS-93-11 June 1993 ",
"neighbors": [
1858
],
"mask": "Test"
},
{
"node_id": 2482,
"label": 0,
"text": "Title: CBR for Document Retrieval: The FAllQ Project \nAbstract: This paper reports about a project on document retrieval in an industrial setting. The objective is to provide a tool that helps finding documents related to a given query, such as answers in Frequently Asked Questions databases. A CBR approach has been used to develop a running prototypical system which is currently under practical evaluation. ",
"neighbors": [
1854,
1855,
2123,
2645
],
"mask": "Train"
},
{
"node_id": 2483,
"label": 6,
"text": "Title: How Many Queries are Needed to Learn? \nAbstract: We investigate the query complexity of exact learning in the membership and (proper) equivalence query model. We give a complete characterization of concept classes that are learnable with a polynomial number of polynomial sized queries in this model. We give applications of this characterization, including results on learning a natural subclass of DNF formulas, and on learning with membership queries alone. Query complexity has previously been used to prove lower bounds on the time complexity of exact learning. We show a new relationship between query complexity and time complexity in exact learning: If any \"honest\" class is exactly and properly learnable with polynomial query complexity, but not learnable in polynomial time, then P 6= NP. In particular, we show that an honest class is exactly polynomial-query learnable if and only if it is learnable using an oracle for p ",
"neighbors": [
1003,
1004,
1848
],
"mask": "Test"
},
{
"node_id": 2484,
"label": 0,
"text": "Title: The evaluation of Anapron: A case study in evaluating a case-based system \nAbstract: This paper presents a case study in evaluating a case-based system. It describes the evaluation of Anapron, a system that pronounces names by a combination of rule-based and case-based reasoning. Three sets of experiments were run on Anapron: a set of exploratory measurements to profile the system's operation; a comparison between Anapron and other name-pronunciation systems; and a set of studies that modified various parts of the system to isolate the contribution of each. Lessons learned from these experiments for CBR evaluation methodology and for CBR theory are discussed. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories of Cambridge, Massachusetts; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories. All rights reserved. ",
"neighbors": [
986,
1644,
2614,
2616
],
"mask": "Test"
},
{
"node_id": 2485,
"label": 4,
"text": "Title: Tight Performance Bounds on Greedy Policies Based on Imperfect Value Functions \nAbstract: Consider a given value function on states of a Markov decision problem, as might result from applying a reinforcement learning algorithm. Unless this value function equals the corresponding optimal value function, at some states there will be a discrepancy, which is natural to call the Bellman residual, between what the value function specifies at that state and what is obtained by a one-step lookahead along the seemingly best action at that state using the given value function to evaluate all succeeding states. This paper derives a tight bound on how far from optimal the discounted return for a greedy policy based on the given value function will be as a function of the maximum norm magnitude of this Bellman residual. A corresponding result is also obtained for value functions defined on state-action pairs, as are used in Q-learning. One significant application of these results is to problems where a function approximator is used to learn a value function, with training of the approxi-mator based on trying to minimize the Bellman residual across states or state-action pairs. When control is based on the use of the resulting value function, this result provides a link between how well the objectives of function approximator training are met and the quality of the resulting control. ",
"neighbors": [
173,
565,
566,
575,
749,
1378,
1816,
2480
],
"mask": "Train"
},
{
"node_id": 2486,
"label": 2,
"text": "Title: The Canonical Distortion Measure for Vector Quantization and Function Approximation \nAbstract: To measure the quality of a set of vector quantization points a means of measuring the distance between a random point and its quantization is required. Common metrics such as the Hamming and Euclidean metrics, while mathematically simple, are inappropriate for comparing natural signals such as speech or images. In this paper it is shown how an environment of functions on an input space X induces a canonical distortion measure (CDM) on X. The depiction canonical is justified because it is shown that optimizing the reconstruction error of X with respect to the CDM gives rise to optimal piecewise constant approximations of the functions in the environment. The CDM is calculated in closed form for several different function classes. An algorithm for training neural networks to implement the CDM is presented along with some en couraging experimental results. ",
"neighbors": [
1970,
2586
],
"mask": "Train"
},
{
"node_id": 2487,
"label": 6,
"text": "Title: An Optimized Theory Revision Module \nAbstract: Theory revision systems typically use a set of theory-to-theory transformations f k g to hill-climb from a given initial theory to a new theory whose empirical accuracy, over a given set of labeled training instances fc j g, is a local maximum. At the heart of each such process is an \"evaluator\", which compares the accuracy of the current theory KB with that of each of its \"neighbors\" f k (KB)g, with the goal of determining which neighbor has the highest accuracy. The obvious \"wrapper\" evaluator simply evaluates each individual neighbor theory KB k = k (KB) on each instance c j . As it can be very expensive to evaluate a single theory on a single instance, and there can be a great many training instances and a huge number of neighbors, this approach can be prohibitively slow. We present an alternative system which employs a smarter evaluator that quickly computes the accuracy of a transformed theory k (KB) by \"looking inside\" KB and reasoning about the effects of the k transformation. We compare the performance of with the naive wrapper system on real-world theories obtained from a fielded expert system, and find that runs over 35 times faster than , while attaining the same accuracy. This paper also discusses 's source of power. Keywords: theory revision, efficient algorithm, hill-climbing system Multiple Submissions: We have submited a related version of this paper to AAAI96. fl We gratefully acknowledge the many helpful comments on this report from George Drastal, Chandra Mouleeswaran and Geoff Towell. ",
"neighbors": [
52,
136,
430,
1823
],
"mask": "Train"
},
{
"node_id": 2488,
"label": 3,
"text": "Title: Density and hazard rate estimation for right censored data using wavelet methods \nAbstract: This paper describes a wavelet method for the estimation of density and hazard rate functions from randomly right censored data. We adopt a nonparametric approach in assuming that the density and hazard rate have no specific parametric form. The method is based on dividing the time axis into a dyadic number of intervals and then counting the number of events within each interval. The number of events and the survival function of the observations are then separately smoothed over time via linear wavelet smoothers, and then the hazard rate function estimators are obtained by taking the ratio. We prove that the estimators possess pointwise and global mean square consistency, obtain the best possible asymptotic MISE convergence rate and are also asymptotically normally distributed. We also describe simulation experiments that show these estimators are reasonably reliable in practice. The method is illustrated with two real examples. The first uses survival time data for patients with liver metastases from a colorectal primary tumour without other distant metastases. The second is concerned with times of unemployment for women and the wavelet estimate, through its flexibility, provides a new and interesting interpretation. ",
"neighbors": [
1910
],
"mask": "Train"
},
{
"node_id": 2489,
"label": 0,
"text": "Title: BECOMING AN EXPERT CASE-BASED REASONER: LEARNING TO ADAPT PRIOR CASES \nAbstract: Experience plays an important role in the development of human expertise. One computational model of how experience affects expertise is provided by research on case-based reasoning, which examines how stored cases encapsulating traces of specific prior problem-solving episodes can be retrieved and re-applied to facilitate new problem-solving. Much progress has been made in methods for accessing relevant cases, and case-based reasoning is receiving wide acceptance both as a technology for developing intelligent systems and as a cognitive model of a human reasoning process. However, one important aspect of case-based reasoning remains poorly understood: the process by which retrieved cases are adapted to fit new situations. The difficulty of encoding effective adaptation rules by hand is widely recognized as a serious impediment to the development of fully autonomous case-based reasoning systems. Consequently, an important question is how case-based reasoning systems might learn to improve their expertise at case adaptation. We present a framework for acquiring this expertise by using a combination of general adaptation rules, introspective reasoning, and case-based reasoning about the case adaptation task itself. ",
"neighbors": [
643,
1126,
2371,
2372
],
"mask": "Train"
},
{
"node_id": 2490,
"label": 1,
"text": "Title: Computer Evolution of Buildable Objects for Evolutionary Design by Computers \nAbstract: Experience plays an important role in the development of human expertise. One computational model of how experience affects expertise is provided by research on case-based reasoning, which examines how stored cases encapsulating traces of specific prior problem-solving episodes can be retrieved and re-applied to facilitate new problem-solving. Much progress has been made in methods for accessing relevant cases, and case-based reasoning is receiving wide acceptance both as a technology for developing intelligent systems and as a cognitive model of a human reasoning process. However, one important aspect of case-based reasoning remains poorly understood: the process by which retrieved cases are adapted to fit new situations. The difficulty of encoding effective adaptation rules by hand is widely recognized as a serious impediment to the development of fully autonomous case-based reasoning systems. Consequently, an important question is how case-based reasoning systems might learn to improve their expertise at case adaptation. We present a framework for acquiring this expertise by using a combination of general adaptation rules, introspective reasoning, and case-based reasoning about the case adaptation task itself. ",
"neighbors": [
1807
],
"mask": "Train"
},
{
"node_id": 2491,
"label": 2,
"text": "Title: Prosopagnosia in Modular Neural Network Models \nAbstract: There is strong evidence that face processing in the brain is localized. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent neural mechanisms. In this chapter, we use computational models to show how the face processing specialization apparently underlying prosopagnosia and visual object agnosia could be attributed to 1) a relatively simple competitive selection mechanism that, during development, devotes neural resources to the tasks they are best at performing, 2) the developing infant's need to perform subordinate classification (identification) of faces early on, and 3) the infant's low visual acuity at birth. Inspired by de Schonen and Mancini's (1998) arguments that factors like these could bias the visual system to develop a specialized face processor, and Jacobs and Kosslyn's (1994) experiments in the mixtures of experts (ME) modeling paradigm, we provide a preliminary computational demonstration of how this theory accounts for the double dissociation between face and object processing. We present two feed-forward computational models of visual processing. In both models, the selection mechanism is a gating network that mediates a competition between modules attempting to classify input stimuli. In Model I, when the modules are simple unbiased classifiers, the competition is sufficient to achieve enough of a specialization that damaging one module impairs the model's face recognition more than its object recognition, and damaging the other module impairs the model's object recognition more than its face recognition. In Model II, however, we bias the modules by providing one with low spatial frequency information and the other with high spatial frequency information. In this case, when the model's task is subordinate classification of faces and superordinate classification of objects, the low spatial frequency network shows an even stronger specialization for faces. No other combination of tasks and inputs shows this strong specialization. We take these results as support for the idea that something resembling a face processing \"module\" could arise as a natural consequence of the infant's developmental environment without being innately specified. ",
"neighbors": [
1981
],
"mask": "Train"
},
{
"node_id": 2492,
"label": 3,
"text": "Title: Robust Parameter Learning in Bayesian Networks with Missing Data \nAbstract: There is strong evidence that face processing in the brain is localized. The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent neural mechanisms. In this chapter, we use computational models to show how the face processing specialization apparently underlying prosopagnosia and visual object agnosia could be attributed to 1) a relatively simple competitive selection mechanism that, during development, devotes neural resources to the tasks they are best at performing, 2) the developing infant's need to perform subordinate classification (identification) of faces early on, and 3) the infant's low visual acuity at birth. Inspired by de Schonen and Mancini's (1998) arguments that factors like these could bias the visual system to develop a specialized face processor, and Jacobs and Kosslyn's (1994) experiments in the mixtures of experts (ME) modeling paradigm, we provide a preliminary computational demonstration of how this theory accounts for the double dissociation between face and object processing. We present two feed-forward computational models of visual processing. In both models, the selection mechanism is a gating network that mediates a competition between modules attempting to classify input stimuli. In Model I, when the modules are simple unbiased classifiers, the competition is sufficient to achieve enough of a specialization that damaging one module impairs the model's face recognition more than its object recognition, and damaging the other module impairs the model's object recognition more than its face recognition. In Model II, however, we bias the modules by providing one with low spatial frequency information and the other with high spatial frequency information. In this case, when the model's task is subordinate classification of faces and superordinate classification of objects, the low spatial frequency network shows an even stronger specialization for faces. No other combination of tasks and inputs shows this strong specialization. We take these results as support for the idea that something resembling a face processing \"module\" could arise as a natural consequence of the infant's developmental environment without being innately specified. ",
"neighbors": [
577,
1900,
2461,
2547
],
"mask": "Test"
},
{
"node_id": 2493,
"label": 5,
"text": "Title: Relational Knowledge Discovery in Databases \nAbstract: In this paper, we indicate some possible applications of ILP or similar techniques in the knowledge discovery field, and then discuss several methods for adapting and linking ILP-systems to relational database systems. The proposed methods range from \"pure ILP\" to \"based on techniques originating in ILP\". We show that it is both easy and advantageous to adapt ILP-systems in this way.",
"neighbors": [
1428,
2426
],
"mask": "Train"
},
{
"node_id": 2494,
"label": 4,
"text": "Title: Incremental Pruning: A Simple, Fast, Exact Algorithm for Partially Observable Markov Decision Processes \nAbstract: Most exact algorithms for general pomdps use a form of dynamic programming in which a piecewise-linear and convex representation of one value function is transformed into another. We examine variations of the \"incremental pruning\" approach for solving this problem and compare them to earlier algorithms from theoretical and empirical perspectives. We find that incremental pruning is presently the most efficient algorithm for solving pomdps.",
"neighbors": [
2063
],
"mask": "Train"
},
{
"node_id": 2495,
"label": 6,
"text": "Title: Similar Classifiers and VC Error Bounds \nAbstract: We improve error bounds based on VC analysis for classes with sets of similar classifiers. We apply the new error bounds to separating planes and artificial neural networks. Key words machine learning, learning theory, generalization, Vapnik-Chervonenkis, separating planes, neural networks. ",
"neighbors": [
58,
571,
1762,
2694
],
"mask": "Train"
},
{
"node_id": 2496,
"label": 2,
"text": "Title: Gene Structure Prediction by Linguistic Methods \nAbstract: The higher-order structure of genes and other features of biological sequences can be described by means of formal grammars. These grammars can then be used by general-purpose parsers to detect and assemble such structures by means of syntactic pattern recognition. We describe a grammar and parser for eukaryotic protein-encoding genes, which by some measures is as effective as current connectionist and combinatorial algorithms in predicting gene structures for sequence database entries. Parameters on the grammar rules are optimized for several different species, and mixing experiments performed to determine the degree of species specificity and the relative importance of compositional, signal-based, and syntactic components in gene prediction.",
"neighbors": [
613,
2107,
2571
],
"mask": "Test"
},
{
"node_id": 2497,
"label": 2,
"text": "Title: Learning a Specialization for Face Recognition: The Effect of Spatial Frequency \nAbstract: The double dissociation between prosopagnosia, a face recognition deficit occurring after brain damage, and visual object agnosia, difficulty recognizing other kinds of complex objects, indicates that face and non-face object recognition may be served by partially independent mechanisms in the brain. Such a dissociation could be the result of a competitive learning mechanism that, during development, devotes neural resources to the tasks they are best at performing. Studies of normal adult performance on face and object recognition tasks seem to indicate that face recognition is primarily configural, involving the low spatial frequency information present in a stimulus over relatively large distances, whereas object recognition is primarily featural, involving analysis of the object's parts using local, high spatial frequency information. In a feed-forward computational model of visual processing, two modules compete to classify input stimuli; when one module receives low spatial frequency information and the other receives high spatial frequency information, the low-frequency module shows a strong specialization for face recognition in a combined face identification/object classification task. The series of experiments shows that the fine discrimination necessary for distinguishing members of a visually homoge neous class such as faces relies heavily on the low spatial frequencies present in a stimulus.",
"neighbors": [
1915
],
"mask": "Train"
},
{
"node_id": 2498,
"label": 2,
"text": "Title: Combining Exploratory Projection Pursuit And Projection Pursuit Regression With Application To Neural Networks \nAbstract: We present a novel classification and regression method that combines exploratory projection pursuit (unsupervised training) with projection pursuit regression (supervised training), to yield a new family of cost/complexity penalty terms. Some improved generalization properties are demonstrated on real world problems.",
"neighbors": [
359,
2147,
2322,
2499,
2500,
2567
],
"mask": "Test"
},
{
"node_id": 2499,
"label": 2,
"text": "Title: Objective Function Formulation of the BCM Theory of Visual Cortical Plasticity: Statistical Connections, Stability Conditions \nAbstract: In this paper, we present an objective function formulation of the BCM theory of visual cortical plasticity that permits us to demonstrate the connection between the unsupervised BCM learning procedure and various statistical methods, in particular, that of Projection Pursuit. This formulation provides a general method for stability analysis of the fixed points of the theory and enables us to analyze the behavior and the evolution of the network under various visual rearing conditions. It also allows comparison with many existing unsupervised methods. This model has been shown successful in various applications such as phoneme and 3D object recognition. We thus have the striking and possibly highly significant result that a biological neuron is performing a sophisticated statistical procedure. ",
"neighbors": [
203,
359,
863,
1068,
1418,
1787,
1871,
1935,
2147,
2322,
2357,
2376,
2385,
2422,
2498,
2500,
2505
],
"mask": "Train"
},
{
"node_id": 2500,
"label": 2,
"text": "Title: Face Recognition using a Hybrid Supervised/Unsupervised Neural Network \nAbstract: A system for automatic face recognition is presented. It consists of several steps; Automatic detection of the eyes and mouth is followed by a spatial normalization of the images. The classification of the normalized images is carried out by a hybrid (supervised and unsupervised) Neural Network. Two methods for reducing the overfitting a common problem in high dimensional classification schemes are presented, and the superiority of their combination is demonstrated. ",
"neighbors": [
1068,
2322,
2376,
2422,
2498,
2499
],
"mask": "Validation"
},
{
"node_id": 2501,
"label": 2,
"text": "Title: EMRBF: A Statistical Basis for Using Radial Basis Functions for Process Control \nAbstract: Radial Basis Function (RBF) neural networks offer an attractive equation form for use in model-based control because they can approximate highly nonlinear plants and yet are well suited for linear adaptive control. We show how interpreting RBFs as mixtures of Gaussians allows the application of many statistical tools including the EM algorithm for parameter estimation. The resulting EMRBF models give uncertainty estimates and warn when they are extrapolating beyond the region where training data was available. ",
"neighbors": [
611,
2260
],
"mask": "Train"
},
{
"node_id": 2502,
"label": 0,
"text": "Title: Modeling Ill-Structured Optimization Tasks through Cases \nAbstract: CABINS is a framework of modeling an optimization task in ill-structured domains. In such domains, neither systems nor human experts possess the exact model for guiding optimization. And the user's model of optimality is subjective and situation-dependent. CABINS optimizes a solution through iterative revision using case-based reasoning. In CABINS, task structure analysis was adopted for creating an initial model of the optimization task. Generic vocabularies found in the analysis were specialized into case feature descriptions for application problems. Extensive experimentation on job shop scheduling problems has shown that CABINS can operationalize and improve the model through the accumulation of cases. ",
"neighbors": [
717,
2605
],
"mask": "Train"
},
{
"node_id": 2503,
"label": 2,
"text": "Title: `Balancing' of conductances may explain irregular cortical spiking. \nAbstract: Five related factors are identified which enable single compartment Hodgkin-Huxley model neurons to convert random synaptic input into irregular spike trains similar to those seen in in vivo cortical recordings. We suggest that cortical neurons may operate in a narrow parameter regime where synaptic and intrinsic conductances are balanced to re flect, through spike timing, detailed correlations in the inputs. fl Please send comments to tony@salk.edu. The reference for this paper is: Technical Report no. INC-9502, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523. ",
"neighbors": [
2358
],
"mask": "Train"
},
{
"node_id": 2504,
"label": 1,
"text": "Title: Genetic Encoding Strategies for Neural Networks \nAbstract: The application of genetic algorithms to neural network optimization (GANN) has produced an active field of research. This paper proposes a classification of the encoding strategies and it also gives a critical analysis of the current state of development. The idea of evolving artificial neural networks (NN) by genetic algorithms (GA) is based on a powerful metaphor: the evolution of the human brain. This mechanism has developed the highest form of intelligence known from scratch. The metaphor has inspired a great deal of research activities that can be traced to the late 1980s (for instance [15]). An increasing amount of research reports, journal papers and theses have been published on the topic, generating a conti-nously growing field. Researchers have devoloped a variety of different techniques to encode neural networks for the GA, with increasing complexity. This young field is driven mostly by small, independet research groups that scarcely cooperate with each other. This paper will attempt to analyse and to structure the already performed work, and to point out the shortcomings of the approaches. ",
"neighbors": [
1536,
1663,
2667
],
"mask": "Train"
},
{
"node_id": 2505,
"label": 2,
"text": "Title: Three-Dimensional Object Recognition Using an Unsupervised BCM Network: The Usefulness of Distinguishing Features \nAbstract: We propose an object recognition scheme based on a method for feature extraction from gray level images that corresponds to recent statistical theory, called projection pursuit, and is derived from a biologically motivated feature extracting neuron. To evaluate the performance of this method we use a set of very detailed psychophysical 3D object recognition experiments (Bulthoff and Edelman, 1992). ",
"neighbors": [
359,
611,
2499
],
"mask": "Test"
},
{
"node_id": 2506,
"label": 3,
"text": "Title: Nonlinear wavelet shrinkage with Bayes rules and Bayes factors 1 \nAbstract: Wavelet shrinkage,the method proposed by seminal work of Donohoand Johnstone is a disarmingly simple and efficient way of de-noising data. Shrinking wavelet coefficients was proposed from several optimality criteria. The most notable are the asymptotic minimax and cross-validation criteria. In this paper a wavelet shrinkage by imposing natural properties of Bayesian models on data is proposed. The performance of methods are tested on standard Donoho-Johnstone test functions. Key Words and Phrases: Wavelets, Discrete Wavelet Transform, Thresholding, Bayes Model. 1991 AMS Subject Classification: 42A06, 62G07. ",
"neighbors": [
1910,
2366,
2416,
2458,
2575,
2661
],
"mask": "Train"
},
{
"node_id": 2507,
"label": 2,
"text": "Title: The Observer-Observation Dilemma in Neuro-Forecasting: Reliable Models From Unreliable Data Through CLEARNING \nAbstract: This paper introduces the idea of clearning, of simultaneously cleaning data and learning the underlying structure. The cleaning step can be viewed as top-down processing (the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). After discussing the statistical foundation of the proposed method from a maximum likelihood perspective, we apply clearning to a notoriously hard problem where benchmark performances are very well known: the prediction of foreign exchange rates. On the difficult 1993-1994 test period, clearning in conjunction with pruning yields an annualized return between 35 and 40% (out-of-sample), significantly better than an otherwise identical network trained without cleaning. The network was started with 69 inputs and 15 hidden units and ended up with only 39 non-zero weights between inputs and hidden units. The resulting ultra-sparse final architectures obtained with clearning and pruning are immune against overfitting, even on very noisy problems since the cleaned data allow for a simpler model. Apart from the very competitive performance, clearning gives insight into the data: we show how to estimate the overall signal-to-noise ratio of each input variable, and we show that error estimates for each pattern can be used to detect and remove outliers, and to replace missing or corrupted data by cleaned values. Clearning can be used in any nonlinear regression or classification problem.",
"neighbors": [
371,
2239,
2373,
2374
],
"mask": "Train"
},
{
"node_id": 2508,
"label": 6,
"text": "Title: WRAPPERS FOR PERFORMANCE ENHANCEMENT AND OBLIVIOUS DECISION GRAPHS \nAbstract: This paper introduces the idea of clearning, of simultaneously cleaning data and learning the underlying structure. The cleaning step can be viewed as top-down processing (the model modifies the data), and the learning step can be viewed as bottom-up processing (where the data modifies the model). After discussing the statistical foundation of the proposed method from a maximum likelihood perspective, we apply clearning to a notoriously hard problem where benchmark performances are very well known: the prediction of foreign exchange rates. On the difficult 1993-1994 test period, clearning in conjunction with pruning yields an annualized return between 35 and 40% (out-of-sample), significantly better than an otherwise identical network trained without cleaning. The network was started with 69 inputs and 15 hidden units and ended up with only 39 non-zero weights between inputs and hidden units. The resulting ultra-sparse final architectures obtained with clearning and pruning are immune against overfitting, even on very noisy problems since the cleaned data allow for a simpler model. Apart from the very competitive performance, clearning gives insight into the data: we show how to estimate the overall signal-to-noise ratio of each input variable, and we show that error estimates for each pattern can be used to detect and remove outliers, and to replace missing or corrupted data by cleaned values. Clearning can be used in any nonlinear regression or classification problem.",
"neighbors": [
322,
632,
1235,
2137,
2577
],
"mask": "Test"
},
{
"node_id": 2509,
"label": 6,
"text": "Title: Applying Winnow to Context-Sensitive Spelling Correction \nAbstract: Multiplicative weight-updating algorithms such as Winnow have been studied extensively in the COLT literature, but only recently have people started to use them in applications. In this paper, we apply a Winnow-based algorithm to a task in natural language: context-sensitive spelling correction. This is the task of fixing spelling errors that happen to result in valid words, such as substituting to for too, casual for causal, and so on. Previous approaches to this problem have been statistics-based; we compare Winnow to one of the more successful such approaches, which uses Bayesian classifiers. We find that: (1) When the standard (heavily-pruned) set of features is used to describe problem instances, Winnow performs comparably to the Bayesian method; (2) When the full (unpruned) set of features is used, Winnow is able to exploit the new features and convincingly outperform Bayes; and (3) When a test set is encountered that is dissimilar to the training set, Winnow is better than Bayes at adapting to the unfamiliar test set, using a strategy we will present for combining learning on the training set with unsupervised learning on the (noisy) test set. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Information Technology Center America; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Information Technology Center America. All rights reserved. ",
"neighbors": [
517,
1962,
2618
],
"mask": "Train"
},
{
"node_id": 2510,
"label": 3,
"text": "Title: Geometric Ergodicity and Hybrid Markov Chains \nAbstract: Various notions of geometric ergodicity for Markov chains on general state spaces exist. In this paper, we review certain relations and implications among them. We then apply these results to a collection of chains commonly used in Markov chain Monte Carlo simulation algorithms, the so-called hybrid chains. We prove that under certain conditions, a hybrid chain will \"inherit\" the geometric ergodicity of its constituent parts. Acknowledgements. We thank Charlie Geyer for a number of very useful comments regarding spectral theory and central limit theorems. We thank Alison Gibbs, Phil Reiss, Peter Rosenthal, and Richard Tweedie for very helpful discussions. We thank the referee and the editor for many excellent suggestions. ",
"neighbors": [
416,
1713,
1977,
1978,
1991,
2002,
2362
],
"mask": "Train"
},
{
"node_id": 2511,
"label": 6,
"text": "Title: A Faster Algorithm for the Perfect Phylogeny Problem when the number of Characters is Fixed TR94-05 \nAbstract: Various notions of geometric ergodicity for Markov chains on general state spaces exist. In this paper, we review certain relations and implications among them. We then apply these results to a collection of chains commonly used in Markov chain Monte Carlo simulation algorithms, the so-called hybrid chains. We prove that under certain conditions, a hybrid chain will \"inherit\" the geometric ergodicity of its constituent parts. Acknowledgements. We thank Charlie Geyer for a number of very useful comments regarding spectral theory and central limit theorems. We thank Alison Gibbs, Phil Reiss, Peter Rosenthal, and Richard Tweedie for very helpful discussions. We thank the referee and the editor for many excellent suggestions. ",
"neighbors": [
2083,
2141,
2320,
2345,
2418
],
"mask": "Train"
},
{
"node_id": 2512,
"label": 1,
"text": "Title: A Methodology for Strategy Optimization Under Uncertainty in the Extended Two-Dimensional Pursuer/Evader Problem \nAbstract: ",
"neighbors": [
1930
],
"mask": "Train"
},
{
"node_id": 2513,
"label": 2,
"text": "Title: Avoiding overfitting by locally matching the noise level of the data gating network discovers the\nAbstract: When trying to forecast the future behavior of a real-world system, two of the key problems are nonstationarity of the process (e.g., regime switching) and overfitting of the model (particularly serious for noisy processes). This articles shows how gated experts can point to solutions to these problems. The architecture, also called society of experts and mixture of experts consists of a (nonlinear) gating network and several (nonlinear) competing experts. Each expert learns a conditional mean (as usual), but each expert also has its own adaptive width. The gating network learns to assign a probability to each expert that depends on the input. This article first discusses the assumptions underlying this architecture and derives the weight update rules. It then evaluates the performance of gated experts in comparison to that of single networks, as well as to networks with two outputs, one predicting the mean, the other one the local error bar. This article also investigates the ability of gated experts to discover and characterize underlying the regimes. The results are: * there is significantly less overfitting compared to single nets, for two reasons: only subsets of the potential inputs are given to the experts and gating network (less of a curse of dimensionality), and the experts learn to match their variances to the (local) noise levels, thus only learning as This article focuses on the architecture and the overfitting problem. Applications to a computer-generated toy problem and the laser data from Santa Fe Competition are given in [Mangeas and Weigend, 1995], and the application to the real-world problem of predicting the electricity demand of France are given in [Mangeas et al., 1995]. much as the data support.",
"neighbors": [
74,
310,
1366,
2239,
2374
],
"mask": "Validation"
},
{
"node_id": 2514,
"label": 3,
"text": "Title: Learning Bayesian Prototype Trees by Simulated Annealing \nAbstract: Given a set of samples of an unknown probability distribution, we study the problem of constructing a good approximative Bayesian network model of the probability distribution in question. This task can be viewed as a search problem, where the goal is to find a maximal probability network model, given the data. In this work, we do not make an attempt to learn arbitrarily complex multi-connected Bayesian network structures, since such resulting models can be unsuitable for practical purposes due to the exponential amount of time required for the reasoning task. Instead, we restrict ourselves to a special class of simple tree-structured Bayesian networks called Bayesian prototype trees, for which a polynomial time algorithm for Bayesian reasoning exists. We show how the probability of a given Bayesian prototype tree model can be evaluated, given the data, and how this evaluation criterion can be used in a stochastic simulated annealing algorithm for searching the model space. The simulated annealing algorithm provably finds the maximal probability model, provided that a sufficient amount of time is used. ",
"neighbors": [
485,
1838,
1908,
2380,
2558
],
"mask": "Train"
},
{
"node_id": 2515,
"label": 1,
"text": "Title: Forward-Tracking: A Technique for Searching Beyond Failure \nAbstract: In many applications, such as decision support, negotiation, planning, scheduling, etc., one needs to express requirements that can only be partially satisfied. In order to express such requirements, we propose a technique called forward-tracking. Intuitively, forward-tracking is a kind of dual of chronological back-tracking: if a program globally fails to find a solution, then a new execution is started from a program point and a state `forward' in the computation tree. This search technique is applied to constraint logic programming, obtaining a powerful extension that preserves all the useful properties of the original scheme. We report on the successful practical application of forward-tracking to the evolutionary training of (constrained) neural networks. ",
"neighbors": [
1999,
2003
],
"mask": "Train"
},
{
"node_id": 2516,
"label": 1,
"text": "Title: When Gravity Fails: Local Search Topology \nAbstract: Local search algorithms for combinatorial search problems frequently encounter a sequence of states in which it is impossible to improve the value of the objective function; moves through these regions, called plateau moves, dominate the time spent in local search. We analyze and characterize plateaus for three different classes of randomly generated Boolean Satisfiability problems. We identify several interesting features of plateaus that impact the performance of local search algorithms. We show that local minima tend to be small but occasionally may be very large. We also show that local minima can be escaped without unsatisfying a large number of clauses, but that systematically searching for an escape route may be computationally expensive if the local minimum is large. We show that plateaus with exits, called benches, tend to be much larger than minima, and that some benches have very few exit states which local search can use to escape. We show that the solutions (i.e., global minima) of randomly generated problem instances form clusters, which behave similarly to local minima. We revisit several enhancements of local search algorithms and explain their performance in light of our results. Finally we discuss strategies for creating the next generation of local search algorithms.",
"neighbors": [
1946
],
"mask": "Train"
},
{
"node_id": 2517,
"label": 2,
"text": "Title: Solving the Temporal Binding Problem: A Neural Theory for Constructing and Updating Object Files \nAbstract: Visual objects are perceived only if their parts are correctly identified and integrated. A neural network theory is proposed that seeks to explain how the human visual system binds together visual properties, dispersed over space and time, of multiple objects a problem known as the temporal binding problem [49, 30]. The proposed theory is based upon neural mechanisms that construct and update object representations through the interactions of a serial attentional mechanism for location and object-based selection, preattentive Gestalt-based grouping mechanisms, and an associative memory structure that binds together object identity, form, and spatial information. A working model is presented that provides a unified quantitative explanation of results from psychophysical experiments on object review, object integration and multielement tracking. ",
"neighbors": [
2533
],
"mask": "Test"
},
{
"node_id": 2518,
"label": 1,
"text": "Title: Tracing the Behavior of Genetic Algorithms Using Expected Values of Bit and Walsh Products \nAbstract: We consider two methods for tracing genetic algorithms. The first method is based on the expected values of bit products and the second method on the expected values of Walsh products. We treat proportional selection, mutation and uniform and one-point crossover. As applications, we obtain results on stable points and fitness of schemata.",
"neighbors": [
163,
2298
],
"mask": "Train"
},
{
"node_id": 2519,
"label": 1,
"text": "Title: An Evolutionary Approach to Time Constrained Routing Problems \nAbstract: Routing problems are an important class of planning problems. Usually there are many different constraints and optimization criteria involved, and it is difficult to find general methods for solving routing problems. We propose an evolutionary solver for such planning problems. An instance of this solver has been tested on a specific routing problem with time constraints. The performance of this evolutionary solver is compared to a biased random solver and a biased hillclimber solver. Results show that the evolutionary solver performs significantly better than the other two solvers. ",
"neighbors": [
2264
],
"mask": "Validation"
},
{
"node_id": 2520,
"label": 0,
"text": "Title: Cooperative Case-Based Reasoning \nAbstract: We are investigating possible modes of cooperation among homogeneous agents with learning capabilities. In this paper we will be focused on agents that learn and solve problems using Case-based Reasoning (CBR), and we will present two modes of cooperation among them: Distributed Case-based Reasoning (DistCBR) and Collective Case-based Reasoning (ColCBR). We illustrate these modes with an application where different CBR agents able to recommend chromatography techniques for protein purification cooperate. The approach taken is to extend Noos, the representation language being used by the CBR agents. Noos is knowledge modeling framework designed to integrate learning methods and based on the task/method decomposition principle. The extension we present, Plural Noos, allows communication and cooperation among agents implemented in Noos by means of three basic constructs: alien references, foreign method evaluation, and mobile methods.",
"neighbors": [
66,
2359
],
"mask": "Test"
},
{
"node_id": 2521,
"label": 1,
"text": "Title: Case-Based Probability Factoring in Bayesian Belief Networks \nAbstract: Bayesian network inference can be formulated as a combinatorial optimization problem, concerning in the computation of an optimal factoring for the distribution represented in the net. Since the determination of an optimal factoring is a computationally hard problem, heuristic greedy strategies able to find approximations of the optimal factoring are usually adopted. In the present paper we investigate an alternative approach based on a combination of genetic algorithms (GA) and case-based reasoning (CBR). We show how the use of genetic algorithms can improve the quality of the computed factoring in case a static strategy is used (as for the MPE computation), while the combination of GA and CBR can still provide advantages in the case of dynamic strategies. Some preliminary results on different kinds of nets are then reported. ",
"neighbors": [
145,
163,
2164,
2529
],
"mask": "Validation"
},
{
"node_id": 2522,
"label": 2,
"text": "Title: A Symbolic Complexity Analysis of Connectionist Algorithms for Distributed-Memory Machines \nAbstract: This paper attempts to rigorously determine the computation and communication requirements of connectionist algorithms running on a distributed-memory machine. The strategy involves (1) specifying key connectionist algorithms in a high-level object-oriented language, (2) extracting their running times as polynomials, and (3) analyzing these polynomials to determine the algorithms' space and time complexity. Results are presented for various implementations of the back-propagation algorithm [4]. ",
"neighbors": [
2275
],
"mask": "Train"
},
{
"node_id": 2523,
"label": 2,
"text": "Title: ADAPTIVE LOOK-AHEAD PLANNING problem of finding good initial plans is solved by the use of\nAbstract: We present a new adaptive connectionist planning method. By interaction with an environment a world model is progressively constructed using the backpropagation learning algorithm. The planner constructs a look-ahead plan by iteratively using this model to predict future reinforcements. Future reinforcement is maximized to derive suboptimal plans, thus determining good actions directly from the knowledge of the model network (strategic level). This is done by gradient descent in action space. ",
"neighbors": [
2684
],
"mask": "Train"
},
{
"node_id": 2524,
"label": 3,
"text": "Title: ADAPTIVE LOOK-AHEAD PLANNING problem of finding good initial plans is solved by the use of\nAbstract: In the Proceedings of the Conference on Uncertainty in Artificial Intelli- gence (UAI-94), Seattle, WA, 46-54, July 29-31, 1994. Technical Report R-213-B April, 1994 Abstract Evaluation of counterfactual queries (e.g., \"If A were true, would C have been true?\") is important to fault diagnosis, planning, and determination of liability. In this paper we present methods for computing the probabilities of such queries using the formulation proposed in [Balke and Pearl, 1994], where the antecedent of the query is interpreted as an external action that forces the proposition A to be true. When a prior probability is available on the causal mechanisms governing the domain, counterfactual probabilities can be evaluated precisely. However, when causal knowledge is specified as conditional probabilities on the observables, only bounds can computed. This paper develops techniques for evaluating these bounds, and demonstrates their use in two applications: (1) the determination of treatment efficacy from studies in which subjects may choose their own treatment, and (2) the determination of liability in product-safety litigation. ",
"neighbors": [
260,
1527,
1894,
2088,
2166
],
"mask": "Validation"
},
{
"node_id": 2525,
"label": 3,
"text": "Title: Bayesian Networks \nAbstract: In the Proceedings of the Conference on Uncertainty in Artificial Intelli- gence (UAI-94), Seattle, WA, 46-54, July 29-31, 1994. Technical Report R-213-B April, 1994 Abstract Evaluation of counterfactual queries (e.g., \"If A were true, would C have been true?\") is important to fault diagnosis, planning, and determination of liability. In this paper we present methods for computing the probabilities of such queries using the formulation proposed in [Balke and Pearl, 1994], where the antecedent of the query is interpreted as an external action that forces the proposition A to be true. When a prior probability is available on the causal mechanisms governing the domain, counterfactual probabilities can be evaluated precisely. However, when causal knowledge is specified as conditional probabilities on the observables, only bounds can computed. This paper develops techniques for evaluating these bounds, and demonstrates their use in two applications: (1) the determination of treatment efficacy from studies in which subjects may choose their own treatment, and (2) the determination of liability in product-safety litigation. ",
"neighbors": [
1527,
2088
],
"mask": "Train"
},
{
"node_id": 2526,
"label": 2,
"text": "Title: Logistic Response Projection Pursuit \nAbstract: In the Proceedings of the Conference on Uncertainty in Artificial Intelli- gence (UAI-94), Seattle, WA, 46-54, July 29-31, 1994. Technical Report R-213-B April, 1994 Abstract Evaluation of counterfactual queries (e.g., \"If A were true, would C have been true?\") is important to fault diagnosis, planning, and determination of liability. In this paper we present methods for computing the probabilities of such queries using the formulation proposed in [Balke and Pearl, 1994], where the antecedent of the query is interpreted as an external action that forces the proposition A to be true. When a prior probability is available on the causal mechanisms governing the domain, counterfactual probabilities can be evaluated precisely. However, when causal knowledge is specified as conditional probabilities on the observables, only bounds can computed. This paper develops techniques for evaluating these bounds, and demonstrates their use in two applications: (1) the determination of treatment efficacy from studies in which subjects may choose their own treatment, and (2) the determination of liability in product-safety litigation. ",
"neighbors": [
2448
],
"mask": "Train"
},
{
"node_id": 2527,
"label": 5,
"text": "Title: 248 Efficient Superscalar Performance Through Boosting \nAbstract: The foremost goal of superscalar processor design is to increase performance through the exploitation of instruction-level parallelism (ILP). Previous studies have shown that speculative execution is required for high instruction per cycle (IPC) rates in non-numerical applications. The general trend has been toward supporting speculative execution in complicated, dynamically-scheduled processors. Performance, though, is more than just a high IPC rate; it also depends upon instruction count and cycle time. Boosting is an architectural technique that supports general speculative execution in simpler, statically-scheduled processors. Boosting labels speculative instructions with their control dependence information. This labelling eliminates control dependence constraints on instruction scheduling while still providing full dependence information to the hardware. We have incorporated boosting into a trace-based, global scheduling algorithm that exploits ILP without adversely affecting the instruction count of a program. We use this algorithm and estimates of the boosting hardware involved to evaluate how much speculative execution support is really necessary to achieve good performance. We find that a statically-scheduled superscalar processor using a minimal implementation of boosting can easily reach the performance of a much more complex dynamically-scheduled superscalar processor. ",
"neighbors": [
735,
1956,
1961,
2100
],
"mask": "Train"
},
{
"node_id": 2528,
"label": 6,
"text": "Title: The Minimum Feature Set Problem \nAbstract: This paper appeared in Neural Networks 7 (1994), no. 3, pp. 491-494. ",
"neighbors": [
1858
],
"mask": "Train"
},
{
"node_id": 2529,
"label": 3,
"text": "Title: Decision-Theoretic Case-Based Reasoning \nAbstract: Technical Report MSR-TR-95-03 ",
"neighbors": [
2294,
2521
],
"mask": "Train"
},
{
"node_id": 2530,
"label": 2,
"text": "Title: Clustering Learning Tasks and the Selective Cross-Task Transfer of Knowledge \nAbstract: This research is sponsored in part by the National Science Foundation under award IRI-9313367, and by the Wright Laboratory, Aeronautical Systems Center, Air Force Materiel Command, USAF, and the Advanced Research Projects Agency (ARPA) under grant number F33615-93-1-1330. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing official policies or endorsements, either expressed or implied, of NSF, Wright Laboratory or the United States Government. ",
"neighbors": [
2090,
2415
],
"mask": "Validation"
},
{
"node_id": 2531,
"label": 3,
"text": "Title: Utility Elicitation as a Classification Problem \nAbstract: We investigate the application of classification techniques to utility elicitation. In a decision problem, two sets of parameters must generally be elicited: the probabilities and the utilities. While the prior and conditional probabilities in the model do not change from user to user, the utility models do. Thus it is necessary to elicit a utility model separately for each new user. Elicitation is long and tedious, particularly if the outcome space is large and not decomposable. There are two common approaches to utility function elicitation. The first is to base the determination of the user's utility function solely on elicitation of qualitative preferences. The second makes assumptions about the form and decomposability of the utility function. Here we take a different approach: we attempt to identify the new user's utility function based on classification relative to a database of previously collected utility functions. We do this by identifying clusters of utility functions that minimize an appropriate distance measure. Having identified the clusters, we develop a classification scheme that requires many fewer and simpler assessments than full utility elicitation and is more robust than utility elicitation based solely on preferences. We have tested our algorithm on a small database of utility functions in a prenatal diagnosis domain and the results are quite promising. ",
"neighbors": [
2566
],
"mask": "Validation"
},
{
"node_id": 2532,
"label": 3,
"text": "Title: Ensemble Learning for Hidden Markov Models \nAbstract: The standard method for training Hidden Markov Models optimizes a point estimate of the model parameters. This estimate, which can be viewed as the maximum of a posterior probability density over the model parameters, may be susceptible to over-fitting, and contains no indication of parameter uncertainty. Also, this maximum may be unrepresentative of the posterior probability distribution. In this paper we study a method in which we optimize an ensemble which approximates the entire posterior probability distribution. The ensemble learning algorithm requires the same resources as the traditional Baum-Welch algorithm. The traditional training algorithm for hidden Markov models is an expectation-maximization (EM) algorithm (Dempster et al. 1977) known as the Baum-Welch algorithm. It is a maximum likelihood method, or, with a simple modification, a penalized maximum likelihood method, which can be viewed as maximizing a posterior probability density over the model parameters. Recently, Hinton and van Camp (1993) developed a technique known as ensemble learning (see also MacKay (1995) for a review). Whereas maximum a posteriori methods optimize a point estimate of the parameters, in ensemble learning an ensemble is optimized, so that it approximates the entire posterior probability distribution over the parameters. The objective function that is optimized is a variational free energy (Feynman 1972) which measures the relative entropy between the approximating ensemble and the true distribution. In this paper we derive and test an ensemble learning algorithm for hidden Markov models, building on Neal ",
"neighbors": [
76,
518,
766,
2417
],
"mask": "Validation"
},
{
"node_id": 2533,
"label": 2,
"text": "Title: An Object-Based Neural Model of Serial Processing in Visual Multielement Tracking \nAbstract: A quantitative model is provided for psychophysical data on the tracking of multiple visual elements (multielement tracking). The model employs an object-based attentional mechanism for constructing and updating object representations. The model selectively enhances neural activations to serially construct and update the internal representations of objects through correlation-based changes in synaptic weights. The correspondence problem between items in memory and elements in the visual input is resolved through a combination of top-down prediction signals and bottom-up grouping processes. Simulations of the model on image sequences used in multielement tracking experiments show that reported results are consistent with a serial tracking mechanism that is based on psychophysical and neurobiological findings. In addition, simulations show that observed effects of perceptual grouping on tracking accuracy may result from the interactions between attention-guided predictions of object location and motion and grouping processes involved in solving the motion correspondence problem. ",
"neighbors": [
2517
],
"mask": "Train"
},
{
"node_id": 2534,
"label": 5,
"text": "Title: Data Value Prediction Methods and Performance \nAbstract: A quantitative model is provided for psychophysical data on the tracking of multiple visual elements (multielement tracking). The model employs an object-based attentional mechanism for constructing and updating object representations. The model selectively enhances neural activations to serially construct and update the internal representations of objects through correlation-based changes in synaptic weights. The correspondence problem between items in memory and elements in the visual input is resolved through a combination of top-down prediction signals and bottom-up grouping processes. Simulations of the model on image sequences used in multielement tracking experiments show that reported results are consistent with a serial tracking mechanism that is based on psychophysical and neurobiological findings. In addition, simulations show that observed effects of perceptual grouping on tracking accuracy may result from the interactions between attention-guided predictions of object location and motion and grouping processes involved in solving the motion correspondence problem. ",
"neighbors": [
2009
],
"mask": "Train"
},
{
"node_id": 2535,
"label": 2,
"text": "Title: Adaptive Wavelet Control of Nonlinear Systems \nAbstract: This paper considers the design and analysis of adaptive wavelet control algorithms for uncertain nonlinear dynamical systems. The Lyapunov synthesis approach is used to develop a state-feedback adaptive control scheme based on nonlinearly parametrized wavelet network models. Semi-global stability results are obtained under the key assumption that the system uncertainty satisfies a \"matching\" condition. The localization properties of adaptive networks are discussed and formal definitions of interference and localization measures are proposed. ",
"neighbors": [
1668,
2176
],
"mask": "Train"
},
{
"node_id": 2536,
"label": 4,
"text": "Title: Truncating Temporal Differences: On the Efficient Implementation of TD() for Reinforcement Learning \nAbstract: Temporal difference (TD) methods constitute a class of methods for learning predictions in multi-step prediction problems, parameterized by a recency factor . Currently the most important application of these methods is to temporal credit assignment in reinforcement learning. Well known reinforcement learning algorithms, such as AHC or Q-learning, may be viewed as instances of TD learning. This paper examines the issues of the efficient and general implementation of TD() for arbitrary , for use with reinforcement learning algorithms optimizing the discounted sum of rewards. The traditional approach, based on eligibility traces, is argued to suffer from both inefficiency and lack of generality. The TTD (Truncated Temporal Differences) procedure is proposed as an alternative, that indeed only approximates TD(), but requires very little computation per action and can be used with arbitrary function representation methods. The idea from which it is derived is fairly simple and not new, but probably unexplored so far. Encouraging experimental results are presented, suggesting that using > 0 with the TTD procedure allows one to obtain a significant learning speedup at essentially the same cost as usual TD(0) learning.",
"neighbors": [
502,
1957,
2629
],
"mask": "Test"
},
{
"node_id": 2537,
"label": 2,
"text": "Title: Toward Learning Systems That Integrate Different Strategies and Representations TR93-22 \nAbstract: Temporal difference (TD) methods constitute a class of methods for learning predictions in multi-step prediction problems, parameterized by a recency factor . Currently the most important application of these methods is to temporal credit assignment in reinforcement learning. Well known reinforcement learning algorithms, such as AHC or Q-learning, may be viewed as instances of TD learning. This paper examines the issues of the efficient and general implementation of TD() for arbitrary , for use with reinforcement learning algorithms optimizing the discounted sum of rewards. The traditional approach, based on eligibility traces, is argued to suffer from both inefficiency and lack of generality. The TTD (Truncated Temporal Differences) procedure is proposed as an alternative, that indeed only approximates TD(), but requires very little computation per action and can be used with arbitrary function representation methods. The idea from which it is derived is fairly simple and not new, but probably unexplored so far. Encouraging experimental results are presented, suggesting that using > 0 with the TTD procedure allows one to obtain a significant learning speedup at essentially the same cost as usual TD(0) learning.",
"neighbors": [
451,
1846,
1927,
2198
],
"mask": "Train"
},
{
"node_id": 2538,
"label": 2,
"text": "Title: INCREMENTAL POLYNOMIAL CONTROLLER NETWORKS: two self-organising non-linear controllers \nAbstract: Temporal difference (TD) methods constitute a class of methods for learning predictions in multi-step prediction problems, parameterized by a recency factor . Currently the most important application of these methods is to temporal credit assignment in reinforcement learning. Well known reinforcement learning algorithms, such as AHC or Q-learning, may be viewed as instances of TD learning. This paper examines the issues of the efficient and general implementation of TD() for arbitrary , for use with reinforcement learning algorithms optimizing the discounted sum of rewards. The traditional approach, based on eligibility traces, is argued to suffer from both inefficiency and lack of generality. The TTD (Truncated Temporal Differences) procedure is proposed as an alternative, that indeed only approximates TD(), but requires very little computation per action and can be used with arbitrary function representation methods. The idea from which it is derived is fairly simple and not new, but probably unexplored so far. Encouraging experimental results are presented, suggesting that using > 0 with the TTD procedure allows one to obtain a significant learning speedup at essentially the same cost as usual TD(0) learning.",
"neighbors": [
2325
],
"mask": "Train"
},
{
"node_id": 2539,
"label": 5,
"text": "Title: Mining for Causes of Cancer: Machine Learning Experiments at Various Levels of Detail \nAbstract: This paper presents first results of an interdisciplinary project in scientific data mining. We analyze data about the carcinogenicity of chemicals derived from the carcinogenesis bioassay program performed by the US National Institute of Environmental Health Sciences. The database contains detailed descriptions of 6823 tests performed with more than 330 compounds and animals of different species, strains and sexes. The chemical structures are described at the atom and bond level, and in terms of various relevant structural properties. The goal of this paper is to investigate the effects that various levels of detail and amounts of information have on the resulting hypotheses, both quantitatively and qualitatively. We apply relational and propositional machine learning algorithms to learning problems formulated as regression or as classification tasks. In addition, these experiments have been conducted with two learning problems which are at different levels of detail. Quantitatively, our experiments indicate that additional information not necessarily improves accuracy. Qualitatively, a number of potential discoveries have been made by the algorithm for Relational Regression because it can utilize all the information contained in the relations of the database as well as in the numerical dependent variable. ",
"neighbors": [
1322,
1428,
2213
],
"mask": "Train"
},
{
"node_id": 2540,
"label": 2,
"text": "Title: Efficient Implementation of Gaussian Processes \nAbstract: Neural networks and Bayesian inference provide a useful framework within which to solve regression problems. However their parameterization means that the Bayesian analysis of neural networks can be difficult. In this paper, we investigate a method for regression using Gaussian process priors which allows exact Bayesian analysis using matrix manipulations. We discuss the workings of the method in detail. We will also detail a range of mathematical and numerical techniques that are useful in applying Gaussian processes to general problems including efficient approximate matrix inversion methods developed by Skilling.",
"neighbors": [
157,
160,
611,
1857
],
"mask": "Train"
},
{
"node_id": 2541,
"label": 1,
"text": "Title: PLEASE: A prototype learning system using genetic algorithms \nAbstract: Prototypes have been proposed as representation of concepts that are used effectively by humans. Developing computational schemes for generating prototypes from examples, however, has proved to be a difficult problem. We present a novel genetic algorithm based prototype learning system, PLEASE, for constructing appropriate prototypes from classified training instances. After constructing a set of prototypes for each of the possible classes, the class of a new input instance is determined by the nearest prototype to this instance. Attributes are assumed to be ordinal in nature and prototypes are represented as sets of feature-value pairs. A genetic algorithm is used to evolve the number of prototypes per class and their positions on the input space. We present experimental results on a series of artificial problems of varying complexity. PLEASE performs competitively with several nearest neighbor classification algorithms on the problem set. An analysis of the strengths and weaknesses of the initial version of our system motivates the need for additional operators. The inclusion of these operators substantially improves the performance of the system on particularly difficult problems.",
"neighbors": [
638,
686,
2673
],
"mask": "Validation"
},
{
"node_id": 2542,
"label": 2,
"text": "Title: Worst-Case Identification of Nonlinear Fading Memory Systems \nAbstract: In this paper, the problem of asymptotic identification for a class of fading memory systems in the presence of bounded noise is studied. For any experiment, the worst-case error is characterized in terms of the diameter of the worst-case uncertainty set. Optimal inputs that minimize the radius of uncertainty are studied and characterized. Finally, a convergent algorithm that does not require knowledge of the noise upper bound is furnished. The algorithm is based on interpolating data with spline functions, which are shown to be well suited for identification in the presence of bounded noise; more so than other basis functions such as polynomials. The methods as well as the results are quite general and are applicable to a larger variety of settings. ",
"neighbors": [
2236,
2262
],
"mask": "Validation"
},
{
"node_id": 2543,
"label": 3,
"text": "Title: Combining Connectionist and Symbolic Learning to Refine Certainty-Factor Rule Bases \nAbstract: This paper describes Rapture | a system for revising probabilistic knowledge bases that combines connectionist and symbolic learning methods. Rapture uses a modified version of backpropagation to refine the certainty factors of a probabilistic rule base and it uses ID3's information-gain heuristic to add new rules. Results on refining three actual expert knowledge bases demonstrate that this combined approach generally performs better than previous methods. ",
"neighbors": [
136,
159,
1479,
1776,
2409,
2440,
2674
],
"mask": "Validation"
},
{
"node_id": 2544,
"label": 0,
"text": "Title: Choosing Learning Strategies to Achieve Learning Goals \nAbstract: In open world applications a number of machine-learning techniques may potentially apply to a given learning situation. The research presented here illustrates the complexity involved in automatically choosing an appropriate technique in a multistrategy learning system. It also constitutes a step toward a general computational solution to the learning-strategy selection problem. The approach is to treat learning-strategy selection as a separate planning problem with its own set of goals, as is the case with ordinary problem-solvers. Therefore, the management and pursuit of these learning goals becomes a central issue in learning, similar to the goal-management problems associated with traditional planning systems. This paper explores some issues, problems, and possible solutions in such a framework. Examples are presented from a multistrategy learning system called Meta-AQUA.",
"neighbors": [
2568
],
"mask": "Validation"
},
{
"node_id": 2545,
"label": 2,
"text": "Title: Volatility of Volatility of Financial Markets \nAbstract: We present empirical evidence for considering volatility of Eurodollar futures as a stochastic process, requiring a generalization of the standard Black-Scholes (BS) model which treats volatility as a constant. We use a previous development of a statistical mechanics of financial markets (SMFM) to model these issues. ",
"neighbors": [
1773,
1794,
1795,
2082,
2178
],
"mask": "Train"
},
{
"node_id": 2546,
"label": 3,
"text": "Title: Plausibility Measures: A User's Guide \nAbstract: We examine a new approach to modeling uncertainty based on plausibility measures, where a plausibility measure just associates with an event its plausibility, an element is some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a plausibility measure makes it easy for us to add structure on an as needed basis, letting us examine what is required to ensure that a plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their algebraic properties, analogues to the use of + and fi in probability theory. An understanding of such properties will be essential if plausibility measures are to be used in practice as a representation tool.",
"neighbors": [
276,
342,
1993
],
"mask": "Train"
},
{
"node_id": 2547,
"label": 0,
"text": "Title: Temporal abstractions for pre-processing and interpreting diabetes monitoring time series \nAbstract: In this paper we describe a number of intelligent data analysis techniques to pre-process and analyze data coming from home monitoring of diabetic patients. In particular, we show how the combination of temporal abstractions with statistical and probabilistic techniques may be applied to derive useful summaries of patients' behaviour over a certain monitoring period. Finally, we describe how Intelligent Data Analysis methods may be used to index past cases to perform a case-based re trieval in a data-base of past cases.",
"neighbors": [
2492
],
"mask": "Test"
},
{
"node_id": 2548,
"label": 6,
"text": "Title: A Framework for Multiple-Instance Learning \nAbstract: Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem.",
"neighbors": [
507,
2391,
2427
],
"mask": "Validation"
},
{
"node_id": 2549,
"label": 2,
"text": "Title: A Generalized Approximate Cross Validation for Smoothing Splines with Non-Gaussian Data 1 \nAbstract: Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem.",
"neighbors": [
193,
280,
519,
2608
],
"mask": "Train"
},
{
"node_id": 2550,
"label": 5,
"text": "Title: Language Series Revisited: The Complexity of Hypothesis Spaces in ILP \nAbstract: Restrictions on the number and depth of existential variables as defined in the language series of Clint [Rae92] are widely used in ILP and expected to produce a considerable reduction in the size of the hypothesis space. In this paper we show that this is generally not the case. The lower bounds we present lead to intractable hypothesis spaces except for toy domains. We argue that the parameters chosen in Clint are unsuitable for sensible bias shift operations, and propose alternative approaches resulting in the desired reduction of the hypothesis space and allowing for a natural integration of the shift of bias.",
"neighbors": [
2045
],
"mask": "Train"
},
{
"node_id": 2551,
"label": 4,
"text": "Title: The Role of Forgetting in Learning \nAbstract: This paper is a discussion of the relationship between learning and forgetting. An analysis of the economics of learning is carried out and it is argued that knowledge can sometimes have a negative value. A series of experiments involving a program which learns to traverse state spaces is described. It is shown that most of the knowledge acquired is of negative value even though it is correct and was acquired solving similar problems. It is shown that the value of the knowledge depends on what else is known and that random forgetting can sometimes lead to substantial improvements in performance. It is concluded that research into knowledge acquisition should take seriously the possibility that knowledge may sometimes be harmful. The view is taken that learning and forgetting are complementary processes which construct and maintain useful representations of experience. ",
"neighbors": [
523,
2473
],
"mask": "Train"
},
{
"node_id": 2552,
"label": 2,
"text": "Title: Inferring sparse, overcomplete image codes using an efficient coding framework \nAbstract: We apply a general technique for learning overcomplete bases to the problem of finding efficient image codes. The bases learned by the algorithm are localized, oriented, and bandpass, consistent with earlier results obtained using related methods. We show that the learned bases are Gabor-like in structure and that higher degrees of overcompleteness produce greater sampling density in position, orientation, and scale. The efficient coding framework provides a method for comparing different bases objectively by calculating their probability given the observed data or by measuring the entropy of the basis function coefficients. Compared to complete and overcomplete Fourier and wavelet bases, the learned bases have much better coding efficiency. We demonstrate the improvement in the representation of the learned bases by showing superior performance in image denoising and filling-in of missing pixels. ",
"neighbors": [
576,
2026
],
"mask": "Validation"
},
{
"node_id": 2553,
"label": 2,
"text": "Title: TURING COMPUTABILITY WITH NEURAL NETS \nAbstract: This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of less than 10 5 synchronously evolving processors, interconnected linearly. High-order connections are not required.",
"neighbors": [
1875
],
"mask": "Train"
},
{
"node_id": 2554,
"label": 1,
"text": "Title: Genetic Programming Estimates of Kolmogorov Complexity \nAbstract: In this paper the problem of the Kolmogorov complexity related to binary strings is faced. We propose a Genetic Programming approach which consists in evolving a population of Lisp programs looking for the optimal program that generates a given string. This evolutionary approach has permited to overcome the intractable space and time difficulties occurring in methods which perform an approximation of the Kolmogorov complexity function. The experimental results are quite significant and also show interesting computational strategies so proving the effectiveness of the implemented technique.",
"neighbors": [
163,
1850
],
"mask": "Train"
},
{
"node_id": 2555,
"label": 6,
"text": "Title: Knowing What Doesn't Matter: Exploiting Omitted Superfluous Data \nAbstract: Most inductive inference algorithms (i.e., \"learners\") work most effectively when their training data contain completely specified labeled samples. In many diagnostic tasks, however, the data will include the values of only some of the attributes; we model this as a blocking process that hides the values of those attributes from the learner. While blockers that remove the values of critical attributes can handicap a learner, this paper instead focuses on blockers that remove only superfluous attribute values, i.e., values that are not needed to classify an instance, given the values of the other unblocked attributes. We first motivate and formalize this model of \"superfluous-value blocking,\" and then demonstrate that these omissions can be useful, by showing that certain classes that seem hard to learn in the general PAC model | viz., decision trees | are trivial to learn in this setting, and can even be learned in a manner that is very robust to classification noise. We also discuss how this model can be extended to deal with (1) theory revision (i.e., modifying an existing decision tree); (2) \"complex\" attributes (which correspond to combinations of other atomic attributes); (3) blockers that occasionally include superfluous values or exclude re quired values; and (4) other hypothesis classes (e.g., DNF formulae). Declaration: This paper has not already been accepted by and is not currently under review for a journal or another conference, nor will it be submitted for such during IJCAI's review period. fl This is an extended version of a paper that appeared in working notes of the 1994 AAAI Fall Symposium on \"Relevance\", New Orleans, November 1994. y Authors listed alphabetically. We gratefully acknowledge receiving helpful comments from Dale Schuurmans and George Drastal. ",
"neighbors": [
2560
],
"mask": "Train"
},
{
"node_id": 2556,
"label": 0,
"text": "Title: A Case-Based Approach to Reactive Control for Autonomous Robots \nAbstract: We propose a case-based method of selecting behavior sets as an addition to traditional reactive robotic control systems. The new system (ACBARR | A Case BAsed Reactive Robotic system) provides more flexible performance in novel environments, as well as overcoming a standard \"hard\" problem for reactive systems, the box canyon. Additionally, ACBARR is designed in a manner which is intended to remain as close to pure reactive control as possible. Higher level reasoning and memory functions are intentionally kept to a minimum. As a result, the new reasoning does not significantly slow the system down from pure reactive speeds. ",
"neighbors": [
858,
1904
],
"mask": "Train"
},
{
"node_id": 2557,
"label": 2,
"text": "Title: Growing Simpler Decision Trees to Facilitate Knowledge Discovery \nAbstract: When using machine learning techniques for knowledge discovery, output that is comprehensible to a human is as important as predictive accuracy. We introduce a new algorithm, SET-Gen, that improves the comprehensibility of decision trees grown by standard C4.5 without reducing accuracy. It does this by using genetic search to select the set of input features C4.5 is allowed to use to build its tree. We test SET-Gen on a wide variety of real-world datasets and show that SET-Gen trees are significantly smaller and reference significantly fewer features than trees grown by C4.5 without using SET-Gen. Statistical significance tests show that the accuracies of SET-Gen's trees are either not distinguishable from or are more accurate than those of the original C4.5 trees on all ten datasets tested. ",
"neighbors": [
163,
430,
686,
1947
],
"mask": "Validation"
},
{
"node_id": 2558,
"label": 3,
"text": "Title: Using Bayesian networks for incorporating probabilistic a priori knowledge into Boltzmann machines \nAbstract: We present a method for automatically determining the structure and the connection weights of a Boltzmann machine corresponding to a given Bayesian network representation of a probability distribution on a set of discrete variables. The resulting Boltzmann machine structure can be implemented efficiently on massively parallel hardware, since the structure can be divided into two separate clusters where all the nodes in one cluster can be updated simultaneously. The updating process of the Boltzmann machine approximates a Gibbs sampling process of the original Bayesian network in the sense that the Boltzmann machine converges to the same final state as the Gibbs sampler does. The mapping from a Bayesian network to a Boltzmann machine can be seen as a method for incorporating probabilistic a priori information into a neural network architecture, which can then be trained further with existing learning algorithms. ",
"neighbors": [
450,
2514
],
"mask": "Test"
},
{
"node_id": 2559,
"label": 3,
"text": "Title: \"Linear Dependencies Represented by Chain Graphs,\" \"Graphical Modelling With MIM,\" Manual. \"Identifying Independence in Bayesian\nAbstract: 8] Dori, D. and Tarsi, M., \"A Simple Algorithm to Construct a Consistent Extension of a Partially Oriented Graph,\" Computer Science Department, Tel-Aviv University. Also Technical Report R-185, UCLA, Cognitive Systems Laboratory, October 1992. [14] Pearl, J. and Wermuth, N., \"When Can Association Graphs Admit a Causal Interpretation?,\" UCLA, Cognitive Systems Laboratory, Technical Report R-183-L, November 1992. [17] Verma, T.S. and Pearl, J., \"Deciding Morality of Graphs is NP-complete,\" Technical Report R-188, UCLA, Cognitive Systems Laboratory, October 1992. ",
"neighbors": [
51,
260,
841,
1241,
1747,
2076
],
"mask": "Train"
},
{
"node_id": 2560,
"label": 6,
"text": "Title: Learning Default Concepts \nAbstract: Classical concepts, based on necessary and sufficient defining conditions, cannot classify logically insufficient object descriptions. Many reasoning systems avoid this limitation by using \"default concepts\" to classify incompletely described objects. We address the task of learning such default concepts from observational data. We model the underlying performance task | classifying incomplete examples | by a probabilistic process where random test examples are passed through a \"blocker\" that can hide object attributes from the classifier. We then address the task of learning accurate default concepts from random training examples. We survey the learning techniques that have been proposed for this task in the machine learning and knowledge representation literatures, investigate the relative merits of each, and show that a superior learning technique can be developed from well known statistical principles. Finally, we extend Valiant's pac-learning framework to this context and obtain a number of useful learnability results. ",
"neighbors": [
251,
323,
649,
865,
1505,
2555
],
"mask": "Train"
},
{
"node_id": 2561,
"label": 3,
"text": "Title: MDL Learning of Probabilistic Neural Networks for Discrete Problem Domains \nAbstract: Given a problem, a case-based reasoning (CBR) system will search its case memory and use the stored cases to find the solution, possibly modifying retrieved cases to adapt to the required input specifications. In discrete domains CBR reasoning can be based on a rigorous Bayesian probability propagation algorithm. Such a Bayesian CBR system can be implemented as a probabilistic feedforward neural network with one of the layers representing the cases. In this paper we introduce a Minimum Description Length (MDL) based learning algorithm to obtain the proper network structure with the associated conditional probabilities. This algorithm together with the resulting neural network implementation provide a massively parallel architecture for solving the efficiency bottleneck in case-based reasoning. ",
"neighbors": [
485,
719,
1527,
1908,
2380
],
"mask": "Train"
},
{
"node_id": 2562,
"label": 2,
"text": "Title: NONLINEAR TRADING MODELS THROUGH SHARPE RATIO MAXIMIZATION \nAbstract: Working Paper IS-97-005, Leonard N. Stern School of Business, New York University. In: Decision Technologies for Financial Engineering (Proceedings of the Fourth International Conference on Neural Networks in the Capital Markets, NNCM-96), pp. 3-22. Edited by A.S.Weigend, Y.S.Abu-Mostafa, and A.-P.N.Refenes. Singapore: World Scientific, 1997. http://www.stern.nyu.edu/~aweigend/Research/Papers/SharpeRatio While many trading strategies are based on price prediction, traders in financial markets are typically interested in risk-adjusted performance such as the Sharpe Ratio, rather than price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can be achieved with this nonlinear approach.",
"neighbors": [
668,
1366,
2595
],
"mask": "Train"
},
{
"node_id": 2563,
"label": 1,
"text": "Title: Analysis of Neurocontrollers Designed by Simulated Evolution \nAbstract: Randomized, adaptive, greedy search using evolutionary algorithms offers a powerful and versatile approach to the automated design of neural network architectures for a variety of tasks in artificial intelligence and robotics. In this paper we present results from the evolutionary design of a neuro-controller for a robotic bulldozer. This robot is given the task of clearing an arena littered with boxes by pushing boxes to the sides. Through a careful analysis of the evolved networks we show how evolution exploits the design constraints and properties of the environment to produce network structures of high fitness. We conclude with a brief summary of related ongoing research examining the intricate interplay between environment and evolutionary processes in determining the structure and function of the resulting neural architectures.",
"neighbors": [
163,
219,
1583,
2220,
2396
],
"mask": "Test"
},
{
"node_id": 2564,
"label": 1,
"text": "Title: Embedding of a sequential procedure within an evolutionary algorithm for coloring problems in graphs \nAbstract: Randomized, adaptive, greedy search using evolutionary algorithms offers a powerful and versatile approach to the automated design of neural network architectures for a variety of tasks in artificial intelligence and robotics. In this paper we present results from the evolutionary design of a neuro-controller for a robotic bulldozer. This robot is given the task of clearing an arena littered with boxes by pushing boxes to the sides. Through a careful analysis of the evolved networks we show how evolution exploits the design constraints and properties of the environment to produce network structures of high fitness. We conclude with a brief summary of related ongoing research examining the intricate interplay between environment and evolutionary processes in determining the structure and function of the resulting neural architectures.",
"neighbors": [
163,
1159,
1558,
1785
],
"mask": "Validation"
},
{
"node_id": 2565,
"label": 0,
"text": "Title: Defining and Combining Symmetric and Asymmetric Similarity Measures \nAbstract: In this paper, we present a framework for the definition of similarity measures using lattice-valued functions. We show their strengths (particularly for combining similarity measures). Then we investigate a particular instantiation of the framework, in which sets are used both to represent objects and to denote degrees of similarity. The paper con cludes by suggesting some generalisations of the findings. ",
"neighbors": [
2157
],
"mask": "Validation"
},
{
"node_id": 2566,
"label": 3,
"text": "Title: A Constraint-Based Approach to Preference Elicitation and Decision Making \nAbstract: We investigate the solution of constraint-based configuration problems in which the preference function over outcomes is unknown or incompletely specified. The aim is to configure a system, such as a personal computer, so that it will be optimal for a given user. The goal of this project is to develop algorithms that generate the most preferred feasible configuration by posing preference queries to the user. In order to minimize the number and the complexity of preference queries posed to the user, the algorithm reasons about the user's preferences while taking into account constraints over the set of feasible configurations. We assume that the user can structure their preferences in a particular way that, while natural in many settings, can be exploited during the optimization process. We also address in a preliminary fashion the trade-offs between computational effort in the solution of a problem and the degree of interaction with the user. ",
"neighbors": [
62,
2531
],
"mask": "Train"
},
{
"node_id": 2567,
"label": 2,
"text": "Title: On the Combination of Supervised and Unsupervised Learning reducing the overall error measure of a classifier. \nAbstract: ",
"neighbors": [
359,
2498
],
"mask": "Validation"
},
{
"node_id": 2568,
"label": 0,
"text": "Title: Abstract \nAbstract: Self-selection of input examples on the basis of performance failure is a powerful bias for learning systems. The definition of what constitutes a learning bias, however, has been typically restricted to bias provided by the input language, hypothesis language, and preference criteria between competing concept hypotheses. But if bias is taken in the broader context as any basis that provides a preference for one concept change over another, then the paradigm of failure-driven processing indeed provides a bias. Bias is exhibited by the selection of examples from an input stream that are examples of failure; successful performance is filtered out. We show that the degrees of freedom are less in failure-driven learning than in success-driven learning and that learning is facilitated because of this constraint. We also broaden the definition of failure, provide a novel taxonomy of failure causes, and illustrate the interaction of both in a multistrategy learning system called Meta-AQUA. ",
"neighbors": [
289,
583,
612,
717,
2544
],
"mask": "Train"
},
{
"node_id": 2569,
"label": 2,
"text": "Title: The Gamma MLP Using Multiple Temporal Resolutions for Improved Classification \nAbstract: We have previously introduced the Gamma MLP which is defined as an MLP with the usual synaptic weights replaced by gamma filters and associated gain terms throughout all layers. In this paper we apply the Gamma MLP to a larger scale speech phoneme recognition problem, analyze the operation of the network, and investigate why the Gamma MLP can perform better than alternatives. The Gamma MLP is capable of employing multiple temporal resolutions (the temporal resolution is defined here, as per de Vries and Principe, as the number of parameters of freedom (i.e. the number of tap variables) per unit of time in the gamma memory this is equal to the gamma memory parameter as detailed in the paper). Multiple temporal resolutions may be advantageous for certain problems, e.g. different resolutions may be optimal for extracting different features from the input data. For the problem in this paper, the Gamma MLP is observed to use a large range of temporal resolutions. In comparison, TDNN networks typically use only a single temporal resolution. Further motivation for the Gamma MLP is related to the curse of dimensionality and the ability of the Gamma MLP to trade off temporal resolution for memory depth, and therefore increase memory depth without increasing the dimensionality of the network. The IIR MLP is a more general version of the Gamma MLP however the IIR MLP performs poorly for the problem in this paper. Investigation suggests that the error surface of the Gamma MLP is more suitable for gradient descent training than the error surface of the IIR MLP. ",
"neighbors": [
1820
],
"mask": "Train"
},
{
"node_id": 2570,
"label": 2,
"text": "Title: In Fast Non-Linear Dimension Reduction \nAbstract: We present a fast algorithm for non-linear dimension reduction. The algorithm builds a local linear model of the data by merging PCA with clustering based on a new distortion measure. Experiments with speech and image data indicate that the local linear algorithm produces encodings with lower distortion than those built by five layer auto-associative networks. The local linear algorithm is also more than an order of magnitude faster to train.",
"neighbors": [
480,
667,
1806,
1928
],
"mask": "Validation"
},
{
"node_id": 2571,
"label": 2,
"text": "Title: Non-Deterministic, Constraint-Based Parsing of Human Genes \nAbstract: We present a fast algorithm for non-linear dimension reduction. The algorithm builds a local linear model of the data by merging PCA with clustering based on a new distortion measure. Experiments with speech and image data indicate that the local linear algorithm produces encodings with lower distortion than those built by five layer auto-associative networks. The local linear algorithm is also more than an order of magnitude faster to train.",
"neighbors": [
268,
613,
1878,
2107,
2496
],
"mask": "Train"
},
{
"node_id": 2572,
"label": 2,
"text": "Title: Negative observations concerning approximations from spaces generated by scattered shifts of functions vanishing at 1 \nAbstract: Approximation by scattered shifts f( ff)g ff2A of a basis function are considered, and different methods for localizing these translates are compared. It is argued in the note that the superior localization processes are those that employ the original translates only. ",
"neighbors": [
365,
2112
],
"mask": "Train"
},
{
"node_id": 2573,
"label": 3,
"text": "Title: An Optimum Decision Rule for Pattern Recognition \nAbstract: ",
"neighbors": [
1942
],
"mask": "Train"
},
{
"node_id": 2574,
"label": 2,
"text": "Title: Identification of Protein Coding Regions In Genomic DNA Molecular, Cellular and Developmental Biology, Keywords: gene\nAbstract: ",
"neighbors": [
427,
2107
],
"mask": "Validation"
},
{
"node_id": 2575,
"label": 3,
"text": "Title: The Stationary Wavelet Transform and some Statistical Applications \nAbstract: Wavelets are of wide potential use in statistical contexts. The basics of the discrete wavelet transform are reviewed using a filter notation that is useful subsequently in the paper. A `stationary wavelet transform', where the coefficient sequences are not decimated at each stage, is described. Two different approaches to the construction of an inverse of the stationary wavelet transform are set out. The application of the stationary wavelet transform as an exploratory statistical method is discussed, together with its potential use in nonparametric regression. A method of local spectral density estimation is developed. This involves extensions to the wavelet context of standard time series ideas such as the periodogram and spectrum. The technique is illustrated by its application to data sets from astronomy and veterinary anatomy.",
"neighbors": [
1910,
2366,
2388,
2506
],
"mask": "Validation"
},
{
"node_id": 2576,
"label": 2,
"text": "Title: A Neural Model of the Cortical Representation of Egocentric Distance \nAbstract: Wavelets are of wide potential use in statistical contexts. The basics of the discrete wavelet transform are reviewed using a filter notation that is useful subsequently in the paper. A `stationary wavelet transform', where the coefficient sequences are not decimated at each stage, is described. Two different approaches to the construction of an inverse of the stationary wavelet transform are set out. The application of the stationary wavelet transform as an exploratory statistical method is discussed, together with its potential use in nonparametric regression. A method of local spectral density estimation is developed. This involves extensions to the wavelet context of standard time series ideas such as the periodogram and spectrum. The technique is illustrated by its application to data sets from astronomy and veterinary anatomy.",
"neighbors": [
1051,
2678
],
"mask": "Train"
},
{
"node_id": 2577,
"label": 3,
"text": "Title: Targeting Business Users with Decision Table Classifiers \nAbstract: Business users and analysts commonly use spreadsheets and 2D plots to analyze and understand their data. On-line Analytical Processing (OLAP) provides these users with added flexibility in pivoting data around different attributes and drilling up and down the multi-dimensional cube of aggregations. Machine learning researchers, however, have concentrated on hypothesis spaces that are foreign to most users: hyper-planes (Perceptrons), neural networks, Bayesian networks, decision trees, nearest neighbors, etc. In this paper we advocate the use of decision table classifiers that are easy for line-of-business users to understand. We describe several variants of algorithms for learning decision tables, compare their performance, and describe a visualization mechanism that we have implemented in MineSet. The performance of decision tables is comparable to other known algorithms, such as C4.5/C5.0, yet the resulting classifiers use fewer attributes and are more comprehensible. ",
"neighbors": [
1020,
2180,
2342,
2367,
2508
],
"mask": "Test"
},
{
"node_id": 2578,
"label": 3,
"text": "Title: Analysis of hospital quality monitors using hierarchical time series models \nAbstract: The VA management services department invests considerably in the collection and assessment of data to inform on hospital and care-area specific levels of quality of care. Resulting time series of quality monitors provide information relevant to evaluating patterns of variability in hospital-specific quality of care over time and across care areas, and to compare and assess differences across hospitals. In collaboration with the VA management services group we have developed various models for evaluating such patterns of dependencies and combining data across the VA hospital system. This paper provides a brief overview of resulting models, some summary examples on three monitor time series, and discussion of data, modelling and inference issues. This work introduces new models for multivariate non-Gaussian time series. The framework combines cross-sectional, hierarchical models of the population of hospitals with time series structure to allow and measure time-variations in the associated hierarchical model parameters. In the VA study, the within-year components of the models describe patterns of heterogeneity across the population of hospitals and relationships among several such monitors, while the time series components describe patterns of variability through time in hospital-specific effects and their relationships across quality monitors. Additional model components isolate unpredictable aspects of variability in quality monitor outcomes, by hospital and care areas. We discuss model assessment, residual analysis and MCMC algorithms developed to fit these models, which will be of interest in related applications in other socio-economic areas.",
"neighbors": [
99,
2679
],
"mask": "Train"
},
{
"node_id": 2579,
"label": 2,
"text": "Title: SPERT-II: A Vector Microprocessor System and its Application to Large Problems in Backpropagation Training \nAbstract: We report on our development of a high-performance system for neural network and other signal processing applications. We have designed and implemented a vector microprocessor and packaged it as an attached processor for a conventional workstation. We present performance comparisons with commercial workstations on neural network backpropagation training. The SPERT-II system demonstrates significant speedups over extensively hand optimization code running on the workstations.",
"neighbors": [
2279,
2336
],
"mask": "Train"
},
{
"node_id": 2580,
"label": 6,
"text": "Title: The Challenge of Revising an Impure Theory \nAbstract: A pure rule-based program will return a set of answers to each query; and will return the same answer set even if its rules are re-ordered. However, an impure program, which includes the Prolog cut \"!\" and not() operators, can return different answers if the rules are re-ordered. There are also many reasoning systems that return only the first answer found for each query; these first answers, too, depend on the rule order, even in pure rule-based systems. A theory revision algorithm, seeking a revised rule-base whose expected accuracy, over the distribution of queries, is optimal, should therefore consider modifying the order of the rules. This paper first shows that a polynomial number of training \"labeled queries\" (each a query coupled with its correct answer) provides the distribution information necessary to identify the optimal ordering. It then proves, however, that the task of determining which ordering is optimal, once given this information, is intractable even in trivial situations; e.g., even if each query is an atomic literal, we are seeking only a \"perfect\" theory, and the rule base is propositional. We also prove that this task is not even approximable: Unless P = N P , no polynomial time algorithm can produce an ordering of an n-rule theory whose accuracy is within n fl of optimal, for some fl > 0. We also prove similar hardness, and non-approximatability, results for the related tasks of determining, in these impure contexts, (1) the optimal ordering of the antecedents; (2) the optimal set of rules to add or (3) to delete; and (4) the optimal priority values for a set of defaults. ",
"neighbors": [
52,
136,
1819,
1823
],
"mask": "Train"
},
{
"node_id": 2581,
"label": 0,
"text": "Title: Four Challenges for a Computational Model of Legal Precedent \nAbstract: Identifying the open research issues in a field is a necessary step for progress in that field. This paper describes four open research problems in computational models of precedent-based legal reasoning: relating case representation to precedent use; modeling the selection and construction of both arguments based on pairwise case comparison and multiple-precedent arguments; modeling the process whereby purposes, policies, and principles are used in case similarity assessment; and extending the applicability of precedents to tasks other than classification. ",
"neighbors": [
649,
2403
],
"mask": "Validation"
},
{
"node_id": 2582,
"label": 2,
"text": "Title: Noisy Time Series Prediction using Symbolic Representation and Recurrent Neural Network Grammatical Inference \nAbstract: Financial forecasting is an example of a signal processing problem which is challenging due to small sample sizes, high noise, non-stationarity, and non-linearity. Neural networks have been very successful in a number of signal processing applications. We discuss fundamental limitations and inherent difficulties when using neural networks for the processing of high noise, small sample size signals. We introduce a new intelligent signal processing method which addresses the difficulties. The method uses conversion into a symbolic representation with a self-organizing map, and grammatical inference with recurrent neural networks. We apply the method to the prediction of daily foreign exchange rates, addressing difficulties with non-stationarity, overfitting, and unequal a priori class probabilities, and we find significant predictability in comprehensive experiments covering 5 different foreign exchange rates. The method correctly predicts the direction of change for the next day with an error rate of 47.1%. The error rate reduces to around 40% when rejecting examples where the system has low confidence in its prediction. The symbolic representation aids the extraction of symbolic knowledge from the recurrent neural networks in the form of deterministic finite state automata. These automata explain the operation of the system and are often relatively simple. Rules related to well known behavior such as trend following and mean reversal are extracted. ",
"neighbors": [
409,
411,
462,
1718,
2178
],
"mask": "Train"
},
{
"node_id": 2583,
"label": 6,
"text": "Title: Dynamic Automatic Model Selection \nAbstract: COINS Technical Report 92-30 February 1992 Abstract The problem of how to learn from examples has been studied throughout the history of machine learning, and many successful learning algorithms have been developed. A problem that has received less attention is how to select which algorithm to use for a given learning task. The ability of a chosen algorithm to induce a good generalization depends on how appropriate the model class underlying the algorithm is for the given task. We define an algorithm's model class to be the representation language it uses to express a generalization of the examples. Supervised learning algorithms differ in their underlying model class and in how they search for a good generalization. Given this characterization, it is not surprising that some algorithms find better generalizations for some, but not all tasks. Therefore, in order to find the best generalization for each task, an automated learning system must search for the appropriate model class in addition to searching for the best generalization within the chosen class. This thesis proposal investigates the issues involved in automating the selection of the appropriate model class. The presented approach has two facets. Firstly, the approach combines different model classes in the form of a model combination decision tree, which allows the best representation to be found for each subconcept of the learning task. Secondly, which model class is the most appropriate is determined dynamically using a set of heuristic rules. Explicit in each rule are the conditions in which a particular model class is appropriate and if it is not, what should be done next. In addition to describing the approach, this proposal describes how the approach will be evaluated in order to demonstrate that it is both an efficient and effective method for automatic model selection. ",
"neighbors": [
102,
378,
1173,
1423,
2135,
2310,
2333
],
"mask": "Train"
},
{
"node_id": 2584,
"label": 2,
"text": "Title: Presynaptic and Postsynaptic Competition in Models for the Development of Neuromuscular Connections \nAbstract: The development of the nervous system involves in many cases interactions on a local scale rather than the execution of a fully specified genetic blueprint. The problem is to discover the nature of these interactions and the factors on which they depend. The withdrawal of polyinnervation in developing muscle is an example where such competitive interactions play an important role. We examine the possible types of competition in formal ",
"neighbors": [
2632
],
"mask": "Test"
},
{
"node_id": 2585,
"label": 0,
"text": "Title: Rule Induction and Instance-Based Learning: A Unified Approach \nAbstract: This paper presents a new approach to inductive learning that combines aspects of instance-based learning and rule induction in a single simple algorithm. The RISE system searches for rules in a specific-to-general fashion, starting with one rule per training example, and avoids some of the difficulties of separate-and-conquer approaches by evaluating each proposed induction step globally, i.e., through an efficient procedure that is equivalent to checking the accuracy of the rule set as a whole on every training example. Classification is performed using a best-match strategy, and reduces to nearest-neighbor if all generalizations of instances were rejected. An extensive empirical study shows that RISE consistently achieves higher accuracies than state-of-the-art representatives of its \"parent\" paradigms (PEBLS and CN2), and also outperforms a decision-tree learner (C4.5) in 13 out of 15 test domains (in ",
"neighbors": [
1263,
1809,
1830,
2441
],
"mask": "Train"
},
{
"node_id": 2586,
"label": 4,
"text": "Title: Learning One More Thing \nAbstract: Most research on machine learning has focused on scenarios in which a learner faces a single, isolated learning task. The lifelong learning framework assumes that the learner encounters a multitude of related learning tasks over its lifetime, providing the opportunity for the transfer of knowledge among these. This paper studies lifelong learning in the context of binary classification. It presents the invariance approach, in which knowledge is transferred via a learned model of the invariances of the domain. Results on learning to recognize objects from color images demonstrate superior generalization capabilities if invariances are learned and used to bias subsequent learning.",
"neighbors": [
1647,
1889,
2113,
2162,
2486
],
"mask": "Test"
},
{
"node_id": 2587,
"label": 5,
"text": "Title: Predicate Invention and Learning from Positive Examples Only \nAbstract: Previous bias shift approaches to predicate invention are not applicable to learning from positive examples only, if a complete hypothesis can be found in the given language, as negative examples are required to determine whether new predicates should be invented or not. One approach to this problem is presented, MERLIN 2.0, which is a successor of a system in which predicate invention is guided by sequences of input clauses in SLD-refutations of positive and negative examples w.r.t. an overly general theory. In contrast to its predecessor which searches for the minimal finite-state automaton that can generate all positive and no negative sequences, MERLIN 2.0 uses a technique for inducing Hidden Markov Models from positive sequences only. This enables the system to invent new predicates without being triggered by negative examples. Another advantage of using this induction technique is that it allows for incremental learning. Experimental results are presented comparing MERLIN 2.0 with the positive only learning framework of Progol 4.2 and comparing the original induction technique with a new version that produces deterministic Hidden Markov Models. The results show that predicate invention may indeed be both necessary and possible when learning from positive examples only as well as it can be beneficial to keep the induced model deterministic.",
"neighbors": [
2312
],
"mask": "Train"
},
{
"node_id": 2588,
"label": 3,
"text": "Title: Some recent ideas on utility (and probability) (not for distribution or reference) \nAbstract: ",
"neighbors": [
2301
],
"mask": "Train"
},
{
"node_id": 2589,
"label": 5,
"text": "Title: Pac-Learning Recursive Logic Programs: Efficient Algorithms \nAbstract: We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional \"basecase\" oracle is assumed. These results immediately imply the pac-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a compu-tationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.",
"neighbors": [
344,
2329,
2424
],
"mask": "Train"
},
{
"node_id": 2590,
"label": 3,
"text": "Title: Backfitting in Smoothing Spline ANOVA \nAbstract: A scheme to compute smoothing spline ANOVA estimates for large data sets with a (near) tensor-product structure is proposed. Such data sets are common in spatial-temporal analysis and image analysis. This scheme combines backfitting algorithm with iterative imputation algorithm in order to save both computational space and time. The convergence of this algorithm and various ways to further speed it up, such as collapsing component functions and successive over-relaxation, are discussed. Issues related to its application in spatial-temporal analysis are discussed too. An application to a global analysis of historical surface temperature data is described. ",
"neighbors": [
388,
420,
519,
2421
],
"mask": "Test"
},
{
"node_id": 2591,
"label": 5,
"text": "Title: Lookahead and Discretization in ILP \nAbstract: We present and evaluate two methods for improving the performance of ILP systems. One of them is discretization of numerical attributes, based on Fayyad and Irani's text [9], but adapted and extended in such a way that it can cope with some aspects of discretization that only occur in relational learning problems (when indeterminate literals occur). The second technique is lookahead. It is a well-known problem in ILP that a learner cannot always assess the quality of a refinement without knowing which refinements will be enabled afterwards, i.e. without looking ahead in the refinement lattice. We present a simple method for specifying when lookahead is to be used, and what kind of lookahead is interesting. Both the discretization and lookahead techniques are evaluated experimentally. The results show that both techniques improve the quality of the induced theory, while computational costs are acceptable.",
"neighbors": [
2126,
2253,
2427,
2431
],
"mask": "Test"
},
{
"node_id": 2592,
"label": 3,
"text": "Title: FILTERING VIA SIMULATION: AUXILIARY PARTICLE FILTERS \nAbstract: This paper analyses the recently suggested particle approach to filtering time series. We suggest that the algorithm is not robust to outliers for two reasons: the design of the simulators and the use of the discrete support to represent the sequentially updating prior distribution. Both problems are tackled in this paper. We believe we have largely solved the first problem and have reduced the order of magnitude of the second. In addition we introduce the idea of stratification into the particle filter which allows us to perform on-line Bayesian calculations about the parameters which index the models and maximum likelihood estimation. The new methods are illustrated by using a stochastic volatility model and a time series model of angles. ",
"neighbors": [
99,
1852
],
"mask": "Test"
},
{
"node_id": 2593,
"label": 0,
"text": "Title: Induction of Condensed Determinations \nAbstract: In this paper we suggest determinations as a representation of knowledge that should be easy to understand. We briefly review determinations, which can be displayed in a tabular format, and their use in prediction, which involves a simple matching process. We describe ConDet, an algorithm that uses feature selection to construct determinations from training data, augmented by a condensation process that collapses rows to produce simpler structures. We report experiments that show condensation reduces complexity with no loss of accuracy, then discuss ConDet's relation to other work and outline directions for future studies. ",
"neighbors": [
430,
634,
2342
],
"mask": "Train"
},
{
"node_id": 2594,
"label": 2,
"text": "Title: Can Recurrent Neural Networks Learn Natural Language Grammars? W&Z recurrent neural networks are able to\nAbstract: Recurrent neural networks are complex parametric dynamic systems that can exhibit a wide range of different behavior. We consider the task of grammatical inference with recurrent neural networks. Specifically, we consider the task of classifying natural language sentences as grammatical or ungrammatical can a recurrent neural network be made to exhibit the same kind of discriminatory power which is provided by the Principles and Parameters linguistic framework, or Government and Binding theory? We attempt to train a network, without the bifurcation into learned vs. innate components assumed by Chomsky, to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. We consider how a recurrent neural network could possess linguistic capability, and investigate the properties of Elman, Narendra & Parthasarathy (N&P) and Williams & Zipser (W&Z) recurrent networks, and Frasconi-Gori-Soda (FGS) locally recurrent networks in this setting. We show that both ",
"neighbors": [
411,
2306
],
"mask": "Train"
},
{
"node_id": 2595,
"label": 2,
"text": "Title: TO IMPROVE FORECASTING \nAbstract: Working Paper IS-97-007, Leonard N. Stern School of Business, New York University. In: Journal of Computational Intelligence in Finance 6 (1998) 14-23. (Special Issue on \"Improving Generalization of Nonlinear Financial Forecasting Models\".) http://www.stern.nyu.edu/~aweigend/Research/Papers/InteractionLayer Abstract. Predictive models for financial data are often based on a large number of plausible inputs that are potentially nonlinearly combined to yield the conditional expectation of a target, such as a daily return of an asset. This paper introduces a new architecture for this task: On the output side, we predict dynamical variables such as first derivatives and curvatures on different time spans. These are subsequently combined in an interaction output layer to form several estimates of the variable of interest. Those estimates are then averaged to yield the final prediction. Independently from this idea, on the input side, we propose a new internal preprocessing layer connected with a diagonal matrix of positive weights to a layer of squashing functions. These weights adapt for each input individually and learn to squash outliers in the input. We apply these two ideas to the real world example of the daily predictions of the German stock index DAX (Deutscher Aktien Index), and compare the results to a network with a single output. The new six layer architecture is more stable in training due to two facts: (1) More information is flowing back from the outputs to the input in the backward pass; (2) The constraint of predicting first and second derivatives focuses the learning on the relevant variables for the dynamics. The architectures are compared from both the training perspective (squared errors, robust errors), and from the trading perspective (annualized returns, percent correct, Sharpe ratio). ",
"neighbors": [
1315,
2452,
2562
],
"mask": "Train"
},
{
"node_id": 2596,
"label": 3,
"text": "Title: Regression shrinkage and selection via the lasso \nAbstract: We propose a new method for estimation in linear models. The \"lasso\" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described. ",
"neighbors": [
2669,
2680,
2686
],
"mask": "Train"
},
{
"node_id": 2597,
"label": 0,
"text": "Title: Improved Heterogeneous Distance Functions \nAbstract: Instance-based learning techniques typically handle continuous and linear input values well, but often do not handle nominal input attributes appropriately. The Value Difference Metric (VDM) was designed to find reasonable distance values between nominal attribute values, but it largely ignores continuous attributes, requiring discretization to map continuous values into nominal values. This paper proposes three new heterogeneous distance functions, called the Heterogeneous Value Difference Metric (HVDM), the Interpolated Value Difference Metric (IVDM), and the Windowed Value Difference Metric (WVDM). These new distance functions are designed to handle applications with nominal attributes, continuous attributes, or both. In experiments on 48 applications the new distance metrics achieve higher classification accuracy on average than three previous distance functions on those datasets that have both nominal and continuous attributes.",
"neighbors": [
1698,
2256
],
"mask": "Test"
},
{
"node_id": 2598,
"label": 1,
"text": "Title: Duplication of Coding Segments in Genetic Programming \nAbstract: Research into the utility of non-coding segments, or introns, in genetic-based encodings has shown that they expedite the evolution of solutions in domains by protecting building blocks against destructive crossover. We consider a genetic programming system where non-coding segments can be removed, and the resultant chromosomes returned into the population. This parsimonious repair leads to premature convergence, since as we remove the naturally occurring non-coding segments, we strip away their protective backup feature. We then duplicate the coding segments in the repaired chromosomes, and place the modified chromosomes into the population. The duplication method significantly improves the learning rate in the domain we have considered. We also show that this method can be applied to other domains.",
"neighbors": [
854,
956,
1230,
1232,
1631,
2330
],
"mask": "Train"
},
{
"node_id": 2599,
"label": 2,
"text": "Title: Recognizing Handwritten Digit Strings Using Modular Spatio-temporal Connectionist Networks \nAbstract: Research into the utility of non-coding segments, or introns, in genetic-based encodings has shown that they expedite the evolution of solutions in domains by protecting building blocks against destructive crossover. We consider a genetic programming system where non-coding segments can be removed, and the resultant chromosomes returned into the population. This parsimonious repair leads to premature convergence, since as we remove the naturally occurring non-coding segments, we strip away their protective backup feature. We then duplicate the coding segments in the repaired chromosomes, and place the modified chromosomes into the population. The duplication method significantly improves the learning rate in the domain we have considered. We also show that this method can be applied to other domains.",
"neighbors": [
2162
],
"mask": "Test"
},
{
"node_id": 2600,
"label": 1,
"text": "Title: Evolution of Iteration in Genetic Programming D a v d A The solution to many\nAbstract: This paper introduces the new operation of restricted iteration creation that automatically Genetic programming extends Holland's genetic algorithm to the task of automatic programming. Early work on genetic programming demonstrated that it is possible to evolve a sequence of work-performing steps in a single result-producing branch (that is, a one-part \"main\" program). The book Genetic Programming: On the Programming of Computers by Means of Natural Selection (Koza 1992) describes an extension of Holland's genetic algorithm in which the genetic population consists of computer programs (that is, compositions of primitive functions and terminals). See also Koza and Rice (1992). In the most basic form of genetic programming (where only a single result-producing branch is evolved), genetic programming demonstrated the capability to discover a sequence (as to both its length and its content) of work-performing steps that is sufficient to produce a satisfactory solution to several problems, including many problems that have been used over the years as benchmarks in machine learning and artificial intelligence. Before applying genetic programming to a problem, the user must perform five major preparatory steps, namely identifying the terminals (inputs) of the to-be-evolved programs, identifying the primitive functions (operations) contained in the to-be-evolved programs, creating the fitness measure for evaluating how well a given program does at solving the problem at hand, choosing certain control parameters (notably population size and number of generations to be run), and determining the termination criterion and method of result designation (typically the best-so-far individual from the populations produced during the run). creates a restricted iteration-performing",
"neighbors": [
163,
523,
1409,
2220
],
"mask": "Train"
},
{
"node_id": 2601,
"label": 2,
"text": "Title: Stability and Chaos in an Inertial Two Neuron System in Statistical Mechanics and Complex Systems \nAbstract: Inertia is added to a continuous-time, Hopfield [1], effective neuron system. We explore the effects on the stability of the fixed points of the system. A two neuron system with one or two inertial terms added is shown to exhibit chaos. The chaos is confirmed by Lyapunov exponents, power spectra, and phase space plots. Key words: chaos, Hopfield model, effective neurons, Lyapunov exponent, inertia. ",
"neighbors": [
2631
],
"mask": "Train"
},
{
"node_id": 2602,
"label": 5,
"text": "Title: A Method for Partial-Memory Incremental Learning and its Application to Computer Intrusion Detection Machine Learning\nAbstract: This paper describes a partial-memory incremental learning method based on the AQ15c inductive learning system. The method maintains a representative set of past training examples that are used together with new examples to appropriately modify the currently held hypotheses. Incremental learning is evoked by feedback from the environment or from the user. Such a method is useful in applications involving intelligent agents acting in a changing environment, active vision, and dynamic knowledge-bases. For this study, the method is applied to the problem of computer intrusion detection in which symbolic profiles are learned for a computer systems users. In the experiments, the proposed method yielded significant gains in terms of learning time and memory requirements at the expense of slightly lower predictive accuracy and higher concept complexity, when compared to batch learning, in which all examples are given at once. ",
"neighbors": [
2070
],
"mask": "Train"
},
{
"node_id": 2603,
"label": 2,
"text": "Title: Pointer Adaptation and Pruning of Min-Max Fuzzy Inference and Estimation \nAbstract: This paper describes a partial-memory incremental learning method based on the AQ15c inductive learning system. The method maintains a representative set of past training examples that are used together with new examples to appropriately modify the currently held hypotheses. Incremental learning is evoked by feedback from the environment or from the user. Such a method is useful in applications involving intelligent agents acting in a changing environment, active vision, and dynamic knowledge-bases. For this study, the method is applied to the problem of computer intrusion detection in which symbolic profiles are learned for a computer systems users. In the experiments, the proposed method yielded significant gains in terms of learning time and memory requirements at the expense of slightly lower predictive accuracy and higher concept complexity, when compared to batch learning, in which all examples are given at once. ",
"neighbors": [
1756
],
"mask": "Train"
},
{
"node_id": 2604,
"label": 1,
"text": "Title: Empirical studies of the genetic algorithm with non-coding segments \nAbstract: The genetic algorithm (GA) is a problem solving method that is modelled after the process of natural selection. We are interested in studying a specific aspect of the GA: the effect of non-coding segments on GA performance. Non-coding segments are segments of bits in an individual that provide no contribution, positive or negative, to the fitness of that individual. Previous research on non-coding segments suggests that including these structures in the GA may improve GA performance. Understanding when and why this improvement occurs will help us to use the GA to its full potential. In this article, we discuss our hypotheses on non-coding segments and describe the results of our experiments. The experiments may be separated into two categories: testing our program on problems from previous related studies, and testing new hypotheses on the effect of non-coding segments. ",
"neighbors": [
163,
168,
1631,
2330
],
"mask": "Train"
},
{
"node_id": 2605,
"label": 0,
"text": "Title: Case-based Acquisition of User Preferences for Solution Improvement in Ill-Structured Domains \nAbstract: 1 We have developed an approach to acquire complicated user optimization criteria and use them to guide ",
"neighbors": [
951,
986,
1401,
2502
],
"mask": "Train"
},
{
"node_id": 2606,
"label": 2,
"text": "Title: Computational modeling of spatial attention \nAbstract: 1 We have developed an approach to acquire complicated user optimization criteria and use them to guide ",
"neighbors": [
527,
2337,
2459,
2611,
2662
],
"mask": "Validation"
},
{
"node_id": 2607,
"label": 0,
"text": "Title: Concept Learning and Flexible Weighting \nAbstract: We previously introduced an exemplar model, named GCM-ISW, that exploits a highly flexible weighting scheme. Our simulations showed that it records faster learning rates and higher asymptotic accuracies on several artificial categorization tasks than models with more limited abilities to warp input spaces. This paper extends our previous work; it describes experimental results that suggest human subjects also invoke such highly flexible schemes. In particular, our model provides significantly better fits than models with less flexibility, and we hypothesize that humans selectively weight attributes depending on an item's location in the input space. We need more flexible models Many theories of human concept learning posit that concepts are represented by prototypes (Reed, 1972) or exemplars (Medin & Schaffer, 1978). Prototype models represent concepts by the \"best example\" or \"central tendency\" of the concept. 1 A new item belongs in a category C if it is relatively similar to C's prototype. Prototype models are relatively inflexible; they discard a great deal of information that people use during concept learning (e.g., the number of exemplars in a concept (Homa & Cultice, 1984), the variability of features (Fried & Holyoak, 1984), correlations between features (Medin et al., 1982), and the particular exemplars used (Whittlesea, 1987)). of concept learning",
"neighbors": [
1987,
2074,
2310,
2369
],
"mask": "Test"
},
{
"node_id": 2608,
"label": 2,
"text": "Title: Testing the Generalized Linear Model Null Hypothesis versus `Smooth' Alternatives 1 \nAbstract: We previously introduced an exemplar model, named GCM-ISW, that exploits a highly flexible weighting scheme. Our simulations showed that it records faster learning rates and higher asymptotic accuracies on several artificial categorization tasks than models with more limited abilities to warp input spaces. This paper extends our previous work; it describes experimental results that suggest human subjects also invoke such highly flexible schemes. In particular, our model provides significantly better fits than models with less flexibility, and we hypothesize that humans selectively weight attributes depending on an item's location in the input space. We need more flexible models Many theories of human concept learning posit that concepts are represented by prototypes (Reed, 1972) or exemplars (Medin & Schaffer, 1978). Prototype models represent concepts by the \"best example\" or \"central tendency\" of the concept. 1 A new item belongs in a category C if it is relatively similar to C's prototype. Prototype models are relatively inflexible; they discard a great deal of information that people use during concept learning (e.g., the number of exemplars in a concept (Homa & Cultice, 1984), the variability of features (Fried & Holyoak, 1984), correlations between features (Medin et al., 1982), and the particular exemplars used (Whittlesea, 1987)). of concept learning",
"neighbors": [
519,
2549
],
"mask": "Train"
},
{
"node_id": 2609,
"label": 5,
"text": "Title: ILP with Noise and Fixed Example Size: A Bayesian Approach \nAbstract: Current inductive logic programming systems are limited in their handling of noise, as they employ a greedy covering approach to constructing the hypothesis one clause at a time. This approach also causes difficulty in learning recursive predicates. Additionally, many current systems have an implicit expectation that the cardinality of the positive and negative examples reflect the \"proportion\" of the concept to the instance space. A framework for learning from noisy data and fixed example size is presented. A Bayesian heuristic for finding the most probable hypothesis in this general framework is derived. This approach evaluates a hypothesis as a whole rather than one clause at a time. The heuristic, which has nice theoretical properties, is incorporated in an ILP system, Lime. Experimental results show that Lime handles noise better than FOIL and PROGOL. It is able to learn recursive definitions from noisy data on which other systems do not perform well. Lime is also capable of learning from only positive data and also from only negative data.",
"neighbors": [
344,
2079,
2080
],
"mask": "Train"
},
{
"node_id": 2610,
"label": 2,
"text": "Title: Lending Direction to Neural Networks \nAbstract: We present a general formulation for a network of stochastic directional units. This formulation is an extension of the Boltzmann machine in which the units are not binary, but take on values on a cyclic range, between 0 and 2 radians. This measure is appropriate to many domains, representing cyclic or angular values, e.g., wind direction, days of the week, phases of the moon. The state of each unit in a Directional-Unit Boltzmann Machine (DUBM) is described by a complex variable, where the phase component specifies a direction; the weights are also complex variables. We associate a quadratic energy function, and corresponding probability, with each DUBM configuration. The conditional distribution of a unit's stochastic state is a circular version of the Gaussian probability distribution, known as the von Mises distribution. In a mean-field approximation to a stochastic dubm, the phase component of a unit's state represents its mean direction, and the magnitude component specifies the degree of certainty associated with this direction. This combination of a value and a certainty provides additional representational power in a unit. We present a proof that the settling dynamics for a mean-field DUBM cause convergence to a free energy minimum. Finally, we describe a learning algorithm and simulations that demonstrate a mean-field DUBM's ability to learn interesting mappings. fl To appear in: Neural Networks.",
"neighbors": [
2337
],
"mask": "Train"
},
{
"node_id": 2611,
"label": 2,
"text": "Title: The end of the line for a brain-damaged model of unilateral neglect \nAbstract: For over a century, it has been known that damage to the right hemisphere of the brain can cause patients to be unaware of the contralesional side of space. This condition, known as unilateral neglect, represents a collection of clinically related spatial disorders characterized by the failure in free vision to respond, explore, or orient to stimuli predominantly located on the side of space opposite the damaged hemisphere. Recent studies using the simple task of line bisection, a conventional diagnostic test, have proved surprisingly revealing with respect to the spatial and attentional impairments involved in neglect. In line bisection, the patient is asked to mark the midpoint of a thin horizontal line on a sheet of paper. Neglect patients generally transect far to the right of the center. Extensive studies of line bisection have been conducted, manipulating|among other factors|line length, orientation, and position. We have simulated the pattern of results using an existing computational model of visual perception and selective attention called morsel (Mozer, 1991). morsel has already been used to model data in a related disorder, neglect dyslexia (Mozer & Behrmann, 1990). In this earlier work, morsel was \"lesioned\" in accordance with the damage we suppose to have occurred in the brains of ",
"neighbors": [
1763,
2606
],
"mask": "Train"
},
{
"node_id": 2612,
"label": 2,
"text": "Title: Models of Parallel Adaptive Logic \nAbstract: This paper overviews a proposed architecture for adaptive parallel logic referred to as ASOCS (Adaptive Self-Organizing Concurrent System). The ASOCS approach is based on an adaptive network composed of many simple computing elements which operate in a parallel asynchronous fashion. Problem specification is given to the system by presenting if-then rules in the form of boolean conjunctions. Rules are added incrementally and the system adapts to the changing rule-base. Adaptation and data processing form two separate phases of operation. During processing the system acts as a parallel hardware circuit. The adaptation process is distributed amongst the computing elements and efficiently exploits parallelism. Adaptation is done in a self-organizing fashion and takes place in time linear with the depth of the network. This paper summarizes the overall ASOCS concept and overviews three specific architectures. ",
"neighbors": [
26,
724,
1903,
2625
],
"mask": "Test"
},
{
"node_id": 2613,
"label": 1,
"text": "Title: Genetic Algorithms for Automated Tuning of Fuzzy Controllers: A Transportation Application \nAbstract: We describe the design and tuning of a controller for enforcing compliance with a prescribed velocity profile for a rail-based transportation system. This requires following a trajectory, rather than fixed set-points (as in automobiles). We synthesize a fuzzy controller for tracking the velocity profile, while providing a smooth ride and staying within the prescribed speed limits. We use a genetic algorithm to tune the fuzzy controller's performance by adjusting its parameters (the scaling factors and the membership functions) in a sequential order of significance. We show that this approach results in a controller that is superior to the manually designed one, and with only modest computational effort. This makes it possible to customize automated tuning to a variety of different configurations of the route, the terrain, the power configuration, and the cargo. ",
"neighbors": [
1756
],
"mask": "Test"
},
{
"node_id": 2614,
"label": 0,
"text": "Title: PRONOUNCING NAMES BY A COMBINATION OF RULE-BASED AND CASE-BASED REASONING \nAbstract: We describe the design and tuning of a controller for enforcing compliance with a prescribed velocity profile for a rail-based transportation system. This requires following a trajectory, rather than fixed set-points (as in automobiles). We synthesize a fuzzy controller for tracking the velocity profile, while providing a smooth ride and staying within the prescribed speed limits. We use a genetic algorithm to tune the fuzzy controller's performance by adjusting its parameters (the scaling factors and the membership functions) in a sequential order of significance. We show that this approach results in a controller that is superior to the manually designed one, and with only modest computational effort. This makes it possible to customize automated tuning to a variety of different configurations of the route, the terrain, the power configuration, and the cargo. ",
"neighbors": [
986,
1644,
2484,
2616
],
"mask": "Validation"
},
{
"node_id": 2615,
"label": 2,
"text": "Title: A Patient-Adaptive Neural Network ECG Patient Monitoring Algorithm \nAbstract: The patient-adaptive classifier was compared with a well-established baseline algorithm on six major databases, consisting of over 3 million heartbeats. When trained on an initial 77 records and tested on an additional 382 records, the patient-adaptive algorithm was found to reduce the number of Vn errors on one channel by a factor of 5, and the number of Nv errors by a factor of 10. We conclude that patient adaptation provides a significant advance in classifying normal vs. ventricular beats for ECG Patient Monitoring. ",
"neighbors": [
1647,
2074,
2084
],
"mask": "Train"
},
{
"node_id": 2616,
"label": 0,
"text": "Title: A comparison of Anapron with seven other name-pronunciation systems \nAbstract: This paper presents an experiment comparing a new name-pronunciation system, Anapron, with seven existing systems: three state-of-the-art commercial systems (from Bellcore, Bell Labs, and DEC), two variants of a machine-learning system (NETtalk), and two humans. Anapron works by combining rule-based and case-based reasoning. It is based on the idea that it is much easier to improve a rule-based system by adding case-based reasoning to it than by tuning the rules to deal with every exception. In the experiment described here, Anapron used a set of rules adapted from MITalk and elementary foreign-language textbooks, and a case library of 5000 names. With these components | which required relatively little knowledge engineering | Anapron was found to perform almost at the level of the commercial systems, and significantly better than the two versions of NETtalk. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories of Cambridge, Massachusetts; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories. All rights reserved. ",
"neighbors": [
986,
1644,
2484,
2614
],
"mask": "Train"
},
{
"node_id": 2617,
"label": 5,
"text": "Title: Predicting Ordinal Classes in ILP \nAbstract: This paper is devoted to the problem of learning to predict ordinal (i.e., ordered discrete) classes in an ILP setting. We start with a relational regression algorithm named SRT (Structural Regression Trees) and study various ways of transforming it into a first-order learner for ordinal classification tasks. Combinations of these algorithm variants with several data preprocessing methods are compared on two ILP benchmark data sets to verify the relative strengths and weaknesses of the strategies and to study the trade-off between optimal categorical classification accuracy (hit rate) and minimum distance-based error. Preliminary results indicate that this is a promising avenue towards algorithms that combine aspects of classification and regression in relational learning.",
"neighbors": [
228,
344,
1275,
1428,
2091
],
"mask": "Validation"
},
{
"node_id": 2618,
"label": 6,
"text": "Title: Mistake-Driven Learning in Text Categorization \nAbstract: Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature - text categorization. We argue that these algorithms which categorize documents by learning a linear separator in the feature space have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set. ",
"neighbors": [
453,
1269,
2509
],
"mask": "Train"
},
{
"node_id": 2619,
"label": 2,
"text": "Title: An Efficient Implementation of Sigmoidal Neural Nets in Temporal Coding with Noisy Spiking Neurons \nAbstract: We show that networks of relatively realistic mathematical models for biological neurons can in principle simulate arbitrary feedforward sigmoidal neural nets in a way which has previously not been considered. This new approach is based on temporal coding by single spikes (respectively by the timing of synchronous firing in pools of neurons), rather than on the traditional interpretation of analog variables in terms of firing rates. The resulting new simulation is substantially faster and hence more consistent with experimental results about the maximal speed of information processing in cortical neural systems. As a consequence we can show that networks of noisy spiking neurons are \"universal approximators\" in the sense that they can approximate with regard to temporal coding any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. Our new proposal for the possible organization of computations in networks of spiking neurons systems has some interesting consequences for the type of learning rules that would be needed to explain the self-organization of such networks. Finally, our fast and noise-robust implementation of sigmoidal neural nets via temporal coding points to possible new ways of implementing feedforward and recurrent sigmoidal neural nets with pulse stream VLSI. ",
"neighbors": [
328,
1774,
1968
],
"mask": "Test"
},
{
"node_id": 2620,
"label": 3,
"text": "Title: Monte Carlo Approach to Bayesian Regression Modeling \nAbstract: In the framework of a functional response model (i.e. a regression model, or a feedforward neural network) an estimator of a nonlinear response function is constructed from a set of functional units. The parameters defining these functional units are estimated using the Bayesian approach. A sample representing the Bayesian posterior distribution is obtained by applying the Markov chain Monte Carlo procedure, namely the combination of Gibbs and Metropolis-Hastings algorithms. The method is described for histogram, B-spline and radial basis function estimators of a response function. In general, the proposed approach is suitable for finding Bayes-optimal values of parameters in a complicated parameter space. We illustrate the method on numerical examples. ",
"neighbors": [
1972
],
"mask": "Test"
},
{
"node_id": 2621,
"label": 2,
"text": "Title: Information Processing in Primate Retinal Cone Pathways: A Model \nAbstract: In the framework of a functional response model (i.e. a regression model, or a feedforward neural network) an estimator of a nonlinear response function is constructed from a set of functional units. The parameters defining these functional units are estimated using the Bayesian approach. A sample representing the Bayesian posterior distribution is obtained by applying the Markov chain Monte Carlo procedure, namely the combination of Gibbs and Metropolis-Hastings algorithms. The method is described for histogram, B-spline and radial basis function estimators of a response function. In general, the proposed approach is suitable for finding Bayes-optimal values of parameters in a complicated parameter space. We illustrate the method on numerical examples. ",
"neighbors": [
2105
],
"mask": "Train"
},
{
"node_id": 2622,
"label": 0,
"text": "Title: Feature Selection by Means of a Feature Weighting Approach \nAbstract: Selecting a set of features which is optimal for a given classification task is one of the central problems in machine learning. We address the problem using the flexible and robust filter technique EUBAFES. EUBAFES is based on a feature weighting approach which computes binary feature weights and therefore a solution in the feature selection sense and also gives detailed information about feature relevance by continuous weights. Moreover the user gets not only one but several potentially optimal feature subsets which is important for filter-based feature selection algorithms since it gives the flexibility to use even complex classifiers by the application of a combined filter/wrapper approach. We applied EUBAFES on a number of artificial and real world data sets and used radial basis function networks to examine the impact of the feature subsets to classifier accuracy and complexity.",
"neighbors": [
2033
],
"mask": "Train"
},
{
"node_id": 2623,
"label": 6,
"text": "Title: Theoretical Models of Learning to Learn Editor: \nAbstract: A Machine can only learn if it is biased in some way. Typically the bias is supplied by hand, for example through the choice of an appropriate set of features. However, if the learning machine is embedded within an environment of related tasks, then it can learn its own bias by learning sufficiently many tasks from the environment [4, 6]. In this paper two models of bias learning (or equivalently, learning to learn) are introduced and the main theoretical results presented. The first model is a PAC-type model based on empirical process theory, while the second is a hierarchical Bayes model. ",
"neighbors": [
2113
],
"mask": "Test"
},
{
"node_id": 2624,
"label": 1,
"text": "Title: A Comparison between Cellular Encoding and Direct Encoding for Genetic Neural Networks \nAbstract: This paper compares the efficiency of two encoding schemes for Artificial Neural Networks optimized by evolutionary algorithms. Direct Encoding encodes the weights for an a priori fixed neural network architecture. Cellular Encoding encodes both weights and the architecture of the neural network. In previous studies, Direct Encoding and Cellular Encoding have been used to create neural networks for balancing 1 and 2 poles attached to a cart on a fixed track. The poles are balanced by a controller that pushes the cart to the left or the right. In some cases velocity information about the pole and cart is provided as an input; in other cases the network must learn to balance a single pole without velocity information. A careful study of the behavior of these systems suggests that it is possible to balance a single pole with velocity information as an input and without learning to compute the velocity. A new fitness function is introduced that forces the neural network to compute the velocity. By using this new fitness function and tuning the syntactic constraints used with cellular encoding, we achieve a tenfold speedup over our previous study and solve a more difficult problem: balancing two poles when no information about the velocity is provided as input.",
"neighbors": [
1204,
1878,
1931,
2277,
2317,
2429,
2702
],
"mask": "Train"
},
{
"node_id": 2625,
"label": 2,
"text": "Title: Digital Neural Networks \nAbstract: Demands for applications requiring massive parallelism in symbolic environments have given rebirth to research in models labeled as neura l networks. These models are made up of many simple nodes which are highly interconnected such that computation takes place as data flows amongst the nodes of the network. To present, most models have proposed nodes based on simple analog functions, where inputs are multiplied by weights and summed, the total then optionally being transformed by an arbitrary function at the node. Learning in these systems is accomplished by adjusting the weights on the input lines. This paper discusses the use of digital (boolean) nodes as a primitive building block in connectionist systems. Digital nodes naturally engender new paradigms and mechanisms for learning and processing in connectionist networks. The digital nodes are used as the basic building block of a class of models called ASOCS (Adaptive Self-Organizing Concurrent Systems). These models combine massive parallelism with the ability to adapt in a self-organizing fashion. Basic features of standard neural network learning algorithms and those proposed using digital nodes are compared and contrasted. The latter mechanisms can lead to vastly improved efficiency for many applications. ",
"neighbors": [
2612
],
"mask": "Validation"
},
{
"node_id": 2626,
"label": 0,
"text": "Title: Focusing Construction and Selection of Abductive Hypotheses \nAbstract: Many abductive understanding systems explain novel situations by a chaining process that is neutral to explainer needs beyond generating some plausible explanation for the event being explained. This paper examines the relationship of standard models of abductive understanding to the case-based explanation model. In case-based explanation, construction and selection of abductive hypotheses are focused by specific explanations of prior episodes and by goal-based criteria reflecting current information needs. The case-based method is inspired by observations of human explanation of anomalous events during everyday understanding, and this paper focuses on the method's contributions to the problems of building good explanations in everyday domains. We identify five central issues, compare how those issues are addressed in traditional and case-based explanation models, and discuss motivations for using the case-based approach to facilitate generation of plausible and useful explanations in domains that are complex and imperfectly un derstood.",
"neighbors": [
1843,
2399,
2656
],
"mask": "Test"
},
{
"node_id": 2627,
"label": 6,
"text": "Title: Probability Estimation via Error-Correcting Output Coding \nAbstract: Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify data points into one of k 2 classes. In this paper, we will extend the technique so that ECOC can also provide class probability information. ECOC is a method of converting k-class supervised learning problem into a large number L of two-class supervised learning problems and then combining the results of these L evaluations. The underlying two-class supervised learning algorithms are assumed to provide L probability estimates. The problem of computing class probabilities is formulated as an over-constrained system of L linear equations. Least squares methods are applied to solve these equations. Accuracy and reliability of the probability estimates are demonstrated.",
"neighbors": [
2423
],
"mask": "Train"
},
{
"node_id": 2628,
"label": 4,
"text": "Title: Generalizing in TD() learning \nAbstract: Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify data points into one of k 2 classes. In this paper, we will extend the technique so that ECOC can also provide class probability information. ECOC is a method of converting k-class supervised learning problem into a large number L of two-class supervised learning problems and then combining the results of these L evaluations. The underlying two-class supervised learning algorithms are assumed to provide L probability estimates. The problem of computing class probabilities is formulated as an over-constrained system of L linear equations. Least squares methods are applied to solve these equations. Accuracy and reliability of the probability estimates are demonstrated.",
"neighbors": [
565,
738,
2629
],
"mask": "Train"
},
{
"node_id": 2629,
"label": 4,
"text": "Title: Towards a Reactive Critic \nAbstract: In this paper we propose a reactive critic, that is able to respond to changing situations. We will explain why this is usefull in reinforcement learning, where the critic is used to improve the control strategy. We take a problem for which we can derive the solution analytically. This enables us to investigate the relation between the parameters and the resulting approximations of the critic. We will also demonstrate how the reactive critic reponds to changing situations.",
"neighbors": [
565,
738,
2536,
2628
],
"mask": "Validation"
},
{
"node_id": 2630,
"label": 1,
"text": "Title: Simulating Quadratic Dynamical Systems is PSPACE-complete (preliminary version) \nAbstract: Quadratic Dynamical Systems (QDS), whose definition extends that of Markov chains, are used to model phenomena in a variety of fields like statistical physics and natural evolution. Such systems also play a role in genetic algorithms, a widely-used class of heuristics that are notoriously hard to analyze. Recently Rabinovich et al. took an important step in the study of QDS's by showing, under some technical assumptions, that such systems converge to a stationary distribution (similar theorems for Markov Chains are well-known). We show, however, that the following sampling problem for QDS's is PSPACE-hard: Given an initial distribution, produce a random sample from the t'th generation. The hardness result continues to hold for very restricted classes of QDS's with very simple initial distributions, thus suggesting that QDS's are intrinsically more complicated than Markov chains. ",
"neighbors": [
1826
],
"mask": "Train"
},
{
"node_id": 2631,
"label": 2,
"text": "Title: Nonlinear Resonance in Neuron Dynamics in Statistical Mechanics and Complex Systems \nAbstract: Hubler's technique using aperiodic forces to drive nonlinear oscillators to resonance is analyzed. The oscillators being examined are effective neurons that model Hopfield neural networks. The method is shown to be valid under several different circumstances. It is verified through analysis of the power spectrum, force, resonance, and energy transfer of the system. ",
"neighbors": [
2601
],
"mask": "Validation"
},
{
"node_id": 2632,
"label": 2,
"text": "Title: The Role of Activity in Synaptic Competition at the Neuromuscular Junction \nAbstract: An extended version of the dual constraint model of motor end-plate morphogenesis is presented that includes activity dependent and independent competition. It is supported by a wide range of recent neurophysiological evidence that indicates a strong relationship between synaptic efficacy and survival. The computational model is justified at the molecular level and its predictions match the developmental and regenerative behaviour of real synapses.",
"neighbors": [
2584
],
"mask": "Train"
},
{
"node_id": 2633,
"label": 6,
"text": "Title: Learning Using Group Representations (Extended Abstract) \nAbstract: We consider the problem of learning functions over a fixed distribution. An algorithm by Kushilevitz and Mansour [7] learns any boolean function over f0; 1g n in time polynomial in the L 1 -norm of the Fourier transform of the function. We show that the KM-algorithm is a special case of a more general class of learning algorithms. This is achieved by extending their ideas using representations of finite groups. We introduce some new classes of functions which can be learned using this generalized KM algorithm. ",
"neighbors": [
2011,
2182
],
"mask": "Validation"
},
{
"node_id": 2634,
"label": 3,
"text": "Title: Bayesian Model Averaging \nAbstract: Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to over-confident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of currently available BMA software. ",
"neighbors": [
1197,
1876
],
"mask": "Train"
},
{
"node_id": 2635,
"label": 0,
"text": "Title: Utilising Explanation to Assist the Refinement of Knowledge-Based Systems \nAbstract: Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to over-confident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of currently available BMA software. ",
"neighbors": [
136,
2231
],
"mask": "Train"
},
{
"node_id": 2636,
"label": 5,
"text": "Title: EMERALD: An Integrated System of Machine Learning and Discovery Programs to Support AI Education and\nAbstract: With the rapid expansion of machine learning methods and applications, there is a strong need for computer-based interactive tools that support education in this area. The EMERALD system was developed to provide hands-on experience and an interactive demonstration of several machine learning and discovery capabilities for students in AI and cognitive science, and for AI professionals. The current version of EMERALD integrates five programs that exhibit different types of machine learning and discovery: learning rules from examples, determining structural descriptions of object classes, inventing conceptual clusterings of entities, predicting sequences of objects, and discovering equations characterizing collections of quantitative and qualitative data. EMERALD extensively uses color graphic capabilities, voice synthesis, and a natural language representation of the knowledge acquired by the learning programs. Each program is presented as a \"learning robot,\" which has its own \"personality,\" expressed by its icon, its voice, the comments it generates during the learning process, and the results of learning presented as natural language text and/or voice output. Users learn about the capabilities of each \"robot\" both by being challenged to perform some learning tasks themselves, and by creating their own similar tasks to challenge the \"robot.\" EMERALD is an extension of ILLIAN, an initial, much smaller version that toured eight major US Museums of Science, and was seen by over half a million visitors. EMERALD's architecture allows it to incorporate new programs and new capabilities. The system runs on SUN workstations, and is available to universities and educational institutions. ",
"neighbors": [
479,
2300
],
"mask": "Train"
},
{
"node_id": 2637,
"label": 1,
"text": "Title: A Computational Environment for Exhaust Nozzle Design \nAbstract: The Nozzle Design Associate (NDA) is a computational environment for the design of jet engine exhaust nozzles for supersonic aircraft. NDA may be used either to design new aircraft or to design new nozzles that adapt existing aircraft so they may be reutilized for new missions. NDA was developed in a collaboration between computer scientists at Rut-gers University and exhaust nozzle designers at General Electric Aircraft Engines and General Electric Corporate Research and Development. The NDA project has two principal goals: to provide a useful engineering tool for exhaust nozzle design, and to explore fundamental research issues that arise in the application of automated design optimization methods to realistic engineering problems. ",
"neighbors": [
2652
],
"mask": "Train"
},
{
"node_id": 2638,
"label": 1,
"text": "Title: An Evolutionary Heuristic for the Minimum Vertex Cover Problem \nAbstract: The Nozzle Design Associate (NDA) is a computational environment for the design of jet engine exhaust nozzles for supersonic aircraft. NDA may be used either to design new aircraft or to design new nozzles that adapt existing aircraft so they may be reutilized for new missions. NDA was developed in a collaboration between computer scientists at Rut-gers University and exhaust nozzle designers at General Electric Aircraft Engines and General Electric Corporate Research and Development. The NDA project has two principal goals: to provide a useful engineering tool for exhaust nozzle design, and to explore fundamental research issues that arise in the application of automated design optimization methods to realistic engineering problems. ",
"neighbors": [
163,
2202
],
"mask": "Train"
},
{
"node_id": 2639,
"label": 2,
"text": "Title: New Modes of Generalization in Perceptual Learning \nAbstract: The learning of many visual perceptual tasks, such as motion discrimination, has been shown to be specific to the practiced stimulus, and new stimuli require re-learning from scratch [1-6]. This specificity, found in so many different tasks, supports the hypothesis that perceptual learning takes place in early visual cortical areas. In contrast, using a novel paradigm in motion discrimination where learning has been shown to be specific, we found generalization: We trained subjects to discriminate the directions of moving dots, and verified that learning does not transfer from the trained direction to a new one. However, by tracking the subjects' performance across time in the new direction, we found that their rate of learning doubled. Moreover, after mastering the task with an easy stimulus, subjects who had practiced briefly to discriminate the easy stimulus in a new direction generalized to a difficult stimulus in that direction. This generalization demanded both the mastering and the brief practice. Thus learning in motion discrimination always generalizes to new stimuli. Learning is manifested in various forms: acceleration of learning rate, indirect transfer, or direct transfer [7, 8]. These results challenge existing theories of perceptual learning, and suggest a more complex picture in which learning takes place at multiple levels. Learning in biological systems is of great importance. But while cognitive learning (or \"problem solving\") is abrupt and generalizes to analogous problems, we appear to acquire our perceptual skills gradually and specifically: human subjects cannot generalize a perceptual discrimination skill to solve similar problems with different attributes. For example, in a discrimination task as described in Fig. 1, a subject who is trained to discriminate motion directions between 43:5 ffi and 46:5 ffi cannot use this skill to discriminate 133:5 ffi from 136:5 ffi . 1 Such specificity supports the hypothesis that perceptual learning embodies neuronal modifications in the brain's stimulus-specific cortical areas (e.g., visual area MT) [1-6]. In contrast to previous results of specificity, we will show, in three experiments, that learning in motion discrimination always generalizes. (1) When the task is easy, it generalizes to all directions after training in ",
"neighbors": [
2117
],
"mask": "Train"
},
{
"node_id": 2640,
"label": 5,
"text": "Title: Learning Evolving Concepts Using Partial-Memory Approach Machine Learning and Inference Laboratory \nAbstract: This paper addresses the problem of learning evolving concepts, that is, concepts whose meaning gradually evolves in time. Solving this problem is important to many applications, for example, building intelligent agents for helping users in Internet search, active vision, automatically updating knowledge-bases, or acquiring profiles of users of telecommunication networks. Requirements for a learning architecture supporting such applications include the ability to incrementally modify concept definitions to accommodate new information, fast learning and recognition rates, low memory needs, and the understandability of computer-created concept descriptions. To address these requirements, we propose a learning architecture based on Variable-Valued Logic, the Star Methodology, and the AQ algorithm. The method uses a partial-memory approach, which means that in each step of learning, the system remembers the current concept descriptions and specially selected representative examples from the past experience. The developed method has been experimentally applied to the problem of computer system intrusion detection. The results show significant advantages of the method in learning speed and memory requirements with only slight decreases in predictive accuracy and concept simplicity when compared to traditional batch-style learning in which all training examples are provided at once. ",
"neighbors": [
2070
],
"mask": "Train"
},
{
"node_id": 2641,
"label": 1,
"text": "Title: Toward Simulated Evolution of Machine-Language Iteration \nAbstract: We use a simulated evolution search (genetic programming) for the automatic synthesis of small iterative machine-language programs. For an integer register machine with an addition instruction as its sole arithmetic operator, we show that genetic programming can produce exact and general multiplication routines by synthesizing the necessary iterative control structures from primitive machine-language instructions. Our program representation is a virtual register machine that admits arbitrary control flow. Our evolution strategy furthermore does not artificially restrict the synthesis of any control structure; we only place an upper bound on program evaluation time. A program's fitness is the distance between the output produced by a test case and the desired output (multiplication). The test cases exhaustively cover multiplication over a finite subset of the natural numbers (N 10 ); yet the derived solutions constitute general multiplication for the positive integers. For this problem, simulated evolution with a two-point crossover operator examines significantly fewer individuals in finding a solution than random search. Introduction of a small rate of mutation fur ther increases the number of solutions.",
"neighbors": [
380,
1745
],
"mask": "Validation"
},
{
"node_id": 2642,
"label": 2,
"text": "Title: Learning to Play Games From Experience: An Application of Artificial Neural Networks and Temporal Difference Learning \nAbstract: We use a simulated evolution search (genetic programming) for the automatic synthesis of small iterative machine-language programs. For an integer register machine with an addition instruction as its sole arithmetic operator, we show that genetic programming can produce exact and general multiplication routines by synthesizing the necessary iterative control structures from primitive machine-language instructions. Our program representation is a virtual register machine that admits arbitrary control flow. Our evolution strategy furthermore does not artificially restrict the synthesis of any control structure; we only place an upper bound on program evaluation time. A program's fitness is the distance between the output produced by a test case and the desired output (multiplication). The test cases exhaustively cover multiplication over a finite subset of the natural numbers (N 10 ); yet the derived solutions constitute general multiplication for the positive integers. For this problem, simulated evolution with a two-point crossover operator examines significantly fewer individuals in finding a solution than random search. Introduction of a small rate of mutation fur ther increases the number of solutions.",
"neighbors": [
523,
565,
2368
],
"mask": "Validation"
},
{
"node_id": 2643,
"label": 1,
"text": "Title: GP-Music: An Interactive Genetic Programming System for Music Generation with Automated Fitness Raters \nAbstract: Technical Report CSRP-98-13 Abstract In this paper we present the GP-Music System, an interactive system which allows users to evolve short musical sequences using interactive genetic programming, and its extensions aimed at making the system fully automated. The basic GP-system works by using a genetic programming algorithm, a small set of functions for creating musical sequences, and a user interface which allows the user to rate individual sequences. With this user interactive technique it was possible to generate pleasant tunes over runs of 20 individuals over 10 generations. As the user is the bottleneck in interactive systems, the system takes rating data from a users run and uses it to train a neural network based automatic rater, or auto rater, which can replace the user in bigger runs. Using this auto rater we were able to make runs of up to 50 generations with 500 individuals per generation. The best of run pieces generated by the auto raters were pleasant but were not, in general, as nice as those generated in user interactive runs. ",
"neighbors": [
2470,
2646
],
"mask": "Train"
},
{
"node_id": 2644,
"label": 6,
"text": "Title: Bayesian Induction of Features in Temporal Domains \nAbstract: Most concept induction algorithms process concept instances described in terms of properties that remain constant over time. In temporal domains, instances are best described in terms of properties whose values vary with time. Data engineering is called upon in temporal domains to transform the raw data into an appropriate form for concept induction. I investigate a method for inducing features suitable for classifying finite, univariate, time series that are governed by unknown deterministic processes contaminated by noise. In a supervised setting, I induce piecewise polynomials of appropriate complexity to characterize the data in each class, using Bayesian model induction principles. In this study, I evaluate the proposed method empirically in a semi-deterministic domain: the waveform classification problem, originally presented in the CART book. I compared the classification accuracy of the proposed algorithm to the accuracy attained by C4.5 under various noise levels. Feature induction improved the classification accuracy in noisy situations, but degraded it when there was no noise. The results demonstrate the value of the proposed method in the presence of noise, and reveal a weakness shared by all classifiers using generative rather than discriminative models: sensitivity to model inaccuracies. ",
"neighbors": [
2134
],
"mask": "Train"
},
{
"node_id": 2645,
"label": 0,
"text": "Title: CBR on Semi-structured Documents: The ExperienceBook and the FAllQ Project \nAbstract: In this article, we present a case-based approach on flexible query answering systems in two different application areas: The ExperienceBook supports technical diagnosis in the field of system administration. In the FAllQ project we use our CBR system for document retrieval in an industrial setting. The objective of these systems is to manage knowledge stored in less structured documents. The internal case memory is implemented as a Case Retrieval Net. This allows to handle large case bases with an efficient retrieval process. In order to provide multi user access we chose the client server model combined with a web interface.",
"neighbors": [
1854,
2482
],
"mask": "Train"
},
{
"node_id": 2646,
"label": 1,
"text": "Title: Automated Fitness Raters for the GP-Music System \nAbstract: ",
"neighbors": [
2470,
2643
],
"mask": "Train"
},
{
"node_id": 2647,
"label": 4,
"text": "Title: Using Local Trajectory Optimizers To Speed Up Global Optimization In Dynamic Programming \nAbstract: Dynamic programming provides a methodology to develop planners and controllers for nonlinear systems. However, general dynamic programming is computationally intractable. We have developed procedures that allow more complex planning and control problems to be solved. We use second order local trajectory optimization to generate locally optimal plans and local models of the value function and its derivatives. We maintain global consistency of the local models of the value function, guaranteeing that our locally optimal plans are actually globally optimal, up to the resolution of our search procedures.",
"neighbors": [
2430,
2658
],
"mask": "Test"
},
{
"node_id": 2648,
"label": 2,
"text": "Title: The Task Rehearsal Method of Sequential Learning \nAbstract: An hypothesis of functional transfer of task knowledge is presented that requires the development of a measure of task relatedness and a method of sequential learning. The task rehearsal method (TRM) is introduced to address the issues of sequential learning, namely retention and transfer of knowledge. TRM is a knowledge based inductive learning system that uses functional domain knowledge as a source of inductive bias. The representations of successfully learned tasks are stored within domain knowledge. Virtual examples generated by domain knowledge are rehearsed in parallel with the each new task using either the standard multiple task learning (MTL) or the MTL neural network methods. The results of experiments conducted on a synthetic domain of seven tasks demonstrate the method's ability to retain and transfer task knowledge. TRM is shown to be effective in developing hypothesis for tasks that suffer from impoverished training sets. Difficulties encountered during sequential learning over the diverse domain reinforce the need for a more robust measure of task relatedness. ",
"neighbors": [
1889
],
"mask": "Test"
},
{
"node_id": 2649,
"label": 5,
"text": "Title: Limits of Control Flow on Parallelism \nAbstract: This paper discusses three techniques useful in relaxing the constraints imposed by control flow on parallelism: control dependence analysis, executing multiple flows of control simultaneously, and speculative execution. We evaluate these techniques by using trace simulations to find the limits of parallelism for machines that employ different combinations of these techniques. We have three major results. First, local regions of code have limited parallelism, and control dependence analysis is useful in extracting global parallelism from different parts of a program. Second, a superscalar processor is fundamentally limited because it cannot execute independent regions of code concurrently. Higher performance can be obtained with machines, such as multiprocessors and dataflow machines, that can simultaneously follow multiple flows of control. Finally, without speculative execution to allow instructions to execute before their control dependences are resolved, only modest amounts of parallelism can be obtained for programs with complex control flow. ",
"neighbors": [
249,
735,
1956,
2106,
2436
],
"mask": "Validation"
},
{
"node_id": 2650,
"label": 5,
"text": "Title: Learning Search-Control Heuristics for Logic Programs: Applications to Speedup Learning and Language Acquisition \nAbstract: This paper presents a general framework, learning search-control heuristics for logic programs, which can be used to improve both the efficiency and accuracy of knowledge-based systems expressed as definite-clause logic programs. The approach combines techniques of explanation-based learning and recent advances in inductive logic programming to learn clause-selection heuristics that guide program execution. Two specific applications of this framework are detailed: dynamic optimization of Prolog programs (improving efficiency) and natural language acquisition (improving accuracy). In the area of program optimization, a prototype system, Dolphin is able to transform some intractable specifications into polynomial-time algorithms, and outperforms competing approaches in several benchmark speedup domains. A prototype language acquisition system, Chill is also described. It is capable of automatically acquiring semantic grammars, which uniformly incorprate syntactic and semantic constraints to parse sentences into case-role representations. Initial experiments show that this approach is able to construct accurate parsers which generalize well to novel sentences and significantly outperform previous approaches to learning case-role mapping based on connectionist techniques. Planned extensions of the general framework and the specific applications as well as plans for further evaluation are also discussed.",
"neighbors": [
204,
2215
],
"mask": "Test"
},
{
"node_id": 2651,
"label": 6,
"text": "Title: Relative Loss Bounds for Multidimensional Regression Problems \nAbstract: We study on-line generalized linear regression with multidimensional outputs, i.e., neural networks with multiple output nodes but no hidden nodes. We allow at the final layer transfer functions such as the softmax function that need to consider the linear activations to all the output neurons. We also use a parameterization function which transforms parameter vectors maintained by the algorithm into the actual weights. The on-line algorithm we consider updates the parameters in an additive manner, analogous to the delta rule, but because the actual weights are obtained via the possibly nonlinear parameterization function they may behave in a very different manner. Our approach is based on applying the notion of a matching loss function in two different contexts. First, we measure the loss of the algorithm in terms of the loss that matches the transfer function used to produce the outputs. Second, the loss function that matches the parameterization function can be used both as a measure of distance between models in motivating the update rule of the algorithm and as a potential function in analyzing its relative performance compared to an arbitrary fixed model. As a result, we have a unified treatment that generalizes earlier results for the gradient descent and exponentiated gradient algorithms to multidimensional outputs, including multiclass logistic regression.",
"neighbors": [
1062,
2059
],
"mask": "Train"
},
{
"node_id": 2652,
"label": 0,
"text": "Title: Knowledge-Based Re-engineering of Legacy Programs for Robustness in Automated Design \nAbstract: Systems for automated design optimization of complex real-world objects can, in principle, be constructed by combining domain-independent numerical routines with existing domain-specific analysis and simulation programs. Unfortunately, such legacy analysis codes are frequently unsuitable for use in automated design. They may crash for large classes of input, be numerically unstable or locally non-smooth, or be highly sensitive to control parameters. To be useful, analysis programs must be modified to reduce or eliminate only the undesired behaviors, without altering the desired computation. To do this by direct modification of the programs is labor-intensive, and necessitates costly revalidation. We have implemented a high-level language and run-time environment that allow failure-handling strategies to be incorporated into existing Fortran and C analysis programs while preserving their computational integrity. Our approach relies on globally managing the execution of these programs at the level of discretely callable functions so that the computation is only affected when problems are detected. Problem handling procedures are constructed from a knowledge base of generic problem management strategies. We show that our approach is effective in improving analysis program robustness and design optimization performance in the domain of conceptual design of jet engine nozzles. ",
"neighbors": [
240,
2308,
2637
],
"mask": "Train"
},
{
"node_id": 2653,
"label": 6,
"text": "Title: On the Sample Complexity of Weakly Learning \nAbstract: In this paper, we study the sample complexity of weak learning. That is, we ask how much data must be collected from an unknown distribution in order to extract a small but significant advantage in prediction. We show that it is important to distinguish between those learning algorithms that output deterministic hypotheses and those that output randomized hypotheses. We prove that in the weak learning model, any algorithm using deterministic hypotheses to weakly learn a class of Vapnik-Chervonenkis dimension d(n) requires ( d(n)) examples. In contrast, when randomized hypotheses are allowed, we show that fi(1) examples suffice in some cases. We then show that there exists an efficient algorithm using deterministic hypotheses that weakly learns against any distribution on a set of size d(n) with only O(d(n) 2=3 ) examples. Thus for the class of symmetric Boolean functions over n variables, where the strong learning sample complexity is fi(n), the sample complexity for weak learning using deterministic hypotheses is ( n) and O(n 2=3 ), and the sample complexity for weak learning using randomized hypotheses is fi(1). Next we prove the existence of classes for which the distribution-free sample size required to obtain a slight advantage in prediction over random guessing is essentially equal to that required to obtain arbitrary accuracy. Finally, for a class of small circuits, namely all parity functions of subsets of n Boolean variables, we prove a weak learning sample complexity of fi(n). This bound holds even if the weak learning algorithm is allowed to replace random sampling with membership queries, and the target distribution is uniform on f0; 1g n . p",
"neighbors": [
456,
672,
1363,
2028
],
"mask": "Train"
},
{
"node_id": 2654,
"label": 3,
"text": "Title: On the Sample Complexity of Weakly Learning \nAbstract: Convergence Results for the EM Approach to Abstract The Expectation-Maximization (EM) algorithm is an iterative approach to maximum likelihood parameter estimation. Jordan and Jacobs (1993) recently proposed an EM algorithm for the mixture of experts architecture of Jacobs, Jordan, Nowlan and Hinton (1991) and the hierarchical mixture of experts architecture of Jordan and Jacobs (1992). They showed empirically that the EM algorithm for these architectures yields significantly faster convergence than gradient ascent. In the current paper we provide a theoretical analysis of this algorithm. We show that the algorithm can be regarded as a variable metric algorithm with its searching direction having a positive projection on the gradient of the log likelihood. We also analyze the convergence of the algorithm and provide an explicit expression for the convergence rate. In addition, we describe an acceleration technique that yields a significant speedup in simulation experiments. This report describes research done at the Dept. of Brain and Cognitive Sciences, the Center for Biological and Computational Learning, and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for CBCL is provided in part by a grant from the NSF (ASC-9217041). Support for the laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Dept. of Defense. The authors were supported by a grant from the McDonnell-Pew Foundation, by a grant from ATR Human Information Processing Research Laboratories, by a grant from Siemens Corporation, by by grant IRI-9013991 from the National Science Foundation, by grant N00014-90-J-1942 from the Office of Naval Research, and by NSF grant ECS-9216531 to support an Initiative in Intelligent Control at MIT. Michael I. Jordan is a NSF Presidential Young Investigator. ",
"neighbors": [
74,
2421
],
"mask": "Train"
},
{
"node_id": 2655,
"label": 4,
"text": "Title: Associative Reinforcement Learning: A Generate and Test Algorithm \nAbstract: An agent that must learn to act in the world by trial and error faces the reinforcement learning problem, which is quite different from standard concept learning. Although good algorithms exist for this problem in the general case, they are often quite inefficient and do not exhibit generalization. One strategy is to find restricted classes of action policies that can be learned more efficiently. This paper pursues that strategy by developing an algorithm that performans an on-line search through the space of action mappings, expressed as Boolean formulae. The algorithm is compared with existing methods in empirical trials and is shown to have very good performance. ",
"neighbors": [
1975
],
"mask": "Train"
},
{
"node_id": 2656,
"label": 0,
"text": "Title: ADAPtER: an Integrated Diagnostic System Combining Case-Based and Abductive Reasoning \nAbstract: The aim of this paper is to describe the ADAPtER system, a diagnostic architecture combining case-based reasoning with abductive reasoning and exploiting the adaptation of the solution of old episodes, in order to focus the reasoning process. Domain knowledge is represented via a logical model and basic mechanisms, based on abductive reasoning with consistency constraints, have been defined for solving complex diagnostic problems involving multiple faults. The model-based component has been supplemented with a case memory and adaptation mechanisms have been developed, in order to make the diagnostic system able to exploit past experience in solving new cases. A heuristic function is proposed, able to rank the solutions associated to retrieved cases with respect to the adaptation effort needed to transform such solutions into possible solutions for the current case. We will discuss some preliminary experiments showing the validity of the above heuristic and the convenience of solving a new case by adapting a retrieved solution rather than solving the new problem from scratch.",
"neighbors": [
799,
1699,
2626
],
"mask": "Train"
},
{
"node_id": 2657,
"label": 6,
"text": "Title: Learning Complex Boolean Functions: Algorithms and Applications \nAbstract: The most commonly used neural network models are not well suited to direct digital implementations because each node needs to perform a large number of operations between floating point values. Fortunately, the ability to learn from examples and to generalize is not restricted to networks of this type. Indeed, networks where each node implements a simple Boolean function (Boolean networks) can be designed in such a way as to exhibit similar properties. Two algorithms that generate Boolean networks from examples are presented. The results show that these algorithms generalize very well in a class of problems that accept compact Boolean network descriptions. The techniques described are general and can be applied to tasks that are not known to have that characteristic. Two examples of applications are presented: image reconstruction and hand-written character recognition.",
"neighbors": [
1161,
2423
],
"mask": "Validation"
},
{
"node_id": 2658,
"label": 0,
"text": "Title: Control Systems Magazine, 14, 1, pp.57-71. Robot Juggling: An Implementation of Memory-based Learning \nAbstract: This paper explores issues involved in implementing robot learning for a challenging dynamic task, using a case study from robot juggling. We use a memory-based local model - ing approach (locally weighted regression) to represent a learned model of the task to be performed. Statistical tests are given to examine the uncertainty of a model, to optimize its pre diction quality, and to deal with noisy and corrupted data. We develop an exploration algorithm that explicitly deals with prediction accuracy requirements dur ing explo - ration. Using all these ingredients in combination with methods from optimal control, our robot achieves fast real - time learning of the task within 40 to 100 trials. * Address of both authors: Massachusetts Institute of Technology, The Artificial Intelligence Laboratory & The Department of Brain and Cognitive Sciences, 545 Technology Square, Cambride, MA 02139, USA. Email: ss-chaal@ai.mit.edu, cga@ai.mit.edu. Support was provided by the Air Force Office of Sci entific Research and by Siemens Cor pora tion. Support for the first author was provided by the Ger man Scholar ship Foundation and the Alexander von Hum boldt Founda tion. Support for the second author was provided by a Na tional Sci ence Foundation Pre sidential Young Investigator Award. We thank Gideon Stein for im ple ment ing the first version of LWR on the i860 microprocessor, and Gerrie van Zyl for build ing the devil stick robot and implementing the first version of devil stick learning. ",
"neighbors": [
427,
477,
566,
691,
843,
1559,
1860,
2647
],
"mask": "Train"
},
{
"node_id": 2659,
"label": 1,
"text": "Title: Adaptation of Genetic Algorithms for Engineering Design Optimization \nAbstract: Genetic algorithms have been extensively used in different domains as a means of doing global optimization in a simple yet reliable manner. However, in some realistic engineering design optimization domains it was observed that a simple classical implementation of the GA based on binary encoding and bit mutation and crossover was sometimes inefficient and unable to reach the global optimum. Using floating point representation alone does not eliminate the problem. In this paper we describe a way of augmenting the GA with new operators and strategies that take advantage of the structure and properties of such engineering design domains. Empirical results (initially in the domain of conceptual design of supersonic transport aircraft and the domain of high performance supersonic missile inlet design) demonstrate that the newly formulated GA can be significantly better than the classical GA in terms of efficiency and reliability. http://www.cs.rutgers.edu/~shehata/papers.html",
"neighbors": [
163,
743,
2030,
2316
],
"mask": "Train"
},
{
"node_id": 2660,
"label": 3,
"text": "Title: Discovering Structure in Continuous Variables Using Bayesian Networks \nAbstract: We study Bayesian networks for continuous variables using nonlinear conditional density estimators. We demonstrate that useful structures can be extracted from a data set in a self-organized way and we present sampling techniques for belief update based on ",
"neighbors": [
558,
577,
1933
],
"mask": "Validation"
},
{
"node_id": 2661,
"label": 3,
"text": "Title: Minimax Risk over l p -Balls for l q -error Key Words. Minimax Decision Theory.\nAbstract: Consider estimating the mean vector from data N n (; 2 I) with l q norm loss, q 1, when is known to lie in an n-dimensional l p ball, p 2 (0; 1). For large n, the ratio of minimax linear risk to minimax risk can be arbitrarily large if p < q. Obvious exceptions aside, the limiting ratio equals 1 only if p = q = 2. Our arguments are mostly indirect, involving a reduction to a univariate Bayes minimax problem. When p < q, simple non-linear co-ordinatewise threshold rules are asymptotically minimax at small signal-to-noise ratios, and within a bounded factor of asymptotic minimaxity in general. Our results are basic to a theory of estimation in Besov spaces ",
"neighbors": [
1910,
2159,
2242,
2375,
2506
],
"mask": "Train"
},
{
"node_id": 2662,
"label": 2,
"text": "Title: Efficient Visual Search: A Connectionist Solution \nAbstract: Searching for objects in scenes is a natural task for people and has been extensively studied by psychologists. In this paper we examine this task from a connectionist perspective. Computational complexity arguments suggest that parallel feed-forward networks cannot perform this task efficiently. One difficulty is that, in order to distinguish the target from distractors, a combination of features must be associated with a single object. Often called the binding problem, this requirement presents a serious hurdle for connectionist models of visual processing when multiple objects are present. Psychophysical experiments suggest that people use covert visual attention to get around this problem. In this paper we describe a psychologically plausible system which uses a focus of attention mechanism to locate target objects. A strategy that combines top-down and bottom-up information is used to minimize search time. The behavior of the resulting system matches the reaction time behavior of people in several interesting tasks. ",
"neighbors": [
527,
1822,
2606
],
"mask": "Train"
},
{
"node_id": 2663,
"label": 5,
"text": "Title: Inverting Implication with Small Training Sets \nAbstract: We present an algorithm for inducing recursive clauses using inverse implication (rather than inverse resolution) as the underlying generalization method. Our approach applies to a class of logic programs similar to the class of primitive recursive functions. Induction is performed using a small number of positive examples that need not be along the same resolution path. Our algorithm, implemented in a system named CRUSTACEAN, locates matched lists of generating terms that determine the pattern of decomposition exhibited in the (target) recursive clause. Our theoretical analysis defines the class of logic programs for which our approach is complete, described in terms characteristic of other ILP approaches. Our current implementation is considerably faster than previously reported. We present evidence demonstrating that, given randomly selected inputs, increasing the number of positive examples increases accuracy and reduces the number of outputs. We relate our approach to similar recent work on inducing recursive clauses.",
"neighbors": [
1781,
2229
],
"mask": "Train"
},
{
"node_id": 2664,
"label": 1,
"text": "Title: Coevolving Communicative Behavior in a Linear Pursuer-Evader Game \nAbstract: The pursuer-evader (PE) game is recognized as an important domain in which to study the coevolution of robust adaptive behavior and protean behavior (Miller and Cliff, 1994). Nevertheless, the potential of the game is largely unrealized due to methodological hurdles in coevolutionary simulation raised by PE; versions of the game that have optimal solutions (Isaacs, 1965) are closed-ended, while other formulations are opaque with respect to their solution space, for the lack of a rigorous metric of agent behavior. This inability to characterize behavior, in turn, obfuscates coevolutionary dynamics. We present a new formulation of PE that affords a rigorous measure of agent behavior and system dynamics. The game is moved from the two-dimensional plane to the one-dimensional bit-string; at each time step, the evader generates a bit that the pursuer must simultaneously predict. Because behavior is expressed as a time series, we can employ information theory to provide quantitative analysis of agent activity. Further, this version of PE opens vistas onto the communicative component of pursuit and evasion behavior, providing an open-ended serial communications channel and an open world (via coevolution). Results show that subtle changes to our game determine whether it is open-ended, and profoundly affect the viability of arms-race dynamics. ",
"neighbors": [
189,
415,
712,
2102
],
"mask": "Train"
},
{
"node_id": 2665,
"label": 0,
"text": "Title: Parametric Design Problem Solving \nAbstract: The aim of this paper is to understand what is involved in parametric design problem solving. In order to achieve this goal, in this paper i) we identify and detail the conceptual elements defining a parametric design task specification; ii) we illustrate how these elements are interpreted and operationalised during the design process; and iii) we formulate a generic model of parametric design problem solving. We then redescribe a number of problem solving methods in terms of the proposed generic model and we show that such a redescription enables us to provide a more precise account of the different competence behaviours expressed by the methods in Design is about constructing artifacts. This means that, broadly speaking, any design process is 'creative', in the sense that a design process produces a 'new solution', as opposed to selecting a solution from a predefined set. While recognizing the essential creative elements present in any design process, researchers, e.g. Gero (1990), restrict the use of the term 'creative design' to those design applications where the design elements of the target artifact cannot be selected from a predefined set. For instance, when designing a new car model it is normally the case that some design innovations are included, which were not present in previous car designs. In other words it is not (always) possible to characterise the process of designing a new car as one in which components are assembled and configured from a predefined set. Nevertheless, in a large number of real-world applications it is possible to assume that the target artifact is going to be designed in terms of predefined design elements. In such a scenario the design process consists of assembling and configuring these preexisting design elements in a way which satisfies the design requirements and constraints, and approximates some, typically cost-related, optimization criterion. This class of design tasks takes the name of configuration design (Stefik, 1995). In many cases, typically when the problem in hand does not exhibit complex spatial requirements and all possible solutions adhere to a common solution template, it is possible to simplify the configuration design problem even further, by modelling the target artifact as a set of parameters and characterizing design problem solving as the process of assigning values to parameters in accordance with the given design requirements, constraints, and optimization criterion. When this assumption is true for a particular task, we say that this is a parametric design task. The VT application (Marcus et al., 1988; Yost and Rothenfluh, 1996) provides a well-known example of a parametric design task. The aim of this paper is to understand what is involved in parametric design problem solving. In order to achieve this goal, in this paper i) we identify and detail the conceptual elements defining a parametric design task specification; ii) we illustrate how these elements are interpreted and operationalised during the design process; and iii) we produce a generic model of parametric design problem solving, characterised at the knowledge level, which generalizes from existing methods for parametric design. We then redescribe a number of problem solving methods in terms of the question.",
"neighbors": [
2395
],
"mask": "Test"
},
{
"node_id": 2666,
"label": 4,
"text": "Title: A Distributed Reinforcement Learning Scheme for Network Routing \nAbstract: In this paper we describe a self-adjusting algorithm for packet routing in which a reinforcement learning method is embedded into each node of a network. Only local information is used at each node to keep accurate statistics on which routing policies lead to minimal routing times. In simple experiments involving a 36-node irregularly-connected network, this learning approach proves superior to routing based on precomputed shortest paths.",
"neighbors": [
451,
2411,
2453
],
"mask": "Train"
},
{
"node_id": 2667,
"label": 1,
"text": "Title: Biological metaphors and the design of modular artificial neural networks Master's thesis of \nAbstract: In this paper we describe a self-adjusting algorithm for packet routing in which a reinforcement learning method is embedded into each node of a network. Only local information is used at each node to keep accurate statistics on which routing policies lead to minimal routing times. In simple experiments involving a 36-node irregularly-connected network, this learning approach proves superior to routing based on precomputed shortest paths.",
"neighbors": [
163,
2295,
2381,
2429,
2504
],
"mask": "Validation"
},
{
"node_id": 2668,
"label": 5,
"text": "Title: Efficient Instruction Scheduling Using Finite State Automata \nAbstract: Modern compilers employ sophisticated instruction scheduling techniques to shorten the number of cycles taken to execute the instruction stream. In addition to correctness, the instruction scheduler must also ensure that hardware resources are not oversubscribed in any cycle. For a contemporary processor implementation with multiple pipelines and complex resource usage restrictions, this is not an easy task. The complexity involved in reasoning about such resource hazards is one of the primary factors that constrain the instruction scheduler from performing many aggressive transformations. For example, the ability to do code motion or instruction replacement in the middle of an already scheduled block would be a very powerful transformation if it could be performed efficiently. We extend a technique for detecting pipeline resource hazards based on finite state automata, to support the efficient implementation of such transformations that are essential for aggressive instruction scheduling beyond basic blocks. Although similar code transformations can be supported by other schemes such as reservation tables, our scheme is superior in terms of space and time. A global instruction scheduler that used these techniques was implemented in the KSR compiler. ",
"neighbors": [
2365
],
"mask": "Train"
},
{
"node_id": 2669,
"label": 3,
"text": "Title: A proposal for variable selection in the Cox model \nAbstract: We propose a new method for variable selection and estimation in Cox's proportional hazards model. Our proposal minimizes the log partial likelihood subject to the sum of the absolute values of the parameters being bounded by a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly zero and hence gives interpretable models. The method is a variation of the \"lasso\" proposal of Tibshirani (1994), designed for the linear regression context. Simulations indicate that the lasso can be more accurate than stepwise selection in this setting. ",
"neighbors": [
2596
],
"mask": "Validation"
},
{
"node_id": 2670,
"label": 2,
"text": "Title: A proposal for variable selection in the Cox model \nAbstract: GMD Report #633 Abstract Many of the current artificial neural network systems have serious limitations, concerning accessibility, flexibility, scaling and reliability. In order to go some way to removing these we suggest a reflective neural network architecture. In such an architecture, the modular structure is the most important element. The building-block elements are called \"minos' modules. They perform self-observation and inform on the current level of development, or scope of expertise, within the module. A Pandemonium system integrates such submodules so that they work together to handle mapping tasks. Network complexity limitations are attacked in this way with the Pandemonium problem decomposition paradigm, and both static and dynamic unreliability of the whole Pandemonium system is effectively eliminated through the generation and interpretation of confidence and ambiguity measures at every moment during the development of the system. Two problem domains are used to test and demonstrate various aspects of our architecture. Reliability and quality measures are defined for systems that only answer part of the time. Our system achieves better quality values than single networks of larger size for a handwritten digit problem. When both second and third best answers are accepted, our system is left with only 5% error on the test set, 2.1% better than the best single net. It is also shown how the system can elegantly learn to handle garbage patterns. With the parity problem it is demonstrated how complexity of problems may be decomposed automatically by the system, through solving it with networks of size smaller than a single net is required to be. Even when the system does not find a solution to the parity problem, because networks of too small a size are used, the reliability remains around 99-100%. Our Pandemonium architecture gives more power and flexibility to the higher levels of a large hybrid system than a single net system can, offering useful information for higher-level feedback loops, through which reliability of answers may be intelligently traded for less reliable but important \"intuitional\" answers. In providing weighted alternatives and possible generalizations, this architecture gives the best possible service to the larger system of which it will form part. ",
"neighbors": [
253,
489,
1815
],
"mask": "Train"
},
{
"node_id": 2671,
"label": 2,
"text": "Title: Rejection of Incorrect Answers from a Neural Net Classifier \nAbstract: Frank Smieja Report number: 1993/2 ",
"neighbors": [
1815
],
"mask": "Train"
},
{
"node_id": 2672,
"label": 2,
"text": "Title: REINFORCEMENT LEARNING FOR COORDINATED REACTIVE CONTROL \nAbstract: The demands of rapid response and the complexity of many environments make it difficult to decompose, tune and coordinate reactive behaviors while ensuring consistency. Reinforcement learning networks can address the tuning problem, but do not address the problem of decomposition and coordination. We hypothesize that interacting reactions can often be decomposed into separate control tasks resident in separate networks and that the interaction can be coordinated through the tuning mechanism and a higher level controller. To explore these issues, we have implemented a reinforcement learning architecture as the reactive component of a two layer control system for a simulated race car. By varying the architecture, we test whether decomposing reactivity into separate controllers leads to superior overall performance and learning convergence in our domain. ",
"neighbors": [
465,
565,
636,
2409
],
"mask": "Train"
},
{
"node_id": 2673,
"label": 1,
"text": "Title: A genetic prototype learner \nAbstract: Supervised classification problems have received considerable attention from the machine learning community. We propose a novel genetic algorithm based prototype learning system, PLEASE, for this class of problems. Given a set of prototypes for each of the possible classes, the class of an input instance is determined by the prototype nearest to this instance. We assume ordinal attributes and prototypes are represented as sets of feature-value pairs. A genetic algorithm is used to evolve the number of prototypes per class and their positions on the input space as determined by corresponding feature-value pairs. Comparisons with C4.5 on a set of artificial problems of controlled complexity demonstrate the effectiveness of the pro posed system.",
"neighbors": [
163,
638,
995,
1224,
2541
],
"mask": "Test"
},
{
"node_id": 2674,
"label": 2,
"text": "Title: Comparing Methods for Refining Certainty-Factor Rule-Bases \nAbstract: This paper compares two methods for refining uncertain knowledge bases using propositional certainty-factor rules. The first method, implemented in the Rapture system, employs neural-network training to refine the certainties of existing rules but uses a symbolic technique to add new rules. The second method, based on the one used in the Kbann system, initially adds a complete set of potential new rules with very low certainty and allows neural-network training to filter and adjust these rules. Experimental results indicate that the former method results in significantly faster training and produces much simpler refined rule bases with slightly greater accuracy.",
"neighbors": [
159,
1352,
2066,
2543
],
"mask": "Train"
},
{
"node_id": 2675,
"label": 6,
"text": "Title: CONSTRUCTING CONJUNCTIVE TESTS FOR DECISION TREES \nAbstract: This paper discusses an approach of constructing new attributes based on decision trees and production rules. It can improve the concepts learned in the form of decision trees by simplifying them and improving their predictive accuracy. In addition, this approach can distinguish relevant primitive attributes from irrelevant primitive attributes. ",
"neighbors": [
1256,
1595,
1824,
1862,
1964
],
"mask": "Test"
},
{
"node_id": 2676,
"label": 2,
"text": "Title: Models of perceptual learning in vernier hyperacuity \nAbstract: Performance of human subjects in a wide variety of early visual processing tasks improves with practice. HyperBF networks (Poggio and Girosi, 1990) constitute a mathematically well-founded framework for understanding such improvement in performance, or perceptual learning, in the class of tasks known as visual hyperacuity. The present article concentrates on two issues raised by the recent psychophysical and computational findings reported in (Poggio et al., 1992b; Fahle and Edelman, 1992). First, we develop a biologically plausible extension of the HyperBF model that takes into account basic features of the functional architecture of early vision. Second, we explore various learning modes that can coexist within the HyperBF framework and focus on two unsupervised learning rules which may be involved in hyperacuity learning. Finally, we report results of psychophysical experiments that are consistent with the hypothesis that activity-dependent presynaptic amplification may be involved in perceptual learning in hyperacuity. ",
"neighbors": [
611,
1787,
2385
],
"mask": "Test"
},
{
"node_id": 2677,
"label": 3,
"text": "Title: Mining and Model Simplicity: A Case Study in Diagnosis \nAbstract: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD), 1996. The official version of this paper has been published by the American Association for Artificial Intelligence (http://www.aaai.org) c fl 1996, American Association for Artificial Intelligence. All rights reserved. Abstract We describe the results of performing data mining on a challenging medical diagnosis domain, acute abdominal pain. This domain is well known to be difficult, yielding little more than 60% predictive accuracy for most human and machine diagnosticians. Moreover, many researchers argue that one of the simplest approaches, the naive Bayesian classifier, is optimal. By comparing the performance of the naive Bayesian classifier to its more general cousin, the Bayesian network classifier, and to selective Bayesian classifiers with just 10% of the total attributes, we show that the simplest models perform at least as well as the more complex models. We argue that simple models like the selective naive Bayesian classifier will perform as well as more complicated models for similarly complex domains with relatively small data sets, thereby calling into question the extra expense necessary to induce more complex models. ",
"neighbors": [
1339,
1582,
1909,
2017,
2338
],
"mask": "Train"
},
{
"node_id": 2678,
"label": 2,
"text": "Title: Egocentric spatial representation in early vision \nAbstract: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD), 1996. The official version of this paper has been published by the American Association for Artificial Intelligence (http://www.aaai.org) c fl 1996, American Association for Artificial Intelligence. All rights reserved. Abstract We describe the results of performing data mining on a challenging medical diagnosis domain, acute abdominal pain. This domain is well known to be difficult, yielding little more than 60% predictive accuracy for most human and machine diagnosticians. Moreover, many researchers argue that one of the simplest approaches, the naive Bayesian classifier, is optimal. By comparing the performance of the naive Bayesian classifier to its more general cousin, the Bayesian network classifier, and to selective Bayesian classifiers with just 10% of the total attributes, we show that the simplest models perform at least as well as the more complex models. We argue that simple models like the selective naive Bayesian classifier will perform as well as more complicated models for similarly complex domains with relatively small data sets, thereby calling into question the extra expense necessary to induce more complex models. ",
"neighbors": [
2477,
2576
],
"mask": "Validation"
},
{
"node_id": 2679,
"label": 3,
"text": "Title: STUDIES OF QUALITY MONITOR TIME SERIES: THE V.A. HOSPITAL SYSTEM build on foundational contributions in\nAbstract: This report describes statistical research and development work on hospital quality monitor data sets from the nationwide VA hospital system. The project covers statistical analysis, exploration and modelling of data from several quality monitors, with the primary goals of: (a) understanding patterns of variability over time in hospital-level and monitor area specific quality monitor measures, and (b) understanding patterns of dependencies between sets of monitors. We present discussion of basic perspectives on data structure and preliminary data exploration for three monitors, followed by developments of several classes of formal models. We identify classes of hierarchical random effects time series models to be of relevance in mod-elling single or multiple monitor time series. We summarise basic model features and results of analyses of the three monitor data sets, in both single and multiple monitor frameworks, and present a variety of summary inferences in graphical displays. Our discussion includes summary conclusions related to the two key goals, discussions of questions of comparisons across hospitals, and some recommendations about further potential substantive and statistical investigations. ",
"neighbors": [
99,
2578
],
"mask": "Train"
},
{
"node_id": 2680,
"label": 2,
"text": "Title: Least Absolute Shrinkage is Equivalent to Quadratic Penalization \nAbstract: Adaptive ridge is a special form of ridge regression, balancing the quadratic penalization on each parameter of the model. This paper shows the equivalence between adaptive ridge and lasso (least absolute shrinkage and selection operator). This equivalence states that both procedures produce the same estimate. Least absolute shrinkage can thus be viewed as a particular quadratic penalization. From this observation, we derive an EM algorithm to compute the lasso solution. We finally present a series of applications of this type of algorithm in regres sion problems: kernel regression, additive modeling and neural net training.",
"neighbors": [
101,
157,
2596
],
"mask": "Train"
},
{
"node_id": 2681,
"label": 2,
"text": "Title: Regression with Input-dependent Noise: A Gaussian Process Treatment \nAbstract: Technical Report NCRG/98/002, available from http://www.ncrg.aston.ac.uk/ To appear in Advances in Neural Information Processing Systems 10 eds. M. I. Jordan, M. J. Kearns and S. A. Solla. Lawrence Erlbaum (1998). Abstract Gaussian processes provide natural non-parametric prior distributions over regression functions. In this paper we consider regression problems where there is noise on the output, and the variance of the noise depends on the inputs. If we assume that the noise is a smooth function of the inputs, then it is natural to model the noise variance using a second Gaussian process, in addition to the Gaussian process governing the noise-free output value. We show that prior uncertainty about the parameters controlling both processes can be handled and that the posterior distribution of the noise rate can be sampled from using Markov chain Monte Carlo methods. Our results on a synthetic data set give a posterior noise variance that well-approximates the true variance.",
"neighbors": [
78,
160,
1857
],
"mask": "Validation"
},
{
"node_id": 2682,
"label": 3,
"text": "Title: Importance Sampling \nAbstract: Technical Report No. 9805, Department of Statistics, University of Toronto Abstract. Simulated annealing | moving from a tractable distribution to a distribution of interest via a sequence of intermediate distributions | has traditionally been used as an inexact method of handling isolated modes in Markov chain samplers. Here, it is shown how one can use the Markov chain transitions for such an annealing sequence to define an importance sampler. The Markov chain aspect allows this method to perform acceptably even for high-dimensional problems, where finding good importance sampling distributions would otherwise be very difficult, while the use of importance weights ensures that the estimates found converge to the correct values as the number of annealing runs increases. This annealed importance sampling procedure resembles the second half of the previously-studied tempered transitions, and can be seen as a generalization of a recently-proposed variant of sequential importance sampling. It is also related to thermodynamic integration methods for estimating ratios of normalizing constants. Annealed importance sampling is most attractive when isolated modes are present, or when estimates of normalizing constants are required, but it may also be more generally useful, since its independent sampling allows one to bypass some of the problems of assessing convergence and autocorrelation in Markov chain samplers. ",
"neighbors": [
48,
2348
],
"mask": "Train"
},
{
"node_id": 2683,
"label": 2,
"text": "Title: Introduction to Radial Basis Function Networks \nAbstract: This document is an introduction to radial basis function (RBF) networks, a type of artificial neural network for application to problems of supervised learning (e.g. regression, classification and time series prediction). It is available in either PostScript or hyper-text 2 . ",
"neighbors": [
427,
668,
687,
2044
],
"mask": "Train"
},
{
"node_id": 2684,
"label": 2,
"text": "Title: INVERSION IN TIME \nAbstract: Inversion of multilayer synchronous networks is a method which tries to answer questions like \"What kind of input will give a desired output?\" or \"Is it possible to get a desired output (under special input/output constraints)?\". We will describe two methods of inverting a connectionist network. Firstly, we extend inversion via backpropagation (Linden/Kindermann [4], Williams [11]) to recurrent (El-man [1], Jordan [3], Mozer [5], Williams/Zipser [10]), time-delayed (Waibel at al. [9]) and discrete versions of continuous networks (Pineda [7], Pearlmutter [6]). The result of inversion is an input vector. The corresponding output vector is equal to the target vector except a small remainder. The knowledge of those attractors may help to understand the function and the generalization qualities of connectionist systems of this kind. Secondly, we introduce a new inversion method for proving the non-existence of an input combination under special constraints, e.g. in a subspace of the input space. This method works by iterative exclusion of invalid activation values. It might be a helpful way to judge the properties of a trained network. We conclude with simulation results of three different tasks: XOR, morse signal decoding and handwritten digit recognition. ",
"neighbors": [
2523
],
"mask": "Validation"
},
{
"node_id": 2685,
"label": 6,
"text": "Title: Learning under persistent drift \nAbstract: In this paper we study learning algorithms for environments which are changing over time. Unlike most previous work, we are interested in the case where the changes might be rapid but their \"direction\" is relatively constant. We model this type of change by assuming that the target distribution is changing continuously at a constant rate from one extreme distribution to another. We show in this case how to use a simple weighting scheme to estimate the error of an hypothesis, and using this estimate, to minimize the error of the prediction.",
"neighbors": [
2053,
2054
],
"mask": "Test"
},
{
"node_id": 2686,
"label": 2,
"text": "Title: Penalisation multiple adaptative un nouvel algorithme de regression, la penalisation multiple adapta-tive. Cet algorithme represente\nAbstract: Chaque parametre du modele est penalise individuellement. Le reglage de ces penalisations se fait automatiquement a partir de la definition d'un hyperparametre de regularisation globale. Cet hyperparametre, qui controle la complexite du regresseur, peut ^etre estime par des techniques de reechantillonnage. Nous montrons experimentalement les performances et la stabilite de la penalisation multiple adaptative dans le cadre de la regression lineaire. Nous avons choisi des problemes pour lesquels le probleme du controle de la complexite est particulierement crucial, comme dans le cadre plus general de l'estimation fonctionnelle. Les comparaisons avec les moindres carres regularises et la selection de variables nous permettent de deduire les conditions d'application de chaque algorithme de penalisation. Lors des simulations, nous testons egalement plusieurs techniques de reechantillonnage. Ces techniques sont utilisees pour selectionner la complexite optimale des estimateurs de la fonction de regression. Nous comparons les pertes occasionnees par chacune d'entre elles lors de la selection de modeles sous-optimaux. Nous regardons egalement si elles permettent de determiner l'estimateur de la fonction de regression minimisant l'erreur en generalisation parmi les differentes methodes de penalisation en competition. ",
"neighbors": [
101,
916,
2596
],
"mask": "Train"
},
{
"node_id": 2687,
"label": 4,
"text": "Title: ALECSYS and the AutonoMouse: Learning to Control a Real Robot by Distributed Classifier Systems \nAbstract: Chaque parametre du modele est penalise individuellement. Le reglage de ces penalisations se fait automatiquement a partir de la definition d'un hyperparametre de regularisation globale. Cet hyperparametre, qui controle la complexite du regresseur, peut ^etre estime par des techniques de reechantillonnage. Nous montrons experimentalement les performances et la stabilite de la penalisation multiple adaptative dans le cadre de la regression lineaire. Nous avons choisi des problemes pour lesquels le probleme du controle de la complexite est particulierement crucial, comme dans le cadre plus general de l'estimation fonctionnelle. Les comparaisons avec les moindres carres regularises et la selection de variables nous permettent de deduire les conditions d'application de chaque algorithme de penalisation. Lors des simulations, nous testons egalement plusieurs techniques de reechantillonnage. Ces techniques sont utilisees pour selectionner la complexite optimale des estimateurs de la fonction de regression. Nous comparons les pertes occasionnees par chacune d'entre elles lors de la selection de modeles sous-optimaux. Nous regardons egalement si elles permettent de determiner l'estimateur de la fonction de regression minimisant l'erreur en generalisation parmi les differentes methodes de penalisation en competition. ",
"neighbors": [
636,
764,
2174
],
"mask": "Train"
},
{
"node_id": 2688,
"label": 1,
"text": "Title: An Adverse Interaction between the Crossover Operator and a Restriction on Tree Depth of Crossover\nAbstract: The Crossover operator is common to most implementations of Genetic Programming (GP). Another, usually unavoidable, factor is some form of restriction on the size of trees in the GP population. This paper concentrates on the interaction between the Crossover operator and a restriction on tree depth demonstrated by the MAX problem, which involves returning the largest possible value for given function and terminal sets. ",
"neighbors": [
1784,
1839,
1840,
2216
],
"mask": "Train"
},
{
"node_id": 2689,
"label": 6,
"text": "Title: Expected Mistake Bound Model for On-Line Reinforcement Learning \nAbstract: We propose a model of efficient on-line reinforcement learning based on the expected mistake bound framework introduced by Haussler, Littlestone and Warmuth (1987). The measure of performance we use is the expected difference between the total reward received by the learning agent and that received by an agent behaving optimally from the start. We call this expected difference the cumulative mistake of the agent and we require that it \"levels off\" at a reasonably fast rate as the learning progresses. We show that this model is polynomially equivalent to the PAC model of off-line reinforcement learning introduced in (Fiechter, 1994). In particular we show how an off-line PAC reinforcement learning algorithm can be transformed into an efficient on-line algorithm in a simple and practical way. An immediate consequence of this result is that the PAC algorithm for the general finite state-space reinforcement learning problem described in (Fiechter, 1994) can be transformed into a polynomial on-line al gorithm with guaranteed performances.",
"neighbors": [
1975,
2209
],
"mask": "Test"
},
{
"node_id": 2690,
"label": 6,
"text": "Title: Robust Trainability of Single Neurons \nAbstract: We propose a model of efficient on-line reinforcement learning based on the expected mistake bound framework introduced by Haussler, Littlestone and Warmuth (1987). The measure of performance we use is the expected difference between the total reward received by the learning agent and that received by an agent behaving optimally from the start. We call this expected difference the cumulative mistake of the agent and we require that it \"levels off\" at a reasonably fast rate as the learning progresses. We show that this model is polynomially equivalent to the PAC model of off-line reinforcement learning introduced in (Fiechter, 1994). In particular we show how an off-line PAC reinforcement learning algorithm can be transformed into an efficient on-line algorithm in a simple and practical way. An immediate consequence of this result is that the PAC algorithm for the general finite state-space reinforcement learning problem described in (Fiechter, 1994) can be transformed into a polynomial on-line al gorithm with guaranteed performances.",
"neighbors": [
591,
2053
],
"mask": "Train"
},
{
"node_id": 2691,
"label": 2,
"text": "Title: A map of the protein space An automatic hierarchical classification of all protein sequences \nAbstract: We investigate the space of all protein sequences. We combine the standard measures of similarity (SW, FASTA, BLAST), to associate with each sequence an exhaustive list of neighboring sequences. These lists induce a (weighted directed) graph whose vertices are the sequences. The weight of an edge connecting two sequences represents their degree of similarity. This graph encodes much of the fundamental properties of the sequence space. We look for clusters of related proteins in this graph. These clusters correspond to strongly connected sets of vertices. Two main ideas underlie our work: i) Interesting homologies among proteins can be deduced by transitivity. ii) Transitivity should be applied restrictively in order to prevent unrelated proteins from clustering together. Our analysis starts from a very conservative classification, based on very significant similarities, that has many classes. Subsequently, classes are merged to include less significant similarities. Merging is performed via a novel two phase algorithm. First, the algorithm identifies groups of possibly related clusters (based on transitivity and strong connectivity) using local considerations, and merges them. Then, a global test is applied to identify nuclei of strong relationships within these groups of clusters, and the classification is refined accordingly. This process takes place at varying thresholds of statistical significance, where at each step the algorithm is applied on the classes of the previous classification, to obtain the next one, at the more permissive threshold. Consequently, a hierarchical organization of all proteins is obtained. The resulting classification splits the space of all protein sequences into well defined groups of proteins. The results show that the automatically induced sets of proteins are closely correlated with natural biological families and super families. The hierarchical organization reveals finer sub-families that make up known families of proteins as well as many interesting relations between protein families. The hierarchical organization proposed may be considered as the first map of the space of all protein sequences. An interactive web site including the results of our analysis has been constructed, and is now accessible through http://www.protomap.cs.huji.ac.il ",
"neighbors": [
1751
],
"mask": "Test"
},
{
"node_id": 2692,
"label": 0,
"text": "Title: Multi-Strategy Learning and Theory Revision \nAbstract: This paper presents the system WHY, which learns and updates a diagnostic knowledge base using domain knowledge and a set of examples. The a-priori knowledge consists of a causal model of the domain, stating the relationships among basic phenomena, and a body of phenomenological theory, describing the links between abstract concepts and their possible manifestations in the world. The phenomenological knowledge is used deductively, the causal model is used abductively and the examples are used inductively. The problems of imperfection and intractability of the theory are handled by allowing the system to make assumptions during its reasoning. In this way, robust knowledge can be learned with limited complexity and limited number of examples. The system works in a first order logic environment and has been applied in a real domain. ",
"neighbors": [
1370,
2038,
2172
],
"mask": "Validation"
},
{
"node_id": 2693,
"label": 3,
"text": "Title: FROM METROPOLIS TO DIFFUSIONS: GIBBS STATES AND OPTIMAL SCALING \nAbstract: This paper investigates the behaviour of the random walk Metropolis algorithm in high dimensional problems. Here we concentrate on the case where the components in the target density is a spatially homogeneous Gibbs distribution with finite range. The performance of the algorithm is strongly linked to the presence or absence of phase transition for the Gibbs distribution; the convergence time being approximately linear in dimension for problems where phase transition is not present. Related to this, there is an optimal way to scale the variance of the proposal distribution in order to maximise the speed of convergence of the algorithm. This turns out to involve scaling the variance of the proposal as the reciprocal of dimension (at least in the phase transition free case). Moreover the actual optimal scaling can be characterised in terms of the overall acceptance rate of the algorithm, the maximising value being 0:234, the value as predicted by studies on simpler classes of target density. The results are proved in the framework of a weak convergence result, which shows that the algorithm actually behaves like an infinite dimensional diffusion process in high dimensions. 1. Introduction and discussion of results ",
"neighbors": [
2025
],
"mask": "Train"
},
{
"node_id": 2694,
"label": 6,
"text": "Title: Partition-Based Uniform Error Bounds \nAbstract: This paper develops probabilistic bounds on out-of-sample error rates for several classifiers using a single set of in-sample data. The bounds are based on probabilities over partitions of the union of in-sample and out-of-sample data into in-sample and out-of-sample data sets. The bounds apply when in-sample and out-of-sample data are drawn from the same distribution. Partition-based bounds are stronger than VC-type bounds, but they require more computation. ",
"neighbors": [
571,
1762,
2331,
2495
],
"mask": "Train"
},
{
"node_id": 2695,
"label": 6,
"text": "Title: A Polynomial Time Incremental Algorithm for Regular Grammar Inference \nAbstract: This paper develops probabilistic bounds on out-of-sample error rates for several classifiers using a single set of in-sample data. The bounds are based on probabilities over partitions of the union of in-sample and out-of-sample data into in-sample and out-of-sample data sets. The bounds apply when in-sample and out-of-sample data are drawn from the same distribution. Partition-based bounds are stronger than VC-type bounds, but they require more computation. ",
"neighbors": [
2057,
2198,
2696
],
"mask": "Train"
},
{
"node_id": 2696,
"label": 6,
"text": "Title: Learning DFA from Simple Examples \nAbstract: We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an open research question posed in Pitt's seminal paper: Are DFA's PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution? Our approach uses the RPNI algorithm for learning DFA from labeled examples. In particular, we describe an efficient learning algorithm for exact learning of the target DFA with high probability when a bound on the number of states (N ) of the target DFA is known in advance. When N is not known, we show how this algorithm can be used for efficient PAC learning of DFAs. ",
"neighbors": [
98,
672,
2036,
2695
],
"mask": "Train"
},
{
"node_id": 2697,
"label": 3,
"text": "Title: Belief Maintenance with Probabilistic Logic \nAbstract: We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an open research question posed in Pitt's seminal paper: Are DFA's PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution? Our approach uses the RPNI algorithm for learning DFA from labeled examples. In particular, we describe an efficient learning algorithm for exact learning of the target DFA with high probability when a bound on the number of states (N ) of the target DFA is known in advance. When N is not known, we show how this algorithm can be used for efficient PAC learning of DFAs. ",
"neighbors": [
1759,
2288,
2698,
2700
],
"mask": "Train"
},
{
"node_id": 2698,
"label": 3,
"text": "Title: Forecasting Glucose Concentration in Diabetic Patients using Ignorant Belief Networks \nAbstract: We present a framework for learning DFA from simple examples. We show that efficient PAC learning of DFA is possible if the class of distributions is restricted to simple distributions where a teacher might choose examples based on the knowledge of the target concept. This answers an open research question posed in Pitt's seminal paper: Are DFA's PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution? Our approach uses the RPNI algorithm for learning DFA from labeled examples. In particular, we describe an efficient learning algorithm for exact learning of the target DFA with high probability when a bound on the number of states (N ) of the target DFA is known in advance. When N is not known, we show how this algorithm can be used for efficient PAC learning of DFAs. ",
"neighbors": [
1759,
2697,
2700
],
"mask": "Train"
},
{
"node_id": 2699,
"label": 3,
"text": "Title: EXACT BOUND FOR THE CONVERGENCE OF METROPOLIS CHAINS \nAbstract: In this note, we present a calculation which gives us the exact bound for the convergence of independent Metropolis chains in a finite state space. Metropolis chain, convergence rate, Markov chain Monte Carlo ",
"neighbors": [
2153
],
"mask": "Train"
},
{
"node_id": 2700,
"label": 3,
"text": "Title: Probabilistic Reasoning under Ignorance \nAbstract: In this note, we present a calculation which gives us the exact bound for the convergence of independent Metropolis chains in a finite state space. Metropolis chain, convergence rate, Markov chain Monte Carlo ",
"neighbors": [
2697,
2698
],
"mask": "Train"
},
{
"node_id": 2701,
"label": 2,
"text": "Title: Simple Synchrony Networks Learning to Parse Natural Language with Temporal Synchrony Variable Binding \nAbstract: The Simple Synchrony Network (SSN) is a new connectionist architecture, incorporating the insights of Temporal Synchrony Variable Binding (TSVB) into Simple Recurrent Networks. The use of TSVB means SSNs can output representations of structures, and can learn generalisations over the constituents of these structures (as required by systematicity). This paper describes the SSN and an associated training algorithm, and demonstrates SSNs' generalisation abilities through results from training SSNs to parse real natural language sentences. ",
"neighbors": [
2247,
2263
],
"mask": "Train"
},
{
"node_id": 2702,
"label": 1,
"text": "Title: An Evolutionary Method to Find Good Building-Blocks for Architectures of Artificial Neural Networks \nAbstract: This paper deals with the combination of Evolutionary Algorithms and Artificial Neural Networks (ANN). A new method is presented, to find good building-blocks for architectures of Artificial Neural Networks. The method is based on Cellular Encoding, a representation scheme by F. Gruau, and on Genetic Programming by J. Koza. First it will be shown that a modified Cellular Encoding technique is able to find good architectures even for non-boolean networks. With the help of a graph-database and a new graph-rewriting method, it is secondly possible to build architectures from modular structures. The information about building-blocks for architectures is obtained by statistically analyzing the data in the graph-database. Simulation results for two real world problems are given.",
"neighbors": [
881,
2624
],
"mask": "Train"
},
{
"node_id": 2703,
"label": 1,
"text": "Title: learning easier tasks. More work is necessary in order to determine more precisely the relationship\nAbstract: We have attempted to obtain a stronger correlation between the relationship between G 0 and G 1 and performance. This has included studying the variance in the fitnesses of the members of the population, as well as observing the rate of convergence of the GP with respect to G 1 when a population was evolved for G 0 . 13 Unfortunately, we have not yet been able to obtain a significant correlation. In future work, we plan to to track the genetic diversity (we have only considered phenotypic variance so far) of populations in order to shed some light on the underlying mechanism for priming. One factor that has made this analysis difficult so far is our use of genetic programming, for which the space of genotypes is very large, (i.e., there are many redundant solutions), and for which the neighborhood structure is less easily intuited than that of a standard genetic algorithm. Since there is every reason to believe that the underlying mechanism of incremental evolution is largely independent of the peculiarities of genetic programming, we are currently investigating the incremental evolution mechanism using genetic algorithms with fixed-length genotypes. This should enable a better understanding of the mechanism. Ultimately, we will scale up this research effort to analyze incremental evolution with more than one transition between test cases. This will involve many open issues regarding the optimization of the transition schedule between test cases. 13 We performed the following experiment: Let F it(I; G) be the fitness value of a genetic program I according to the evaluation function G, and Best Of(P op; t; G) be the member I fl of population P op at time t with highest fitness according to G | in other words, I fl = Best Of (P op; t; G) maximizes F it(I; G) over all I 2 P op. A population P op 0 was evolved in the usual manner using evaluation function G 0 for t = 25 generations. However, at each generation 1 i 25 we also evaluated the current population using evaluation function G 1 , and recorded the value of F it(Best Of (P op; i; G 1 ); G 1 ). In other words, we evolved the population using G 0 as the evaluation function, but at every generation we also computed the fitness of the best individual in the population according to G 1 and saved this value. Using the same random seed and control parameters, we then evolved a population P op 1 for t = 30 generations using G 1 as the evaluation function (note that at generation 0, P op 1 is identical to P op 0 ). For all values of t, we compared F it(Best Of (P op 0 ; t; G 1 ); G 1 ) with F it(Best Of (P op 1 ; t; G 1 ); G 1 ). in order to better formalize and exploit this notion of domain difficulty.",
"neighbors": [
1221,
1409,
2200
],
"mask": "Test"
},
{
"node_id": 2704,
"label": 1,
"text": "Title: A Genome Compiler for High Performance Genetic Programming \nAbstract: Genetic Programming is very computationally expensive. For most applications, the vast majority of time is spent evaluating candidate solutions, so it is desirable to make individual evaluation as efficient as possible. We describe a genome compiler which compiles s-expressions to machine code, resulting in significant speedup of individual evaluations over standard GP systems. Based on performance results with symbolic regression, we show that the execution of the genome compiler system is comparable to the fastest alternative GP systems. We also demonstrate the utility of compilation on a real-world problem, lossless image compression. A somewhat surprising result is that in our test domains, the overhead of compilation is negligible. ",
"neighbors": [
209,
2407
],
"mask": "Train"
},
{
"node_id": 2705,
"label": 1,
"text": "Title: The MAX Problem for Genetic Programming Highlighting an Adverse Interaction between the Crossover Operator and\nAbstract: The Crossover operator is common to most implementations of Genetic Programming (GP). Another, usually unavoidable, factor is some form of restriction on the size of trees in the GP population. This paper concentrates on the interaction between the Crossover operator and a restriction on tree depth demonstrated by the MAX problem, which involves returning the largest possible value for given function and terminal sets. Some characteristics and inadequacies of Crossover in `normal' use are highlighted and discussed. Subtree discovery and movement takes place mostly near the leaf nodes, with nodes near the root left untouched. Diversity drops quickly to zero near the root node in the tree population. GP is then unable to create `fitter' trees via the crossover operator, leaving a Mutation operator as the only common, but ineffective, route to discovery of `fitter' trees. ",
"neighbors": [
1784,
1839,
1840,
2216
],
"mask": "Train"
},
{
"node_id": 2706,
"label": 0,
"text": "Title: Functional Representation as Design Rationale \nAbstract: Design rationale is a record of design activity: of alternatives available, choices made, the reasons for them, and explanations of how a proposed design is intended to work. We describe a representation called the Functional Representation (FR) that has been used to represent how a device's functions arise causally from the functions of its components and their interconnections. We propose that FR can provide the basis for capturing the causal aspects of the design rationale. We briefly discuss the use of FR for a number of tasks in which we would expect the design rationale to be useful: generation of diagnostic knowledge, design verification and redesign. ",
"neighbors": [
1046,
1138,
1640,
1752
],
"mask": "Validation"
},
{
"node_id": 2707,
"label": 2,
"text": "Title: Human Face Detection in Visual Scenes \nAbstract: We present a neural network-based face detection system. A retinally connected neural network examines small windows of an image, and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We use a bootstrap algorithm for training, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting non-face training examples, which must be chosen to span the entire space of non-face images. Comparisons with another state-of-the-art face detection system are presented; our system has better performance in terms of detection and false-positive rates.",
"neighbors": [
774,
1389,
2344
],
"mask": "Train"
}
]