- Title
- A learning classifier system approach to relational reinforcement learning
- Creator
- Mellor, Drew
- Relation
- University of Newcastle Research Higher Degree Thesis
- Resource Type
- thesis
- Date
- 2008
- Description
- Research Doctorate - Doctor of Philosophy (PhD)
- Description
- Machine learning methods usually represent knowledge and hypotheses using attribute-value languages, principally because of their simplicity and demonstrated utility over a broad variety of problems. However, attribute-value languages have limited expressive power, and for some problems the target function can only be expressed as an exhaustive conjunction of specific cases. Such problems are handled better with inductive logic programming (ILP) or relational reinforcement learning (RRL), which employ more expressive languages, typically languages over first-order logic. Methods developed within these fields generally extend attribute-value algorithms; however, many attribute-value algorithms that are potentially viable for RRL, the younger of the two fields, remain to be extended. This thesis investigates an approach to RRL derived from the learning classifier system XCS. In brief, the new system, FOXCS, generates, evaluates, and evolves a population of "condition-action" rules that are definite clauses over first-order logic. The rules are typically comprehensible enough to be understood by humans and can be inspected to determine the acquired principles. Key properties of FOXCS, inherited from XCS, are that it is general (it applies to arbitrary Markov decision processes), model-free (rewards and state transitions are "black box" functions), and "tabula rasa" (the initial policy can be unspecified). Furthermore, in contrast to decision tree learning, its rule-based approach is ideal for incrementally learning expressions over first-order logic, a valuable characteristic for an RRL system. Perhaps the most novel aspect of FOXCS is its inductive component, which synthesizes evolutionary computation and first-order logic refinement for incremental learning. New evolutionary operators were developed because previous combinations of evolutionary computation and first-order logic were non-incremental. The effectiveness of the inductive component was demonstrated empirically by benchmarking on ILP tasks, where FOXCS produced hypotheses of accuracy comparable to that of several well-known ILP algorithms. Further benchmarking on RRL tasks found that the optimality of the policies learnt was at least comparable to that of existing RRL systems. Finally, a significant advantage of its use of variables in rules was demonstrated: unlike RRL systems that did not use variables, FOXCS, with appropriate extensions, learnt scalable policies that were genuinely independent of the dimensionality of the task environment.
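- The abstract does not give implementation details, but the core representation it describes (XCS-style classifiers whose conditions and actions are first-order literals with variables) can be illustrated. The following is a minimal Python sketch under that assumption; all class and field names are hypothetical, not FOXCS's actual code, and the XCS bookkeeping (prediction, error, fitness) is reduced to bare attributes.

```python
# Illustrative sketch only: an XCS-style classifier whose condition is a
# conjunction of first-order literals, in the spirit of FOXCS rules for a
# blocks-world task. Names and defaults are assumptions, not the thesis code.
from dataclasses import dataclass


@dataclass(frozen=True)
class Literal:
    """A first-order literal, e.g. on(X, Y) or clear(a)."""
    predicate: str           # e.g. "on", "clear", "move"
    args: tuple              # variables ("X", "Y") or constants ("a", "b")

    def __str__(self) -> str:
        return f"{self.predicate}({', '.join(self.args)})"


@dataclass
class Classifier:
    """A condition-action rule read as a definite clause: action :- condition."""
    condition: tuple         # conjunction of Literals (the clause body)
    action: Literal          # the action literal (the clause head)
    prediction: float = 10.0 # XCS-style payoff prediction
    error: float = 0.0       # estimate of the prediction error
    fitness: float = 0.01    # accuracy-based fitness used by the GA

    def __str__(self) -> str:
        body = ", ".join(str(lit) for lit in self.condition)
        return f"{self.action} :- {body}."


# Example: "move a clear block X onto another clear block Y".
# Because the rule uses variables rather than named blocks, it applies
# regardless of how many blocks the environment contains.
rule = Classifier(
    condition=(Literal("clear", ("X",)), Literal("clear", ("Y",))),
    action=Literal("move", ("X", "Y")),
)
print(rule)  # move(X, Y) :- clear(X), clear(Y).
```

- The use of variables in the condition and action is what the abstract credits for scalability: a single clause like the one printed above covers every pair of clear blocks, independent of the dimensionality of the task environment.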
- Subject
- LCS; relational reinforcement learning; mutagenesis; blocks world
- Identifier
- uon:2506
- Identifier
- http://hdl.handle.net/1959.13/29285
- Rights
- Copyright 2008 Drew Mellor
- Language
- eng
- Full Text
File | Description | Size | Format
---|---|---|---
ATTACHMENT01 | Abstract | 113 KB | Adobe Acrobat PDF
ATTACHMENT02 | Thesis | 1 MB | Adobe Acrobat PDF