Tuesday, 22 April 2008

Intelligent Systems for Engineers and Scientists, Second Edition

The author

Adrian Hopgood earned his BSc from Bristol University, his PhD from Oxford
University, and his MBA from the Open University. After completing his PhD in
1984, he spent two years developing applied intelligent systems for Systems
Designers PLC. He subsequently joined the academic staff of the Open
University, where he established his research in intelligent systems and their
application in engineering and science. Between 1990 and 1992 he worked for
Telstra Research Laboratories in Australia, where he contributed to the
development of intelligent systems for telecommunications applications.
Following his return to the Open University, he led the development of the
course T396 – Artificial Intelligence for Technology. He has further developed
his interests in intelligent systems and has pioneered the development of the
blackboard system ARBS.

Chapter one: Introduction
1.1 Intelligent systems
1.2 Knowledge-based systems
1.3 The knowledge base
1.4 Deduction, abduction, and induction
1.5 The inference engine
1.6 Declarative and procedural programming
1.7 Expert systems
1.8 Knowledge acquisition
1.9 Search
1.10 Computational intelligence
1.11 Integration with other software
References
Further reading

Chapter two: Rule-based systems
2.1 Rules and facts
2.2 A rule-based system for boiler control
2.3 Rule examination and rule firing
2.4 Maintaining consistency
2.5 The closed-world assumption
2.6 Use of variables within rules
2.7 Forward-chaining (a data-driven strategy)
2.7.1 Single and multiple instantiation of variables
2.7.2 Rete algorithm
2.8 Conflict resolution
2.8.1 First come, first served
2.8.2 Priority values
2.8.3 Metarules
2.9 Backward-chaining (a goal-driven strategy)
2.9.1 The backward-chaining mechanism
2.9.2 Implementation of backward-chaining
2.9.3 Variations of backward-chaining
2.10 A hybrid strategy
2.11 Explanation facilities
2.12 Summary
References
Further reading
Chapter three: Dealing with uncertainty
3.1 Sources of uncertainty
3.2 Bayesian updating
3.2.1 Representing uncertainty by probability
3.2.2 Direct application of Bayes’ theorem
3.2.3 Likelihood ratios
3.2.4 Using the likelihood ratios
3.2.5 Dealing with uncertain evidence
3.2.6 Combining evidence
3.2.7 Combining Bayesian rules with production rules
3.2.8 A worked example of Bayesian updating
3.2.9 Discussion of the worked example
3.2.10 Advantages and disadvantages of Bayesian updating
3.3 Certainty theory
3.3.1 Introduction
3.3.2 Making uncertain hypotheses
3.3.3 Logical combinations of evidence
3.3.4 A worked example of certainty theory
3.3.5 Discussion of the worked example
3.3.6 Relating certainty factors to probabilities
3.4 Possibility theory: fuzzy sets and fuzzy logic
3.4.1 Crisp sets and fuzzy sets
3.4.2 Fuzzy rules
3.4.3 Defuzzification
3.5 Other techniques
3.5.1 Dempster–Shafer theory of evidence
3.5.2 Inferno
3.6 Summary
References
Further reading
Chapter four: Object-oriented systems
4.1 Objects and frames
4.2 An illustrative example
4.3 Introducing OOP
4.4 Data abstraction
4.4.1 Classes
4.4.2 Instances
4.4.3 Attributes (or data members)
4.4.4 Operations (or methods or member functions)
4.4.5 Creation and deletion of instances
4.5 Inheritance
4.5.1 Single inheritance
4.5.2 Multiple and repeated inheritance
4.5.3 Specialization of methods
4.5.4 Browsers
4.6 Encapsulation
4.7 Unified Modeling Language (UML)
4.8 Dynamic (or late) binding
4.9 Message passing and function calls
4.9.1 Pseudovariables
4.9.2 Metaclasses
4.10 Type checking
4.11 Further aspects of OOP
4.11.1 Persistence
4.11.2 Concurrency
4.11.3 Overloading
4.11.4 Active values and daemons
4.12 Frame-based systems
4.13 Summary
References
Further reading
Chapter five: Intelligent agents
5.1 Characteristics of an intelligent agent
5.2 Agents and objects
5.3 Agent architectures
5.3.1 Logic-based architectures
5.3.2 Emergent behavior architectures
5.3.3 Knowledge-level architectures
5.3.4 Layered architectures
5.4 Multiagent systems
5.4.1 Benefits of a multiagent system
5.4.2 Building a multiagent system
5.4.3 Communication between agents
5.5 Summary
References
Further reading
Chapter six: Symbolic learning
6.1 Introduction
6.2 Learning by induction
6.2.1 Overview
6.2.2 Learning viewed as a search problem
6.2.3 Techniques for generalization and specialization
6.3 Case-based reasoning (CBR)
6.3.1 Storing cases
6.3.2 Retrieving cases
6.3.3 Adapting case histories
6.3.4 Dealing with mistaken conclusions
6.4 Summary
References
Further reading
Chapter seven: Optimization algorithms
7.1 Optimization
7.2 The search space
7.3 Searching the search space
7.4 Hill-climbing and gradient descent algorithms
7.4.1 Hill-climbing
7.4.2 Steepest gradient descent or ascent
7.4.3 Gradient-proportional descent
7.4.4 Conjugate gradient descent or ascent
7.5 Simulated annealing
7.6 Genetic algorithms
7.6.1 The basic GA
7.6.2 Selection
7.6.3 Gray code
7.6.4 Variable length chromosomes
7.6.5 Building block hypothesis
7.6.6 Selecting GA parameters
7.6.7 Monitoring evolution
7.6.8 Lamarckian inheritance
7.6.9 Finding multiple optima
7.6.10 Genetic programming
7.7 Summary
References
Further reading
Chapter eight: Neural networks
8.1 Introduction
8.2 Neural network applications
8.2.1 Nonlinear estimation
8.2.2 Classification
8.2.3 Clustering
8.2.4 Content-addressable memory
8.3 Nodes and interconnections
8.4 Single and multilayer perceptrons
8.4.1 Network topology
8.4.2 Perceptrons as classifiers
8.4.3 Training a perceptron
8.4.4 Hierarchical perceptrons
8.4.5 Some practical considerations
8.5 The Hopfield network
8.6 MAXNET
8.7 The Hamming network
8.8 Adaptive Resonance Theory (ART) networks
8.9 Kohonen self-organizing networks
8.10 Radial basis function networks
8.11 Summary
References
Further reading
Chapter nine: Hybrid systems
9.1 Convergence of techniques
9.2 Blackboard systems
9.3 Genetic-fuzzy systems
9.4 Neuro-fuzzy systems
9.5 Genetic-neural systems
9.6 Clarifying and verifying neural networks
9.7 Learning classifier systems
9.8 Summary
References
Further reading
Chapter ten: Tools and languages
10.1 A range of intelligent systems tools
10.2 Expert system shells
10.3 Toolkits and libraries
10.4 Artificial intelligence languages
10.4.1 Lists
10.4.2 Other data types
10.4.3 Programming environments
10.5 Lisp
10.5.1 Background
10.5.2 Lisp functions
10.5.3 A worked example
10.6 Prolog
10.6.1 Background
10.6.2 A worked example
10.6.3 Backtracking in Prolog
10.7 Comparison of AI languages
10.8 Summary
References
Further reading
Chapter eleven: Systems for interpretation and diagnosis
11.1 Introduction
11.2 Deduction and abduction for diagnosis
11.3 Depth of knowledge
11.3.1 Shallow knowledge
11.3.2 Deep knowledge
11.3.3 Combining shallow and deep knowledge
11.4 Model-based reasoning
11.4.1 The limitations of rules
11.4.2 Modeling function, structure, and state
11.4.3 Using the model
11.4.4 Monitoring
11.4.5 Tentative diagnosis
11.4.6 Fault simulation
11.4.7 Fault repair
11.4.8 Using problem trees
11.4.9 Summary of model-based reasoning
11.5 Case study: a blackboard system for interpreting ultrasonic images
11.5.1 Ultrasonic imaging
11.5.2 Knowledge sources in ARBS
11.5.3 Rules in ARBS
11.5.4 Inference engines in ARBS
11.5.5 The stages of image interpretation
11.5.6 The use of neural networks
11.5.7 Rules for verifying neural networks
11.6 Summary
References
Further reading
Chapter twelve: Systems for design and selection
12.1 The design process
12.2 Design as a search problem
12.3 Computer aided design
12.4 The product design specification (PDS): a telecommunications case study
12.4.1 Background
12.4.2 Alternative views of a network
12.4.3 Implementation
12.4.4 The classes
12.4.5 Summary of PDS case study
12.5 Conceptual design
12.6 Constraint propagation and truth maintenance
12.7 Case study: the design of a lightweight beam
12.7.1 Conceptual design
12.7.2 Optimization and evaluation
12.7.3 Detailed design
12.8 Design as a selection exercise
12.8.1 Overview
12.8.2 Merit indices
12.8.3 The polymer selection example
12.8.4 Two-stage selection
12.8.5 Constraint relaxation
12.8.6 A naive approach to scoring
12.8.7 A better approach to scoring
12.8.8 Case study: the design of a kettle
12.8.9 Reducing the search space by classification
12.9 Failure mode and effects analysis (FMEA)
12.10 Summary
References
Further reading
Chapter thirteen: Systems for planning
13.1 Introduction
13.2 Classical planning systems
13.3 STRIPS
13.3.1 General description
13.3.2 An example problem
13.3.3 A simple planning system in Prolog
13.4 Considering the side effects of actions
13.4.1 Maintaining a world model
13.4.2 Deductive rules
13.5 Hierarchical planning
13.5.1 Description
13.5.2 Benefits of hierarchical planning
13.5.3 Hierarchical planning with ABSTRIPS
13.6 Postponement of commitment
13.6.1 Partial ordering of plans
13.6.2 The use of planning variables
13.7 Job-shop scheduling
13.7.1 The problem
13.7.2 Some approaches to scheduling
13.8 Constraint-based analysis
13.8.1 Constraints and preferences
13.8.2 Formalizing the constraints
13.8.3 Identifying the critical sets of operations
13.8.4 Sequencing in the disjunctive case
13.8.5 Sequencing in the nondisjunctive case
13.8.6 Updating earliest start times and latest finish times
13.8.7 Applying preferences
13.8.8 Using constraints and preferences
13.9 Replanning and reactive planning
13.10 Summary
References
Further reading

Chapter fourteen: Systems for control
14.1 Introduction
14.2 Low-level control
14.2.1 Open-loop control
14.2.2 Feedforward control
14.2.3 Feedback control
14.2.4 First- and second-order models
14.2.5 Algorithmic control: the PID controller
14.2.6 Bang-bang control
14.3 Requirements of high-level (supervisory) control
14.4 Blackboard maintenance
14.5 Time-constrained reasoning
14.5.1 Prioritization of processes and knowledge sources
14.5.2 Approximation
14.5.3 Single and multiple instantiation
14.6 Fuzzy control
14.6.1 Crisp and fuzzy control
14.6.2 Firing fuzzy control rules
14.6.3 Defuzzification
14.6.4 Some practical examples of fuzzy controllers
14.7 The BOXES controller
14.7.1 The conventional BOXES algorithm
14.7.2 Fuzzy BOXES
14.8 Neural network controllers
14.8.1 Direct association of state variables with action variables
14.8.2 Estimation of critical state variables
14.9 Statistical process control (SPC)
14.9.1 Applications
14.9.2 Collecting the data
14.9.3 Using the data
14.10 Summary
References
Further reading
Chapter fifteen: Concluding remarks
15.1 Benefits
15.2 Implementation
15.3 Trends
References

PDF VERSION

$5.00 ONLY

E-Gold: 5175039

PayPal: mack_zie@yahoo.com

100% safe online business!

Honest business is the best way to success!
