Book Details
Artificial Intelligence: A Modern Approach [43M] | Baidu Netdisk | verified working | PDF download
  • Artificial Intelligence: A Modern Approach

  • Publisher: 读买天下图书专营店
  • Publication date: 2011-07
  • Popularity: 6897
  • Listed: 2024-06-30 09:08:33
  • Price: 0.0
Book Download
Book Preview
Disclaimer

This site respects copyrights and author's rights within their term of validity. All resources come from sharing by internet users or from cloud-drive links; any resource found to involve infringement will be removed immediately. We ask all users to help monitor and report problems. If your rights have been infringed, please contact the site administrator or email ebook666@outlook.com, and the site will correct the issue promptly.

Content Introduction

  Basic product information; please refer to the details below.
Product name: Artificial Intelligence: A Modern Approach (3rd Edition) (Foreign Textbook Series for University Computer Education, English reprint edition)
Authors: Stuart J. Russell, Peter Norvig
List price: 158.0
Publisher: Tsinghua University Press
Publication date: 2011-07-01
ISBN: 9787302252955
Printing: 1
Edition: 1
Binding:
Format: large 32mo (大32开)

  Table of Contents
Ⅰ artificial intelligence
1 introduction
1.1 what is ai?
1.2 the foundations of artificial intelligence
1.3 the history of artificial intelligence
1.4 the state of the art
1.5 summary, bibliographical and historical notes, exercises
2 intelligent agents
2.1 agents and environments
2.2 good behavior: the concept of rationality
2.3 the nature of environments
2.4 the structure of agents
2.5 summary, bibliographical and historical notes, exercises
Ⅱ problem-solving
3 solving problems by searching
3.1 problem-solving agents
3.2 example problems
3.3 searching for solutions
3.4 uninformed search strategies
3.5 informed (heuristic) search strategies
3.6 heuristic functions
3.7 summary, bibliographical and historical notes, exercises
4 beyond classical search
4.1 local search algorithms and optimization problems
4.2 local search in continuous spaces
4.3 searching with nondeterministic actions
4.4 searching with partial observations
4.5 online search agents and unknown environments
4.6 summary, bibliographical and historical notes, exercises
5 adversarial search
5.1 games
5.2 optimal decisions in games
5.3 alpha-beta pruning
5.4 imperfect real-time decisions
5.5 stochastic games
5.6 partially observable games
5.7 state-of-the-art game programs
5.8 alternative approaches
5.9 summary, bibliographical and historical notes, exercises
6 constraint satisfaction problems
6.1 defining constraint satisfaction problems
6.2 constraint propagation: inference in csps
6.3 backtracking search for csps
6.4 local search for csps
6.5 the structure of problems
6.6 summary, bibliographical and historical notes, exercises
Ⅲ knowledge, reasoning, and planning
7 logical agents
7.1 knowledge-based agents
7.2 the wumpus world
7.3 logic
7.4 propositional logic: a very simple logic
7.5 propositional theorem proving
7.6 effective propositional model checking
7.7 agents based on propositional logic
7.8 summary, bibliographical and historical notes, exercises
8 first-order logic
8.1 representation revisited
8.2 syntax and semantics of first-order logic
8.3 using first-order logic
8.4 knowledge engineering in first-order logic
8.5 summary, bibliographical and historical notes, exercises
9 inference in first-order logic
9.1 propositional vs. first-order inference
9.2 unification and lifting
9.3 forward chaining
9.4 backward chaining
9.5 resolution
9.6 summary, bibliographical and historical notes, exercises
10 classical planning
10.1 definition of classical planning
10.2 algorithms for planning as state-space search
10.3 planning graphs
10.4 other classical planning approaches
10.5 analysis of planning approaches
10.6 summary, bibliographical and historical notes, exercises
11 planning and acting in the real world
11.1 time, schedules, and resources
11.2 hierarchical planning
11.3 planning and acting in nondeterministic domains
11.4 multiagent planning
11.5 summary, bibliographical and historical notes, exercises
12 knowledge representation
12.1 ontological engineering
12.2 categories and objects
12.3 events
12.4 mental events and mental objects
12.5 reasoning systems for categories
12.6 reasoning with default information
12.7 the internet shopping world
12.8 summary, bibliographical and historical notes, exercises
Ⅳ uncertain knowledge and reasoning
13 quantifying uncertainty
13.1 acting under uncertainty
13.2 basic probability notation
13.3 inference using full joint distributions
13.4 independence
13.5 bayes' rule and its use
13.6 the wumpus world revisited
13.7 summary, bibliographical and historical notes, exercises
14 probabilistic reasoning
14.1 representing knowledge in an uncertain domain
14.2 the semantics of bayesian networks
14.3 efficient representation of conditional distributions
14.4 exact inference in bayesian networks
14.5 approximate inference in bayesian networks
14.6 relational and first-order probability models
14.7 other approaches to uncertain reasoning
14.8 summary, bibliographical and historical notes, exercises
15 probabilistic reasoning over time
15.1 time and uncertainty
15.2 inference in temporal models
15.3 hidden markov models
15.4 kalman filters
15.5 dynamic bayesian networks
15.6 keeping track of many objects
15.7 summary, bibliographical and historical notes, exercises
16 making simple decisions
16.1 combining beliefs and desires under uncertainty
16.2 the basis of utility theory
16.3 utility functions
16.4 multiattribute utility functions
16.5 decision networks
16.6 the value of information
16.7 decision-theoretic expert systems
16.8 summary, bibliographical and historical notes, exercises
17 making complex decisions
17.1 sequential decision problems
17.2 value iteration
17.3 policy iteration
17.4 partially observable mdps
17.5 decisions with multiple agents: game theory
17.6 mechanism design
17.7 summary, bibliographical and historical notes, exercises
V learning
18 learning from examples
18.1 forms of learning
18.2 supervised learning
18.3 learning decision trees
18.4 evaluating and choosing the best hypothesis
18.5 the theory of learning
18.6 regression and classification with linear models
18.7 artificial neural networks
18.8 nonparametric models
18.9 support vector machines
18.10 ensemble learning
18.11 practical machine learning
18.12 summary, bibliographical and historical notes, exercises
19 knowledge in learning
19.1 a logical formulation of learning
19.2 knowledge in learning
19.3 explanation-based learning
19.4 learning using relevance information
19.5 inductive logic programming
19.6 summary, bibliographical and historical notes, exercises
20 learning probabilistic models
20.1 statistical learning
20.2 learning with complete data
20.3 learning with hidden variables: the em algorithm
20.4 summary, bibliographical and historical notes, exercises
21 reinforcement learning
21.1 introduction
21.2 passive reinforcement learning
21.3 active reinforcement learning
21.4 generalization in reinforcement learning
21.5 policy search
21.6 applications of reinforcement learning
21.7 summary, bibliographical and historical notes, exercises
VI communicating, perceiving, and acting
22 natural language processing
22.1 language models
22.2 text classification
22.3 information retrieval
22.4 information extraction
22.5 summary, bibliographical and historical notes, exercises
23 natural language for communication
23.1 phrase structure grammars
23.2 syntactic analysis (parsing)
23.3 augmented grammars and semantic interpretation
23.4 machine translation
23.5 speech recognition
23.6 summary, bibliographical and historical notes, exercises
24 perception
24.1 image formation
24.2 early image-processing operations
24.3 object recognition by appearance
24.4 reconstructing the 3d world
24.5 object recognition from structural information
24.6 using vision
24.7 summary, bibliographical and historical notes, exercises
25 robotics
25.1 introduction
25.2 robot hardware
25.3 robotic perception
25.4 planning to move
25.5 planning uncertain movements
25.6 moving
25.7 robotic software architectures
25.8 application domains
25.9 summary, bibliographical and historical notes, exercises
VII conclusions
26 philosophical foundations
26.1 weak ai: can machines act intelligently?
26.2 strong ai: can machines really think?
26.3 the ethics and risks of developing artificial intelligence
26.4 summary, bibliographical and historical notes, exercises
27 ai: the present and future
27.1 agent components
27.2 agent architectures
27.3 are we going in the right direction?
27.4 what if ai does succeed?

a mathematical background
a.1 complexity analysis and o() notation
a.2 vectors, matrices, and linear algebra
a.3 probability distributions
b notes on languages and algorithms
b.1 defining languages with backus-naur form (bnf)
b.2 describing algorithms with pseudocode
b.3 online help
bibliography
index