Inside Case-Based Explanation - 9780805810295
List Price:
$26.99
- Availability: Confirm prior to ordering
- Branding: minimum 50 pieces (add’l costs below)
- Check Freight Rates (branded products only)
Branding Options, Availability & Lead Times
- 1-Color Imprint: $2.00 ea.
- Promo-Page Insert: $2.50 ea. (full-color printed, single-sided page)
- Belly-Band Wrap: $2.50 ea. (full-color printed)
- Set-Up Charge: $45 per decoration
- Availability: Product availability changes daily, so please confirm your quantity is available prior to placing an order.
- Branded Products: allow 10 business days from proof approval for production. Branding options may be limited or unavailable based on product design or cover artwork.
- Unbranded Products: allow 3-5 business days for shipping. All Unbranded items receive FREE ground shipping in the US. Inquire for international shipping.
- RETURNS/CANCELLATIONS: All orders, branded or unbranded, are NON-CANCELLABLE and NON-RETURNABLE once a purchase order has been received.
Product Details
Author:
Roger C. Schank, Alex Kass, Christopher K. Riesbeck
Format:
Paperback
Pages:
436
Publisher:
Taylor & Francis (May 1, 1994)
Language:
English
ISBN-13:
9780805810295
ISBN-10:
0805810293
Weight:
28.625 oz
Dimensions:
6" x 9"
Series:
Artificial Intelligence Series
Case Pack:
89
As low as:
$25.64
Publisher Identifier:
P-CRC
Discount Code:
H
Pub Discount:
30
Audience:
Professional and scholarly
Country of Origin:
United States
Imprint:
Psychology Press
Overview
This book is the third volume in a series that provides a hands-on perspective on the evolving theories associated with Roger Schank and his students. The primary focus of this volume is on constructing explanations. All of the chapters relate to the problem of building computer programs that can develop hypotheses about what might have caused an observed event. Because most researchers in natural language processing prefer not to work on inference, memory, and learning issues, their sample text fragments are typically chosen carefully to de-emphasize the need for non-text-related reasoning.
The ability to come up with hypotheses about what is really going on in a story is a hallmark of human intelligence. The biggest difference between truly intelligent readers and less intelligent ones is the extent to which the reader can go beyond merely understanding the explicit statements being communicated. Achieving a creative level of understanding means developing hypotheses about questions for which there may be no conclusively correct answer at all. The focus of the lab, during the period documented in this book, was to work on getting a computer program to do that.
The volume adopts a case-based approach to the construction of explanations, which suggests that the main steps in the process of explaining a given anomaly are as follows:
* Retrieve an explanation that might be relevant to the anomaly.
* Evaluate whether the retrieved explanation makes sense when applied to the current anomaly.
* Adapt: if the retrieved explanation doesn't fit the anomaly perfectly, modify it to produce a new variant that fits better.
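The retrieve-evaluate-adapt cycle described above can be sketched in a few lines of code. This is a minimal illustration only: the case library, the similarity heuristic, and the adaptation rule are hypothetical placeholders, not the actual programs discussed in the book.

```python
def similarity(case, anomaly):
    # Toy heuristic: fraction of the anomaly's features the case covers.
    return len(case["features"] & anomaly) / len(anomaly)

def adapt(case, anomaly):
    # Toy tweak rule: extend the case's features to cover the anomaly.
    return {"features": case["features"] | anomaly,
            "explanation": case["explanation"] + " (adapted)"}

def explain(anomaly, case_library, fit_threshold=0.8):
    """Explain an anomaly (a set of observed features) by case-based reasoning."""
    # 1. Retrieve: find the stored explanation most similar to the anomaly.
    best = max(case_library, key=lambda case: similarity(case, anomaly))
    # 2. Evaluate: does the retrieved explanation apply well enough as-is?
    if similarity(best, anomaly) >= fit_threshold:
        return best
    # 3. Adapt: produce a better-fitting variant of the retrieved case.
    return adapt(best, anomaly)
```

For example, given a library containing only the case "the ground is wet because it rained", an anomaly that also mentions a sprinkler scores below the fit threshold and triggers the adaptation step.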