
Evals for AI Engineers (Systematically Measuring and Improving AI Applications)

List Price: $79.99
SKU: 9798341660724
Minimum Purchase: 25 unit(s)
Expected Release Date: December 1, 2026
  • Availability: Confirm prior to ordering
  • Branding: minimum 50 pieces (additional costs below)
  • Check Freight Rates (branded products only)

Branding Options, Availability & Lead Times

  • 1-Color Imprint: $2.00 ea.
  • Promo-Page Insert: $2.50 ea. (full-color printed, single-sided page)
  • Belly-Band Wrap: $2.50 ea. (full-color printed)
  • Set-Up Charge: $45 per decoration
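As an illustration using the rates above, the minimum branded run of 50 copies with a 1-color imprint works out to 50 × $2.00 + $45 setup = $145 in decoration charges, in addition to the book price.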
FULL DETAILS
  • Availability: Product availability changes daily, so please confirm your quantity is available prior to placing an order.
  • Branded Products: allow 10 business days from proof approval for production. Branding options may be limited or unavailable based on product design or cover artwork.
  • Unbranded Products: allow 3-5 business days for shipping. All Unbranded items receive FREE ground shipping in the US. Inquire for international shipping.
  • RETURNS/CANCELLATIONS: All orders, branded or unbranded, are NON-CANCELLABLE and NON-RETURNABLE once a purchase order has been received.
  • Product Details

    Author: Shreya Shankar, Hamel Husain
    Format: Paperback
    Pages: 225
    Publisher: O'Reilly Media (December 1, 2026)
    Imprint: O'Reilly Media
    Release Date: December 1, 2026
    Language: English
    ISBN-13: 9798341660724
    Weight: 16 oz
    Dimensions: 7" x 9.19"
    List Price: $79.99
    Country of Origin: United States
    Pub Discount: 60
    Case Pack: 22
    As low as: $68.79
    Publisher Identifier: P-PER
    Discount Code: C
  • Overview

    Stop relying on guesswork to understand how your AI applications are performing. Evals for AI Engineers equips you with the proven tools and processes you need to systematically test, measure, and improve the reliability of AI applications, especially those built on LLMs. Written by AI engineers with extensive experience in real-world consulting (across 35+ AI products) and cutting-edge research, this practical resource will help you move from assumptions to robust, data-driven evaluation.

    Ideal for software engineers, technical product managers, and technical leads, this hands-on guide dives into techniques like error analysis, synthetic data generation, automated LLM-as-a-judge systems, production monitoring, and cost optimization. You'll learn how to debug LLM behavior, design test suites based on synthetic and real data, and build data flywheels that improve over time.

    Whether you're starting without user data or scaling a production system, you'll gain the skills to build AI you can trust—with processes that are repeatable, measurable, and aligned with real-world outcomes.

    • Run systematic error analyses to uncover, categorize, and prioritize failure modes
    • Build, implement, and automate evaluation pipelines using code-based and LLM-based metrics (see the sketch after this list)
    • Optimize AI performance and costs through smart evaluation and feedback loops
    • Apply key principles and techniques for monitoring AI applications in production
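
    The sketch below is a generic illustration of the second bullet's idea, combining a cheap code-based check with an LLM-as-a-judge metric; it is not an excerpt from the book. The EvalCase, code_metric, llm_judge, run_evals, and call_llm names are invented for this example, and call_llm is a hypothetical stand-in for whichever model client you actually use.

    # Minimal eval-pipeline sketch: one code-based metric plus one LLM-based judge.
    from dataclasses import dataclass

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for your model client (OpenAI, Anthropic, a local model, ...)."""
        raise NotImplementedError("wire this up to your LLM provider")

    @dataclass
    class EvalCase:
        question: str      # input sent to the application under test
        answer: str        # the application's actual output
        must_contain: str  # gold fact the answer must mention

    def code_metric(case: EvalCase) -> bool:
        # Cheap, deterministic check: did the answer include the required fact?
        return case.must_contain.lower() in case.answer.lower()

    def llm_judge(case: EvalCase) -> bool:
        # LLM-as-a-judge: ask a grader model for a PASS/FAIL verdict on the answer.
        prompt = (
            "You are grading an AI assistant's answer.\n"
            f"Question: {case.question}\n"
            f"Answer: {case.answer}\n"
            "Reply with exactly PASS or FAIL: is the answer correct and relevant?"
        )
        return call_llm(prompt).strip().upper().startswith("PASS")

    def run_evals(cases: list[EvalCase]) -> dict[str, float]:
        # Aggregate both metrics into pass rates you can track across releases.
        total = len(cases)
        return {
            "code_metric_pass_rate": sum(code_metric(c) for c in cases) / total,
            "llm_judge_pass_rate": sum(llm_judge(c) for c in cases) / total,
        }

    In practice you would run these pass rates on every release and spot-check the LLM judge's verdicts against human labels before trusting them, which is the kind of workflow the book's evaluation chapters cover in depth.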