Rearchitecting LLMs (Structural techniques for efficient models)
List Price:
$59.99
Expected release date: October 27, 2026
- Availability: Confirm prior to ordering
- Branding: minimum 50 pieces (additional costs below)
- Check Freight Rates (branded products only)
Branding Options, Availability & Lead Times
- 1-Color Imprint: $2.00 ea.
- Promo-Page Insert: $2.50 ea. (full-color printed, single-sided page)
- Belly-Band Wrap: $2.50 ea. (full-color printed)
- Set-Up Charge: $45 per decoration
- Availability: Product availability changes daily, so please confirm your quantity is available prior to placing an order.
- Branded Products: allow 10 business days from proof approval for production. Branding options may be limited or unavailable based on product design or cover artwork.
- Unbranded Products: allow 3-5 business days for shipping. All unbranded items receive FREE ground shipping in the US. Inquire for international shipping.
- RETURNS/CANCELLATIONS: All orders, branded or unbranded, are NON-CANCELLABLE and NON-RETURNABLE once a purchase order has been received.
Product Details
Author:
Pere Martra
Format:
Paperback
Pages:
380
Publisher:
Manning (October 27, 2026)
Imprint:
Manning
Release Date:
October 27, 2026
Language:
English
ISBN-13:
9781633434332
ISBN-10:
1633434338
Weight:
16.05 oz
Dimensions:
7.375" x 9.25"
List Price:
$59.99
Pub Discount:
37%
As low as:
$56.99
Publisher Identifier:
P-SS
Discount Code:
H
Overview
Get a free eBook (PDF or ePub) from Manning as well as access to the online liveBook format (and its AI assistant that will answer your questions in any language) when you purchase the print book.
By default, general purpose LLMs are not optimized for specific domains and business goals. Using techniques like specialized fine-tuning, pruning unnecessary neural components, and knowledge distillation, you can rearchitect your models to cost less, run faster, and deliver more accurate results.
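Of the techniques listed above, knowledge distillation is the most compact to illustrate. The sketch below shows the classic Hinton-style distillation objective in PyTorch: a student's hard-label cross-entropy blended with a temperature-softened KL term against a teacher's logits. This is a generic illustration of the technique, not the book's specific recipe; the function name, temperature, and mixing weight are illustrative choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft (teacher-matching) and hard (label) losses.

    T softens both distributions; the T*T factor keeps gradient
    magnitudes comparable across temperatures. alpha weights the
    soft term against plain cross-entropy on the true labels.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy batch: 4 examples, 10 classes.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student, teacher, labels)
```

In practice the teacher logits come from the large source model and the loss is backpropagated only through the smaller student.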
This book turns research from the latest AI papers into production-ready practices for domain-specific model optimization. As you work through this practical book, you’ll perform hands-on surgery on popular open-source models like Llama-3, Gemma, and Qwen to create cost-effective local small language models (SLMs). Along the way, you’ll learn how to combine behavioral analysis with structural modifications, identifying and removing parts that don’t contribute to your model’s goals, and even use “fair pruning” to reduce model bias at the neuron level.
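To give a flavor of what structural "surgery" means in code, here is a minimal structured-pruning sketch: scoring each output neuron of a linear layer by its L2 weight norm and rebuilding the layer with only the top-scoring neurons. This is a hypothetical, simplified illustration, not the book's method or the Optipfair library's API.

```python
import torch
import torch.nn as nn

def prune_linear_neurons(layer: nn.Linear, keep_ratio: float) -> nn.Linear:
    """Return a smaller Linear layer keeping only the output neurons
    whose weight rows have the largest L2 norms (magnitude criterion)."""
    norms = layer.weight.norm(dim=1)                  # one score per output neuron
    k = max(1, int(layer.out_features * keep_ratio))  # neurons that survive
    keep = norms.topk(k).indices.sort().values        # preserve original ordering
    pruned = nn.Linear(layer.in_features, k, bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[keep])
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[keep])
    return pruned

layer = nn.Linear(16, 8)
smaller = prune_linear_neurons(layer, keep_ratio=0.5)  # 8 neurons -> 4
```

Real model pruning must also rewire every downstream layer that consumed the removed neurons, which is where per-architecture pipelines come in.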
What's inside
• Universal techniques for customizing model architecture
• End-to-end pipelines for model rearchitecting
• Improving bias and explainability with model “cleanup”
• Replacing external LLMs with local SLMs
About the reader
For practicing AI, ML, and data engineers who know Python.
About the author
Pere Martra is an Applied AI Engineer, the creator of the Optipfair model efficiency library, an international AI speaker, and a maintainer of widely used LLM courses and popular open-source tools. He is the author of Large Language Models Projects.