Books on basic logic design swarm the shelves. So do books on Verilog and VHDL. Why, then, is logic design always learned on the job? I mean real, industrial logic design, with all the gritty bits. Kilts asked the question too, and wrote this book in response.
The first three chapters take on the three primary goals of logic design (usually conflicting goals): high speed, low power, and minimal area. Speed, of course, includes both throughput and latency, themselves goals that often conflict with each other. Examples go well beyond the basics, on up to pipelined AES, a pipelined RISC processor, IEEE floating-point units, and commercial standards for digitized audio: case studies with plenty of room to make the design points that Kilts means to get across.
The book's value comes from its willingness to get into technology specifics, way past the bland idealizations of pure logic design. For example, clock gating doesn't just make a design hard to follow; it often blocks the use of the chip's special-purpose clock networks. Those have been engineered beyond belief for low skew under massive loading. You can use other wires as clocks, but you expose yourself to lots of ugly problems when you do. Special logic inputs matter, too, especially dedicated set and reset lines on flops. (I've seen some remarkable uses of the dedicated carry lines between closely coupled LUTs, too, but he doesn't touch on those.)
Of course, there are weak spots. Kilts touches on simulation and testbenches, but only touches. Testbenches and verification have their own texts, though, and exotica like mixed-level simulation depend intimately on the specific tools at hand. A few pages, but only a few, present maddening typos, like the capital-X-sub-i on p. 125 where small-x-sub-i would have made sense (non-technical readers: if you made it this far, just trust me, it matters), or the resistor symbol in figure 15.12 where inductance is discussed. Section 8.2, on implementing math.h kinds of functions, should simply have been dropped, or maybe replaced with a discussion of range reduction. The intended reader took Calc I and remembers the Taylor expansion. Familiarity is its only advantage, though: it doesn't minimize mean-square or maximum error, doesn't deal with endpoint continuity or differentiability in piecewise approximations (which aren't mentioned either), and has plenty more problems besides. A list of grown-up techniques and references would have been far more helpful.

Also, this text simply does not address one of the most pressing and painful issues in real-world logic design: compilation time. Although Kilts mentions floorplanning, he says nothing about how it supports incremental compilation, and he notes the tradeoff of result quality vs. turnaround in only one offhand phrase, as near as I could tell. Incremental compilation might be a non-Xilinx advantage, though, so it's forgivable within Kilts's stated limitations.
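To make the range-reduction point concrete, here's a quick sketch of my own (not from the book): the same degree-5 Taylor polynomial for sin, evaluated once over a half period and once over the reduced interval [0, π/4] that a range-reducing implementation would actually feed it.

```python
import math

def sin_taylor5(x):
    """Degree-5 Taylor polynomial of sin(x) about 0."""
    return x - x**3 / 6 + x**5 / 120

def max_abs_err(approx, lo, hi, samples=1000):
    """Maximum |approx(x) - sin(x)| over [lo, hi], sampled uniformly."""
    return max(
        abs(approx(x) - math.sin(x))
        for i in range(samples + 1)
        for x in [lo + (hi - lo) * i / samples]
    )

# Naive use over a half period: the truncated series falls apart far from 0.
wide_err = max_abs_err(sin_taylor5, 0.0, math.pi)

# After range reduction, the polynomial only ever sees [0, pi/4],
# where the same truncation stays tight.
narrow_err = max_abs_err(sin_taylor5, 0.0, math.pi / 4)
```

On a quick check, `wide_err` comes out around 0.5 while `narrow_err` is on the order of 4e-5: four orders of magnitude for free, before you even reach for a polynomial that actually minimizes maximum error.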
Kilts more than makes up for those weak spots in other areas, including his discussion of parameterization. Because the book is Verilog-based, it doesn't mention VHDL's architecture configurability. Even in Verilog, though, parameterization appears pervasively in industrial design, especially when reuse matters, and rarely if ever shows its face in basic texts on logic design.
This book assumes that you already know Verilog well enough to build a simple pipelined processor, or at least to follow along closely. It also assumes that you've spent some time with industrial synthesis tools, and can translate the tool-specific advice in this book into the different but equivalent specifics of the tools you're using. In academic terms, I'd call it a backup text for a third course in logic design, or for a course in something else that uses FPGAs heavily. It's not just for classrooms, though. Beginning professionals stand to benefit from this advice, and even battle-scarred logic designers who still remember 5V power rails might pick up a hint or two.