A principled, high-level view of computer performance and how to exploit it. Ideal for software architects and data scientists.
The dramatic increase in computer performance has been extraordinary, but not for all computations: it has key limits and structure. Software architects, developers, and even data scientists need to understand how to exploit the fundamental structure of computer performance to harness it for future applications. Ideal for upper-level undergraduates, Computer Architecture for Scientists covers four key pillars of computer performance and imparts a high-level basis for reasoning with and understanding these concepts:

Small is fast – how size scaling drives performance;
Implicit parallelism – how a sequential program can be executed faster with parallelism;
Dynamic locality – skirting physical limits by arranging data in a smaller space;
Parallelism – increasing performance with teams of workers.

These principles and models provide approachable high-level insights and quantitative modelling without distracting low-level detail. Finally, the text covers the GPU and machine-learning accelerators that have become increasingly important for mainstream applications.
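As an illustrative sketch (not drawn from the book), the dynamic-locality pillar shows up in just a few lines of C: summing the same matrix in row-major versus column-major order touches identical data, yet the cache-friendly traversal typically runs several times faster. The matrix size N and the use of clock() for timing are arbitrary choices for the demonstration.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 4096  /* illustrative size: the 128 MB matrix far exceeds cache */

/* Row-major walk: consecutive addresses, so each fetched cache line
 * is fully used before eviction. */
static double sum_row_major(const double *a) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i * N + j];
    return s;
}

/* Column-major walk: a stride of N doubles between accesses, so most
 * of each fetched cache line is wasted. */
static double sum_col_major(const double *a) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i * N + j];
    return s;
}

int main(void) {
    double *a = malloc((size_t)N * N * sizeof *a);
    if (!a) return 1;
    for (size_t k = 0; k < (size_t)N * N; k++) a[k] = 1.0;

    clock_t t0 = clock();
    double s1 = sum_row_major(a);
    clock_t t1 = clock();
    double s2 = sum_col_major(a);
    clock_t t2 = clock();

    printf("row-major: %.3f s (sum %.0f)\n", (double)(t1 - t0) / CLOCKS_PER_SEC, s1);
    printf("col-major: %.3f s (sum %.0f)\n", (double)(t2 - t1) / CLOCKS_PER_SEC, s2);
    free(a);
    return 0;
}

Both loops perform the same arithmetic on the same data; only the order of memory accesses differs, which is precisely the "arranging data in a smaller space" effect the blurb describes.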