Towards a Toolchain for Pipeline-Parallel Programming on CMPs
John Giacomoni
with Tipp Moseley, Graham Price, Brian Bushnell, Manish Vachharajani, and Dirk Grunwald
University of Colorado at Boulder, Core Research Lab
Slide 2: Problem
Uniprocessor (UP) performance is at its end of life. Chip-multiprocessor systems:
– Individual cores less powerful than a UP
– Asymmetric and heterogeneous
– 10s-100s-1000s of cores
How do we program them?
(Examples pictured: Intel 2x2-core, MIT RAW 16-core, 100-core, 400-core.)
Slide 3: Programmers…
Programmers are:
– Bad at explicitly parallel programming
– Better at sequential programming
Solutions? Hide the parallelism:
– Compilers
– Sequential libraries? Math, iteration, searching, and ??? routines
Slide 4: Using Multi-Core
Task parallelism – desktop applications
Data parallelism – web serving; Split/Join, MapReduce, etc.
Pipeline parallelism – video decoding, network processing
Slide 5: Joining the Minority Chorus
We believe that the best strategy for developing parallel programs may be to evolve them from sequential implementations. Therefore we need a toolchain that assists programmers in converting sequential programs into parallel ones. This toolchain will need to support all four conversion stages: identification, implementation, verification, and runtime system support.
Slide 6: The Toolchain
Identification – LoopProf and LoopSampler; ParaMeter
Implementation – concurrent threaded pipelining
Verification
Runtime system support
Slide 7: LoopProf / LoopSampler
Thread-level parallelism benefits from coarse-grain information that gprof et al. do not provide. These tools visualize the relationship between functions and hot loops, require no recompilation, and LoopSampler is effectively overhead-free.
Slide 8: Partial Loop Call Graph (figure: boxes are functions, ovals are loops)
Slide 9: ParaMeter
Dynamic Instruction Number (DIN) vs. Ready Time graphs visualize dependence chains, with fast random access to trace information and a compact representation.
Trace slicing – moving forward or backward in a trace along a flow (control, dependences, etc.); requires information from disparate trace locations.
Variable liveness analysis.
Slide 10: DIN vs. Ready Time (figure)
Slide 11: DIN vs. Ready Time – DIN plot for 254.gap (IA64, gcc, inf), showing multiple dependence chains.
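To make the plot concrete: each dynamic instruction is plotted against the earliest time it could execute given its data dependences, so independent dependence chains appear as separate bands. A minimal sketch of that ready-time computation, assuming a unit-latency, unlimited-resource dataflow model (the trace format and names here are hypothetical, not ParaMeter's actual input):

```python
def ready_times(trace):
    """trace: list of (din, deps) pairs, where deps are the DINs whose
    results this instruction reads. Returns din -> earliest issue time,
    assuming unit latency and unlimited execution resources."""
    ready = {}
    for din, deps in trace:
        # An instruction becomes ready one step after its latest input.
        ready[din] = 1 + max((ready[d] for d in deps), default=0)
    return ready

# Two independent chains (0->1 and 2->3) merging at DIN 4 would show up
# as two bands in a DIN vs. ready-time plot.
trace = [(0, []), (1, [0]), (2, []), (3, [2]), (4, [1, 3])]
```

Plotting `din` on one axis and `ready[din]` on the other reproduces the kind of chart shown on this slide.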
Slide 12: Handling the Information Glut
Challenges: trace size, trace complexity, and the need for fast random access.
Solution: Binary Decision Diagrams – compression ratios of 16-60x; 10^9 instructions fit in 1 GB.
Slide 13: Implementation
Well researched: task-parallel and data-parallel programming.
More work to be done: pipeline-parallel programming.
Concurrent threaded pipelining – FastForward, DSWP.
Stream languages – StreamIt.
Slide 14: Concurrent Threaded Pipelining
Pipeline-parallel organization: each stage is bound to a processor, with sequential data flow between stages. Data hazards are a problem; FastForward is a software solution.
Slide 15: Threaded Pipelining (figure contrasting the sequential and concurrent organizations)
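The concurrent organization can be sketched as one thread per stage connected by queues, so every stage runs simultaneously on a different item. A minimal sketch assuming ordinary blocking queues (the stage functions and names are illustrative; a real deployment would pin each stage's thread to its own core and use a fast queue such as FastForward's):

```python
import queue
import threading

SENTINEL = object()  # marks end of the data stream

def stage(fn, inq, outq):
    """One pipeline stage: apply fn to every item flowing through."""
    while True:
        item = inq.get()
        if item is SENTINEL:
            outq.put(SENTINEL)  # propagate shutdown downstream
            return
        outq.put(fn(item))

def run_pipeline(items, fns):
    """Chain one thread per stage: concurrent stages, sequential data flow."""
    qs = [queue.Queue() for _ in range(len(fns) + 1)]
    threads = [threading.Thread(target=stage, args=(f, qs[i], qs[i + 1]))
               for i, f in enumerate(fns)]
    for t in threads:
        t.start()
    for x in items:
        qs[0].put(x)
    qs[0].put(SENTINEL)
    out = []
    while (v := qs[-1].get()) is not SENTINEL:
        out.append(v)
    for t in threads:
        t.join()
    return out
```

Because each stage consumes and produces items strictly in order, output order matches input order; the correctness and performance burden shifts onto the inter-stage queues, which is exactly the problem FastForward targets.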
Slide 16: Related Work
Software architectures – Click, SEDA, Unix pipes, SysV queues, etc. Locking queues take >= 1200 cycles (600 ns), with additional overhead for cross-domain communication.
Compiler-extracted pipelines – Decoupled Software Pipelining (DSWP), built on a modified IMPACT compiler; communication operations take <= 100 cycles (50 ns) but assume hardware queues.
Decoupled Access/Execute architectures.
Slide 17: FastForward
A portable, software-only framework: ~70-80 cycles (35-40 ns) per queue operation, both core-to-core and die-to-die.
– Architecturally tuned CLF (concurrent lock-free) queues; works with all consistency models
– Temporal slipping and prefetching hide die-to-die communication latency
– Cross-domain communication: kernel/process/thread
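The key idea behind a FastForward-style single-producer/single-consumer queue is to detect full/empty from the slot contents themselves (an empty slot holds NULL), so the producer only ever writes the head index and the consumer only ever writes the tail index, keeping the hot indices out of shared cache lines. A sketch of that structure, with None standing in for NULL (the real implementation is C, cache-line padded, and adds temporal slipping; this Python version only shows the algorithmic shape):

```python
class FFQueue:
    """Single-producer/single-consumer queue in the FastForward style:
    full/empty is inferred from slot contents, so head is private to the
    producer and tail is private to the consumer (no shared indices)."""

    def __init__(self, size=8):
        self.buf = [None] * size  # None marks an empty slot
        self.head = 0             # written only by the producer
        self.tail = 0             # written only by the consumer

    def enqueue(self, value):
        assert value is not None, "None is reserved to mean 'empty slot'"
        if self.buf[self.head] is not None:
            return False          # full: consumer hasn't drained this slot yet
        self.buf[self.head] = value
        self.head = (self.head + 1) % len(self.buf)
        return True

    def dequeue(self):
        value = self.buf[self.tail]
        if value is None:
            return None           # empty
        self.buf[self.tail] = None
        self.tail = (self.tail + 1) % len(self.buf)
        return value
```

Because neither side ever writes the other side's index, producer and consumer touch disjoint data except for the buffer slots themselves, which is what brings the per-operation cost down to the tens of cycles quoted above.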
Slide 18: Network Scenario – FShm
How do we protect?
GigE network properties: 1,488,095 frames/sec at minimum frame size, i.e. one frame every 672 ns, with dependencies between frames.
Slide 19: Verification
Characterize run-time behavior with static analysis – test generation, code verification, and post-mortem root-fault analysis.
Identify the frontier of states leading to an observed fault, and use formal methods to find the fault lines.
Slide 20: Runtime System Support
Hardware virtualization – asymmetric and heterogeneous cores; cores may not share main memory (e.g. a GPU).
Pipelined OS services.
Pipelines may cross process domains (FShm); each domain should keep its private memory for protection, which requires a label for each pipeline.
Co-/gang-scheduling of pipelines.
Slide 21: Questions?