March 29, 2000:
Sign up for papers.
April 5, 2000:
A Framework for Interprocedural Optimization in the Presence of Dynamic Class Loading,
Sreedhar, Burke, and Choi (PLDI 00)
April 12, 2000:
A Single Intermediate Language that Supports Multiple Implementations of Exceptions,
Ramsey and Peyton Jones (PLDI 00)
April 19, 2000:
Optimal Instruction Scheduling using Integer Programming,
Wilken, Liu, and Heffernan (PLDI 00)
April 26, 2000:
Efficient Algorithms for Bidirectional Debugging,
Bob Boothe (PLDI 00)
May 3, 2000:
Contaminated Garbage Collection,
Cannarozzi, Plezbert, and Cytron (PLDI 00)
May 10, 2000:
Special Guest Speaker: Manuvir Das, Microsoft Research
"Unification-Based Pointer Analysis with Directional Assignments"
Abstract: Our research group at Microsoft is working on developing error detection methods for commercial applications software. Many of these methods involve some form of static program analysis. Because the programs being analyzed make heavy use of pointer-valued variables, the success of our error detection methods depends critically on pointer analysis. Unfortunately, pointer analysis algorithms exhibit a natural trade-off between the quality of the information produced and the efficiency of the algorithm. Fast algorithms are too imprecise, while precise algorithms are too slow. In this talk, I shall describe our attempts (both successes and failures) to produce a pointer analysis that scales in both performance and precision. I shall describe three new algorithms for pointer analysis that successively improve precision while maintaining scalability in performance. The result is an analysis that scales easily to large programs (it processes a 1.4 MLOC program in less than two minutes of analysis time and 200 MB of memory), while providing precision similar to that of much more expensive analyses. Finally, I shall draw on my experiences to suggest a methodology for developing scalable static analysis algorithms.
Bio: Manuvir Das is a Researcher in the Software Productivity Tools group at Microsoft Research. He received a B.Tech. in Computer Science from the Indian Institute of Technology, Bombay, in 1991. He did his graduate work at the University of Wisconsin-Madison, where he received a Ph.D. in 1998. His interests lie in scalable program analysis, its application to error detection tools, and methods to improve the correctness of software.
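The precision/performance trade-off the abstract describes is easiest to see in the unification-based family the talk's title refers to. Below is a minimal union-find sketch of a Steensgaard-style unification analysis; all names are illustrative, and the directional-assignment refinement that is the talk's actual contribution is not modeled here.

```python
# Minimal Steensgaard-style unification pointer analysis (illustrative sketch;
# the directional-assignment refinement from the talk is NOT modeled).

class Var:
    def __init__(self, name):
        self.name = name
        self.parent = self    # union-find parent link
        self.pointee = None   # representative of everything this var may point to

def find(v):
    while v.parent is not v:
        v.parent = v.parent.parent  # path halving
        v = v.parent
    return v

def unify(a, b):
    ra, rb = find(a), find(b)
    if ra is rb:
        return ra
    rb.parent = ra
    # Recursively merge points-to targets: this keeps the analysis near-linear,
    # and is also where precision is lost (assignments become symmetric).
    if ra.pointee is None:
        ra.pointee = rb.pointee
    elif rb.pointee is not None:
        unify(ra.pointee, rb.pointee)
    return ra

def assign(lhs, rhs):           # models  lhs = rhs
    rl, rr = find(lhs), find(rhs)
    if rl.pointee is None:
        rl.pointee = rr.pointee
    elif rr.pointee is None:
        rr.pointee = rl.pointee
    else:
        unify(rl.pointee, rr.pointee)

def address_of(lhs, obj):       # models  lhs = &obj
    rl = find(lhs)
    if rl.pointee is None:
        rl.pointee = obj
    else:
        unify(rl.pointee, obj)

def may_alias(a, b):
    pa, pb = find(a).pointee, find(b).pointee
    return pa is not None and pb is not None and find(pa) is find(pb)
```

Because `assign` unifies points-to classes instead of propagating them one way, information flows "backwards" through assignments; treating assignments directionally, as the talk proposes, recovers some of that precision while keeping near-linear cost.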
May 17, 2000:
Off-Line Variable Substitution for Scaling Points-to Analysis,
Rountev and Chandra (PLDI 00)
May 24, 2000:
Erik Ruf, Microsoft Research
Synchronization Elimination via Thread Closure and Summary-Based Flow Analyses
Abstract: We present a new technique for removing unnecessary synchronization operations from statically compiled Java programs. Our approach improves upon current efforts based on escape analysis, as it can eliminate synchronization operations even on objects that escape their allocating threads. It makes use of a compact, equivalence-class-based representation that eliminates the need for fixed point operations during the analysis.
We describe and evaluate the performance of an implementation in the Marmot native Java compiler. For the benchmark programs examined, the optimization removes 100% of the dynamic synchronization operations in single-threaded programs, and 0-99% in multi-threaded programs, at a low cost in additional compilation time and code growth. We also discuss benefits and limitations of the general approach, in both the specific context of synchronization elimination and in the more general context of value flow based transformations.
Bio: Erik Ruf joined Microsoft Research in 1993 and has been a member of the Advanced Programming Languages group since 1997. His interests include sparse program representations, efficient program analyses, and automated specialization of both programs and data. His current work focuses on performance optimizations for the Marmot Java compiler. Erik holds a B.S. degree in Computer Engineering from Case Western Reserve University and M.S. and Ph.D. degrees in Electrical Engineering from Stanford University.
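For contrast with the abstract's point, here is a toy sketch of the escape-analysis baseline that the talk improves upon: a lock operation can be removed when its object provably never leaves the allocating method (and hence the allocating thread). The tiny IR and all names here are made up for illustration; Ruf's thread-closure technique goes further by also handling objects that escape.

```python
# Toy sketch of escape-based synchronization elimination (the baseline the
# talk improves on). The instruction forms below are invented for this sketch:
#   ('new', x)            x = new Object()
#   ('move', x, y)        x = y
#   ('store_global', y)   a static field now holds y      -> y escapes
#   ('call_arg', y)       y passed to an unanalyzed call  -> y escapes
#   ('lock', y)           synchronized(y) { ... }

def escaping_objects(instructions):
    # Flow-insensitive alias groups: names known to denote the same object.
    aliases = {}
    for ins in instructions:
        if ins[0] == 'new':
            aliases.setdefault(ins[1], {ins[1]})
        elif ins[0] == 'move':
            x, y = ins[1], ins[2]
            group = aliases.setdefault(y, {y})
            group.add(x)
            aliases[x] = group  # x and y share one alias group
    escaped = set()
    for ins in instructions:
        if ins[0] in ('store_global', 'call_arg'):
            escaped |= aliases.get(ins[1], {ins[1]})
    return escaped

def removable_locks(instructions):
    # Indices of lock operations whose object never escapes.
    escaped = escaping_objects(instructions)
    return [i for i, ins in enumerate(instructions)
            if ins[0] == 'lock' and ins[1] not in escaped]
```

A thread-local buffer's locks are removable, while locks on an object stored into a global are not; the paper's contribution is precisely that many of the latter can still be eliminated.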
May 31, 2000:
Super Todd will not be presenting three papers this week. Instead, we will have a special guest speaker:
Dr. Barbara Ryder, Rutgers University, Data-Flow Analysis of Program Fragments
Abstract: Traditional data-flow analysis is performed on whole programs; however, such whole-program analysis is not feasible for large or incomplete programs. We propose fragment data-flow analysis as an alternative approach which computes data-flow information for a specific program fragment. The analysis is parameterized by the additional information available about the rest of the program.
We have identified the theoretical issues involved in the design of a fragment analysis and have explored two specific uses of our model. Initial empirical evaluation of the cost and precision of this analysis indicates possible applicability to real-world software. (This work was presented, in part, at FSE'99 in a paper co-authored with graduate student Atanas Rountev and with William Landi.)
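One way to read "parameterized by the additional information available about the rest of the program": the same fragment analysis can run either with worst-case assumptions at the fragment boundary or with facts supplied about the surrounding program. A toy constant-propagation sketch (all names hypothetical, not from the FSE'99 paper):

```python
# Toy fragment analysis: constant propagation over a straight-line fragment,
# parameterized by assumptions about variables defined outside the fragment.
# UNKNOWN is the worst-case boundary assumption.

UNKNOWN = object()

def analyze_fragment(stmts, boundary):
    """stmts: list of either (dst, const) literal assignments or
    (dst, src1, op, src2) with op in {'+', '*'}; operands may be ints
    or variable names. boundary: dict mapping each free variable of the
    fragment to a known constant, or to UNKNOWN."""
    env = dict(boundary)

    def val(x):
        if isinstance(x, int):
            return x
        return env.get(x, UNKNOWN)

    for s in stmts:
        if len(s) == 2:                    # dst = const
            env[s[0]] = s[1]
        else:                              # dst = src1 op src2
            dst, a, op, b = s
            va, vb = val(a), val(b)
            if va is UNKNOWN or vb is UNKNOWN:
                env[dst] = UNKNOWN         # worst case propagates
            else:
                env[dst] = va + vb if op == '+' else va * vb
    return env
```

Running the same fragment with boundary `{'n': 4}` versus `{'n': UNKNOWN}` shows how richer information about the rest of the program directly sharpens the result inside the fragment.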
Linear Scan Register Allocation, Poletto and Sarkar (ACM Trans. Prog. Lang. Sys., Sept. 1999)
Procedure Placement Using Temporal-Ordering Information, Gloy and Smith (ACM Trans. Prog. Lang. Sys., Sept. 1999)
Static Correlated Branch Prediction, Young and Smith (ACM Trans. Prog. Lang. Sys., Sept. 1999)
Three for the price of one: PA-RISC to IA-64: Transparent Execution, No Recompilation, Zheng and Thompson; Dynamic and Transparent Binary Translation, Gschwind, et al.; and UQBT: Adaptable Binary Translation at Low Cost, Cifuentes and Van Emmerik (Computer, March 2000)
Dynamo: A Transparent Dynamic Optimization System, Bala, Duesterwald, and Banerjia (PLDI 00)
Dynamic Optimizations for Java, Cierniak, Lueh, and Stichnoth (PLDI 00)
Thanks for the Memory: Split-Stream Dictionary Program Compression, Lucco (PLDI 00)
Off-Line Variable Substitution for Scaling Points-to Analysis, Rountev and Chandra (PLDI 00)
Modular Interprocedural Pointer Analysis Using Access Paths: Design, Implementation, and Evaluation, Cheng and Hwu (PLDI 00)
Bitwidth Analysis with Application to Silicon Compilation, Stephenson, Babb, and Amarasinghe (PLDI 00)
Improved Spill Code Generation for Software Pipelined Loops, Zalamea, et al. (PLDI 00)
Exploiting Superword Level Parallelism with Multimedia Instruction Sets, Larsen and Amarasinghe (PLDI 00)
Compiler Analysis of Irregular Memory Accesses, Lin and Padua (PLDI 00)
Transforming Loops to Recursion for Multi-Level Memory Hierarchies, Yi, Adve, and Kennedy (PLDI 00)
Symbolic Analysis of Array Index Bounds and Memory Access Regions, Rugina and Rinard (PLDI 00)
Type-Based Race Detection for Java, Flanagan and Freund (PLDI 00)
On Loops, Dominators, and Dominance Frontiers, Ramalingam (PLDI 00)
Functional Reactive Programming from First Principles, Wan and Hudak (PLDI 00)
Scalable Context-Sensitive Flow Analysis Using Instantiation Constraints, Fahndrich, Rehof, and Das (PLDI 00)
Efficient Algorithms for Bidirectional Debugging, Boothe (PLDI 00)
ABCD: Eliminating Array Bounds Checks on Demand, Bodik, Gupta, and Sarkar (PLDI 00)
Field Analysis: Getting Useful and Low-Cost Interprocedural Information, Ghemawat, Randall, and Scales (PLDI 00)
An Automatic Object Inlining Optimization and Its Evaluation, Dolby and Chien (PLDI 00)
To subscribe to the CSE 590k mailing list, send email to "majordomo@cs" with the line "subscribe cse590k" in the body of the message; leave the "Subject" line blank. You should shortly receive a "welcome" message in reply.
02 May 2000