AJ Shankar

Berkeley, California, United States

Experience & Education

  • Everlaw


Publications

  • Jolt: Lightweight Dynamic Analysis and Removal of Object Churn

    OOPSLA

    It has been observed that component-based applications exhibit object churn, the excessive creation of short-lived objects, often caused by trading performance for modularity. Because churned objects are short-lived, they appear to be good candidates for stack allocation. Unfortunately, most churned objects escape their allocating function, making escape analysis ineffective. We reduce object churn with three contributions. First, we formalize two measures of churn, capture and control. Second, we develop lightweight dynamic analyses for measuring both capture and control. Third, we develop an algorithm that uses capture and control to inline portions of the call graph to make churned objects non-escaping, enabling churn optimization via escape analysis. Jolt is a lightweight dynamic churn optimizer that uses our algorithms. We embedded Jolt in the JIT compiler of the IBM J9 commercial JVM, and evaluated Jolt on large application frameworks, including Eclipse and JBoss. We found that Jolt eliminates over 4 times as many allocations as a state-of-the-art escape analysis alone.

  • Ditto: Automatic Incrementalization of Data Structure Invariant Checks (in Java)

    PLDI

    We present Ditto, an automatic incrementalizer for dynamic, side-effect-free data structure invariant checks. Incrementalization speeds up the execution of a check by reusing its previous executions, checking the invariant anew only on the changed parts of the data structure. Ditto exploits properties specific to the domain of invariant checks to automate and simplify the process without restricting what mutations the program can perform. Our incrementalizer works for modern imperative languages such as Java and C#. It can incrementalize, for example, verification of red-black tree properties and the consistency of the hash code in a hash table bucket. Our source-to-source implementation for Java is automatic, portable, and efficient. Ditto provides speedups on data structures with as few as 100 elements; on larger data structures, its speedups are characteristic of non-automatic incrementalizers: roughly 5-fold at 5,000 elements, and growing linearly with data structure size.

  • Katana: A Specialized Framework for Reliable Web Servers

    E-commerce server reliability is critical, as downtimes cost an average of $10,000 per minute. Commercial web server development today is done with fairly generic programming languages, like Java, Perl, and C#. The generality of these languages, while permitting a wide range of target applications, makes it difficult to guarantee reliability: dynamic type errors, race conditions, and resource leaks contribute to instability. Though the languages may detect such errors at runtime, the resulting downtimes in production code are costly. We present Katana, a specialized framework for creating reliable web servers. Generality is exchanged for specific capabilities tailored to server operation. In particular, servers written with Katana benefit from these properties: truly statically type-checked code; specialized language features for common server tasks, such as data transformation and formatted output; native, statically-checked database interaction; automatic memory management and concurrency control; and built-in state-sharing mechanisms. By eliminating much of the complexity inherent in general-purpose frameworks and unnecessary for web server operation, while retaining a suitable range of expressiveness, Katana servers are not subject to several entire classes of bugs that plague existing web servers, and are thus more reliable. Preliminary results indicate that Katana is comparable to existing server frameworks in terms of ease of use and performance, suggesting that it is a viable architecture for real-world web servers.

    Other authors
    • William McCloskey
  • Approaches to Bin Packing with Clique-Graph Conflicts

    The problem of bin packing with arbitrary conflicts was introduced by Jansen. In this paper, we consider a restricted problem, bin packing with clique-graph conflicts. We prove bounds for several approximation algorithms, and show that certain on- and off-line algorithms are equivalent. Finally, we present an optimal polynomial-time algorithm for the case of constant item sizes, and analyze its performance in the more general case of bounded item sizes.

    Other authors
    • William McCloskey
  • New Temperatures in Domineering

    INTEGERS

    Domineering is a two-player combinatorial game in which only 30 temperatures with denominator less than 512 were previously known to occur. We found 259 new Domineering temperatures.

  • Runtime Specialization With Optimistic Heap Analysis

    OOPSLA

    We describe a highly practical program specializer for Java programs. The specializer is powerful, because it specializes optimistically, using (potentially transient) constants in the heap; it is precise, because it specializes using data structures that are only partially invariant; it is deployable, because it is hidden in a JIT compiler and does not require any user annotations or offline preprocessing; it is simple, because it uses existing JIT compiler ingredients; and it is fast, because it specializes programs in under 1s. These properties are the result of (1) a new algorithm for selecting specializable code fragments, based on a notion of influence; (2) a precise store profile for identifying constant heap locations; and (3) an efficient invalidation mechanism for monitoring optimistic assumptions about heap constants. Our implementation of the specializer in the Jikes RVM has low overhead, selects specialization points that would be chosen manually, and produces speedups ranging from a factor of 1.2 to 6.4, comparable with annotation-guided specializers.

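The abstracts above describe several concrete techniques. The short Java sketches below are hedged illustrations with invented names, not code from the papers. First, the object-churn pattern that Jolt targets: a short-lived object escapes the method that allocates it, so per-method escape analysis cannot remove the allocation, but inlining the callee into a caller that captures the object makes it non-escaping.

```java
// Hypothetical illustration of object churn (invented names, not code from the paper).
final class Point {
    final double x, y;
    Point(double x, double y) { this.x = x; this.y = y; }
}

final class Churn {
    // The callee allocates a short-lived Point and returns it, so the object
    // escapes its allocating method and cannot be stack-allocated there.
    static Point scale(Point p, double k) {
        return new Point(p.x * k, p.y * k);   // churned object: created, used once, dropped
    }

    // The caller never lets the Point escape. Once scale() is inlined here,
    // escape analysis sees a non-escaping allocation and can eliminate it.
    static double norm(Point p) {
        Point q = scale(p, 2.0);
        return Math.sqrt(q.x * q.x + q.y * q.y);
    }
}
```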
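Second, a hand-rolled sketch of what incrementalizing a data structure invariant check buys, using an ordering check over a binary search tree. Ditto derives the incremental version automatically by a source-to-source transformation; the manual caching below (again with invented names) only conveys the idea of reusing results for unchanged subtrees.

```java
import java.util.HashMap;
import java.util.Map;

final class Node {
    int key;
    Node left, right, parent;   // parent links let invalidation walk up the tree
    Node(int key) { this.key = key; }
}

// Per-subtree summary: is the subtree ordered, and what are its min/max keys?
// The summary depends only on the subtree, so it stays valid until that subtree changes.
final class Summary {
    final boolean ordered;
    final int min, max;
    Summary(boolean ordered, int min, int max) { this.ordered = ordered; this.min = min; this.max = max; }
}

final class IncrementalOrderCheck {
    private final Map<Node, Summary> memo = new HashMap<>();

    // Call after mutating n: its summary and every ancestor's summary are stale.
    void invalidate(Node n) {
        for (Node cur = n; cur != null; cur = cur.parent) memo.remove(cur);
    }

    // Side-effect-free invariant check that reuses summaries of unchanged subtrees.
    Summary check(Node n) {
        if (n == null) return new Summary(true, Integer.MAX_VALUE, Integer.MIN_VALUE);
        Summary cached = memo.get(n);
        if (cached != null) return cached;          // reuse a previous execution
        Summary l = check(n.left), r = check(n.right);
        boolean ok = l.ordered && r.ordered
                && (n.left == null || l.max < n.key)
                && (n.right == null || r.min > n.key);
        Summary s = new Summary(ok,
                Math.min(n.key, Math.min(l.min, r.min)),
                Math.max(n.key, Math.max(l.max, r.max)));
        memo.put(n, s);
        return s;
    }
}
```

After a single key change, only the path from the changed node to the root is recomputed; the summaries for the rest of the tree are reused.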
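Third, the bin-packing variant above restricts which items may share a bin: items are partitioned into cliques, and two items from the same clique conflict. The first-fit baseline below, with that extra feasibility check, illustrates the problem setting only and is not necessarily one of the algorithms analyzed in the paper.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// An item has a size and a conflict class; items in the same clique may not share a bin.
final class Item {
    final double size;
    final int clique;
    Item(double size, int clique) { this.size = size; this.clique = clique; }
}

final class Bin {
    double load = 0;
    final Set<Integer> cliques = new HashSet<>();
    final List<Item> items = new ArrayList<>();
}

final class FirstFitWithConflicts {
    static List<Bin> pack(List<Item> items, double capacity) {
        List<Bin> bins = new ArrayList<>();
        for (Item it : items) {
            Bin target = null;
            for (Bin b : bins) {
                // Feasible only if the item fits and no clique-mate is already in the bin.
                if (b.load + it.size <= capacity && !b.cliques.contains(it.clique)) {
                    target = b;
                    break;
                }
            }
            if (target == null) { target = new Bin(); bins.add(target); }
            target.load += it.size;
            target.cliques.add(it.clique);
            target.items.add(it);
        }
        return bins;
    }
}
```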
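Finally, a hand-written analogy for optimistic specialization on heap constants. A JIT-embedded specializer like the one described above compiles a specialized body and relies on an invalidation mechanism that monitors writes to the assumed-constant heap location; the explicit guard below merely stands in for that mechanism, and all names are invented.

```java
// Specializing a general routine on a heap value that is constant in practice
// but not provably constant (hypothetical example, not the paper's implementation).
final class Formatter {
    // Heap location that is usually constant but could be reassigned at any time.
    String separator = ", ";

    // General version: re-reads the separator field on every iteration.
    String joinGeneral(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(separator);
            sb.append(parts[i]);
        }
        return sb.toString();
    }

    // Version specialized under the optimistic assumption separator == ", ".
    // The guard plays the role of invalidation: if the assumption no longer
    // holds, fall back to (or, in a JIT, recompile) the general code.
    String joinSpecialized(String[] parts) {
        if (!", ".equals(separator)) return joinGeneral(parts);   // assumption violated
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append(", ");                           // constant folded in
            sb.append(parts[i]);
        }
        return sb.toString();
    }
}
```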

Honors & Awards

  • First Place, Berkeley Business Plan Competition (101 entrants)

  • Winner, Berkeley Venture Lab Prize (58 entrants)

  • NSF Graduate Research Fellowship Recipient

  • National Defense Science and Engineering Graduate Fellowship Recipient

  • Outstanding Graduate Student Instructor Award

    One of the top 3 TAs in the department

  • Teaching Effectiveness Award

    One of 20 TAs selected campus-wide out of 3,000

  • Harvard University Certificate of Distinction in Teaching

  • John D. Barnwell Award

    For achievement in academics, athletics, and music

  • National Merit Scholar

  • Robert C. Byrd Scholar
