
Publications, presentations, and other written artifacts

[1]
David Ediger, Jason Riedy, David A. Bader, and Henning Meyerhenke. Computational graph analytics for massive streaming data. In Hamid Sarbazi-Azad and Albert Zomaya, editors, Large Scale Network-Centric Computing Systems, Parallel and Distributed Computing, chapter 25. Wiley, July 2013. (to appear). [ bib ]
[2]
David Ediger, Robert McColl, Jason Riedy, and David A. Bader. STINGER: High performance data structure for streaming graphs. In The IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, September 2012. Best paper award. [ bib ]
[3]
David A. Bader, David Ediger, and Jason Riedy. Streaming graph analytics for massive graphs. SIAM Annual Meeting, July 2012. [ bib | http ]
Emerging real-world graph problems include detecting community structure in large social networks, improving the resilience of the electric power grid, and detecting and preventing disease in human populations. The volume and richness of the data, combined with its rate of change, render monitoring properties at scale by static recomputation infeasible. We approach these problems with massive, fine-grained parallelism across different shared memory architectures, both to compute solutions and to explore the sensitivity of these solutions to natural bias and omissions within the data.

[4]
E. Jason Riedy, David A. Bader, and Henning Meyerhenke. Scalable multi-threaded community detection in social networks. In 6th Workshop on Multithreaded Architectures and Applications (MTAAP), May 2012. (9/15 papers accepted, 60% acceptance). [ bib | http ]
The volume of existing graph-structured data requires improved parallel tools and algorithms. Finding communities, smaller subgraphs more densely connected internally than to the rest of the graph, plays a role both in developing new parallel algorithms and in opening smaller portions of the data to current analysis tools. We improve performance of our parallel community detection algorithm by 20% on the massively multithreaded Cray XMT, evaluate its performance on the next-generation Cray XMT2, and extend its reach to Intel-based platforms with OpenMP. To our knowledge, this is not only the first massively parallel community detection algorithm but also the only such algorithm that achieves excellent performance and good parallel scalability across all these platforms. Our implementation analyzes a moderately sized graph with 105 million vertices and 3.3 billion edges in around 500 seconds on a four-processor, 80-logical-core Intel-based system and 1100 seconds on a 64-processor Cray XMT2.

[5]
Jason Riedy, Henning Meyerhenke, David A. Bader, David Ediger, and Timothy G. Mattson. Analysis of streaming social networks and graphs on multicore architectures. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, March 2012. [ bib | http ]
Analyzing static snapshots of massive, graph-structured data cannot keep pace with the growth of social networks, financial transactions, and other valuable data sources. We introduce a framework, STING (Spatio-Temporal Interaction Networks and Graphs), and evaluate its performance on multicore, multisocket Intel(R)-based platforms. STING achieves rates of around 100000 edge updates per second on large, dynamic graphs with a single, general data structure. We achieve speed-ups of up to 1000× over parallel static computation, improve the monitoring of a dynamic graph's connected components, and show that an exact algorithm for maintaining local clustering coefficients performs better on Intel-based platforms than our earlier approximate algorithm.

[6]
E. Jason Riedy, Henning Meyerhenke, David Ediger, and David A. Bader. Parallel community detection for massive graphs. In 10th DIMACS Implementation Challenge - Graph Partitioning and Graph Clustering. (workshop paper), Atlanta, Georgia, February 2012. Won first place in the Mix Challenge and Mix Pareto Challenge. [ bib | .pdf ]
[7]
Henning Meyerhenke, E. Jason Riedy, and David A. Bader. Parallel community detection in streaming graphs. SIAM Parallel Processing for Scientific Computing, February 2012. [ bib ]
[8]
David Ediger, E. Jason Riedy, Henning Meyerhenke, and David A. Bader. Analyzing massive networks with GraphCT. SIAM Parallel Processing for Scientific Computing, February 2012. [ bib ]
[9]
E. Jason Riedy, David Ediger, Henning Meyerhenke, and David A. Bader. STING: Software for analysis of spatio-temporal interaction networks and graphs. SIAM Parallel Processing for Scientific Computing, February 2012. [ bib ]
[10]
E. Jason Riedy and Henning Meyerhenke. Scalable algorithms for analysis of massive, streaming graphs. SIAM Parallel Processing for Scientific Computing, February 2012. [ bib | http ]
[11]
David Ediger, Jason Riedy, Rob McColl, and David A. Bader. Parallel programming for graph analysis. In 17th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), New Orleans, LA, February 2012. [ bib | .html ]
An increasingly fast-paced, digital world has produced an ever-growing volume of petabyte-sized datasets. At the same time, terabytes of new, unstructured data arrive daily. As the desire to ask more detailed questions about these massive streams has grown, parallel software and hardware have only recently begun to enable complex analytics in this non-scientific space. In this tutorial, we will discuss the open problems we face in analyzing this "data deluge". We will present algorithms and data structures capable of analyzing spatio-temporal data at massive scale on parallel systems. We will try to understand the difficulties and bottlenecks in parallel graph algorithm design on current systems and will show how multithreaded and hybrid systems can overcome these challenges. We will demonstrate how parallel graph algorithms can be implemented on a variety of architectures using different programming models. The goal of this tutorial is to provide a comprehensive introduction to the field of parallel graph analysis to an audience with a computing background interested in participating in research and/or commercial applications of this field. Moreover, we will cover leading-edge technical and algorithmic developments in the field and discuss open problems and potential solutions.

[12]
David Ediger, Karl Jiang, Jason Riedy, and David A. Bader. GraphCT: Multithreaded algorithms for massive graph analysis. IEEE Transactions on Parallel and Distributed Systems, 2012. (to appear). [ bib ]
[13]
E. Jason Riedy, Henning Meyerhenke, David Ediger, and David A. Bader. Parallel community detection for massive graphs. In 9th International Conference on Parallel Processing and Applied Mathematics (PPAM11). Springer, September 2011. (134/243 papers accepted, 55% acceptance rate). [ bib ]
Tackling the current volume of graph-structured data requires parallel tools. We extend our work on analyzing such massive graph data with the first massively parallel algorithm for community detection that scales to current data sizes, handling graphs of over 122 million vertices and nearly 2 billion edges in under 7300 seconds on a massively multithreaded Cray XMT. Our algorithm achieves moderate parallel scalability without sacrificing sequential operational complexity. Community detection partitions a graph into subgraphs more densely connected within the subgraph than to the rest of the graph. We take an agglomerative approach similar to Clauset, Newman, and Moore's sequential algorithm, merging pairs of connected intermediate subgraphs to optimize different graph properties. Working in parallel opens new approaches to high performance. On smaller data sets, we find the output's modularity compares well with that of the standard sequential algorithms.
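
For reference, the scoring at the heart of the agglomerative step is compact enough to sketch. The fragment below is a minimal sequential illustration in C using Clauset, Newman, and Moore's notation (e_ij, the fraction of edges between communities i and j; a_i, the fraction of edge endpoints in i); the function and array names are ours, and the parallel algorithm in the paper merges many well-scoring pairs per pass rather than only the single best one.

    #include <stddef.h>

    /* Modularity gain from merging communities i and j: e_ij is the
       fraction of all edges running between i and j, and a_i (a_j) is
       the fraction of edge endpoints falling in i (j). */
    static double delta_modularity(double e_ij, double a_i, double a_j)
    {
        return 2.0 * (e_ij - a_i * a_j);
    }

    /* One sequential greedy pass over ncand candidate merges; returns
       the index of the best-scoring pair, or -1 if no merge improves
       the modularity Q. */
    static ptrdiff_t best_merge(size_t ncand, const double *e_ij,
                                const double *a_i, const double *a_j)
    {
        ptrdiff_t best = -1;
        double best_dq = 0.0;
        for (size_t k = 0; k < ncand; ++k) {
            double dq = delta_modularity(e_ij[k], a_i[k], a_j[k]);
            if (dq > best_dq) { best_dq = dq; best = (ptrdiff_t)k; }
        }
        return best;
    }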

[14]
Jason Riedy, David Ediger, David A. Bader, and Henning Meyerhenke. Tracking structure of streaming social networks. 2011 Graph Exploitation Symposium hosted by MIT Lincoln Labs, August 2011. [ bib | .pdf ]
[15]
Jason Riedy, David A. Bader, Henning Meyerhenke, David Ediger, and Timothy Mattson. STING: Spatio-temporal interaction networks and graphs for Intel platforms. Presentation at Intel Corporation, Santa Clara, CA, August 2011. [ bib | .pdf ]
[16]
David Ediger, E. Jason Riedy, David A. Bader, and Henning Meyerhenke. Tracking structure of streaming social networks. In 5th Workshop on Multithreaded Architectures and Applications (MTAAP), May 2011. (10/17 papers accepted, 59% acceptance rate). [ bib ]
Current online social networks are massive and still growing. For example, Facebook has over 500 million active users sharing over 30 billion items per month. The scale within these data streams has outstripped traditional graph analysis methods. Monitoring requires dynamic analysis rather than repeated static analysis. The massive state behind multiple persistent queries requires shared data structures and not problem-specific representations. We present a framework based on the STINGER data structure that can monitor a global property, connected components, on a graph of 16 million vertices at rates of up to 240000 updates per second on a 32-processor Cray XMT. For very large scale-free graphs, our implementation uses novel batching techniques that exploit the scale-free nature of the data and run over three times faster than prior methods. Our framework handles, for the first time, real-world data rates, opening the door to higher-level analytics such as community and anomaly detection.
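
For intuition, the insertion-only part of tracking connected components reduces to disjoint-set union, sketched below in C. This is a hypothetical fragment, not the STINGER interface; deletions, which can split components, and the scale-free batching techniques are exactly what the framework above adds beyond this.

    /* Minimal union-find over vertices 0..n-1 with path halving;
       parent[] must be initialized with parent[v] = v.  Each edge
       insertion (u, v) merges the components of u and v.  Deletions
       require the heavier machinery in the paper and are not handled. */
    static int find_root(int *parent, int v)
    {
        while (parent[v] != v) {
            parent[v] = parent[parent[v]];  /* path halving */
            v = parent[v];
        }
        return v;
    }

    static void insert_edge(int *parent, int u, int v)
    {
        int ru = find_root(parent, u), rv = find_root(parent, v);
        if (ru != rv) parent[ru] = rv;      /* merge components */
    }

    /* Apply a batch of m edge insertions. */
    static void apply_batch(int *parent, const int *src,
                            const int *dst, int m)
    {
        for (int k = 0; k < m; ++k)
            insert_edge(parent, src[k], dst[k]);
    }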

[17]
Jason Riedy. “the storm's coming when the chickens spread out”. In Fiona Robyn and Kaspalita, editors, pay attention: a river of stones, page 77. http://lulu.com, March 2011. [ bib | http ]
[18]
David A. Bader, David Ediger, and E. Jason Riedy. Parallel programming for graph analysis. In 16th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming (PPoPP), San Antonio, TX, February 2011. [ bib | .html ]
An increasingly fast-paced, digital world has produced an ever-growing volume of petabyte-sized datasets. At the same time, terabytes of new, unstructured data arrive daily. As the desire to ask more detailed questions about these massive streams has grown, parallel software and hardware have only recently begun to enable complex analytics in this non-scientific space. In this tutorial, we will discuss the open problems we face in analyzing this "data deluge". We will present algorithms and data structures capable of analyzing spatio-temporal data at massive scale on parallel systems. We will try to understand the difficulties and bottlenecks in parallel graph algorithm design on current systems and will show how multithreaded and hybrid systems can overcome these challenges. We will demonstrate how parallel graph algorithms can be implemented on a variety of architectures using different programming models. The goal of this tutorial is to provide a comprehensive introduction to the field of parallel graph analysis to an audience with a computing background interested in participating in research and/or commercial applications of this field. Moreover, we will cover leading-edge technical and algorithmic developments in the field and discuss open problems and potential solutions.

[19]
Jason Riedy, David A. Bader, Karl Jiang, Pushkar Pande, and Richa Sharma. Detecting communities from given seeds in social networks. Technical Report GT-CSE-11-01, Georgia Institute of Technology, February 2011. [ bib | http ]
Analyzing massive social networks challenges both high-performance computers and human understanding. These massive networks cannot be visualized easily, and their scale makes applying complex analysis methods computationally expensive. We present a region-growing method for finding a smaller, more tractable subgraph, a community, given a few example seed vertices. Unlike existing work, we focus on a small number of seed vertices, from two to a few dozen. We also present the first comparison between five algorithms for expanding a small seed set into a community. Our comparison applies these algorithms to an R-MAT generated graph component with 240 thousand vertices and 32 million edges and evaluates the community size, modularity, Kullback-Leibler divergence, conductance, and clustering coefficient. We find that our new algorithm with a local modularity maximizing heuristic based on Clauset, Newman, and Moore performs very well when the output is limited to 100 or 1000 vertices. When run without a vertex size limit, a heuristic from McCloskey and Bader generates communities containing around 60% of the graph's vertices and having a small conductance and modularity appropriate to the result size. A personalized PageRank algorithm based on Andersen, Lang, and Chung also performs well with respect to our metrics.
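
To make the region-growing idea concrete, here is a generic greedy seed-expansion sketch in C. The scoring (raw count of edges into the current set) is a deliberately simple stand-in and is not the report's local-modularity, McCloskey-Bader, or personalized PageRank heuristic; graph layout and names are illustrative.

    /* Greedy seed expansion over a CSR graph (xoff/xadj): starting
       from a seed set marked in member[], repeatedly absorb the
       non-member vertex with the most edges into the current
       community, up to max_size vertices. */
    static void expand_seed(int nv, const int *xoff, const int *xadj,
                            unsigned char *member, int cur_size,
                            int max_size)
    {
        while (cur_size < max_size) {
            int best = -1, best_score = 0;
            for (int v = 0; v < nv; ++v) {
                if (member[v]) continue;
                int score = 0;
                for (int k = xoff[v]; k < xoff[v + 1]; ++k)
                    if (member[xadj[k]]) ++score;  /* edges into set */
                if (score > best_score) { best_score = score; best = v; }
            }
            if (best < 0) break;   /* no connected candidate remains */
            member[best] = 1;
            ++cur_size;
        }
    }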

[20]
E. Jason Riedy. Making Static Pivoting Scalable and Dependable. PhD thesis, EECS Department, University of California, Berkeley, December 2010. [ bib | .html ]
Solving square linear systems of equations Ax=b is one of the primary workhorses in scientific computing. With asymptotically and practically small amounts of extra calculation and higher precision, we can render solution techniques dependable. We produce a solution with tiny error for almost all systems where we should expect a tiny error, and we correctly flag potential failures. Our method uses a proven technique: iterative refinement. We extend prior work by applying extra precision not only in calculating the residual b - Ayᵢ of an intermediate solution yᵢ but also in carrying that intermediate solution yᵢ. Analysis shows that extra precision in the intermediate solutions lowers the limiting backward error (measuring perturbations in the initial problem) to levels that produce a forward error (measuring perturbations in the solution) not much larger than the precision used to store the result. We also demonstrate that condition estimation is not necessary for determining success, reducing the computation in refinement substantially.

This basic, dependable solver applies to typical dense LU factorization methods using partial pivoting as well as to methods that risk greater failure by choosing pivots for non-numerical reasons. Sparse factorization methods may choose pivots to promote structural sparsity or even choose pivots before factorization to decouple the phases. We show through experiments that solutions using these restrictive pivoting methods still have small error so long as an estimate of factorization quality, the growth factor, does not grow too large. Our refinement algorithm dependably flags such failures. Additionally, we find a better choice of heuristic for sparse static pivoting than the defaults in Li and Demmel's SuperLU package.

Static pivoting in a distributed-memory setting needs an algorithm for choosing pivots that does not rely on fitting the entire matrix into one memory space. We investigate a set of algorithms, Bertsekas's auction algorithms, for choosing a static pivot order via maximum weight perfect bipartite matching. Auction algorithms have a natural mapping to distributed-memory computation through their bidding mechanism. We provide an analysis of the auction algorithm, fitting it comfortably into linear optimization theory and characterizing approximately maximum weight perfect bipartite matches. These approximately maximum weight perfect matches work well as static pivot choices and can be computed much more quickly than the exact maximum weight matching.

Finally, we consider the performance of auction algorithm implementations on a suite of real-world sparse problems. Sequential performance is roughly equivalent to that of existing implementations like Duff and Koster's MC64 but varies widely with different parameter and input settings. The parallel performance is even more wildly unpredictable. Computing approximately maximum weight matchings helps performance somewhat, but we still conclude that the performance is too variable for a black-box solution method.
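
The auction idea at the center of the matching chapters is easy to sketch. Below is one bidding round for a dense instance in C; names and signatures are illustrative, and the thesis concerns sparse, distributed variants with ε-scaling rather than this toy.

    #include <math.h>

    /* One Gauss-Seidel round of Bertsekas-style auction bidding for
       maximum weight bipartite matching on a dense n-by-n weight
       matrix a (row-major).  row_of_col[j] == -1 and col_of_row[i]
       == -1 mark unmatched columns/rows; price[] starts at zero. */
    static void auction_round(int n, const double *a, double *price,
                              int *row_of_col, int *col_of_row,
                              double eps)
    {
        for (int i = 0; i < n; ++i) {
            if (col_of_row[i] >= 0) continue;    /* already matched */
            /* Find the best and second-best net values for row i. */
            int best_j = -1;
            double best = -INFINITY, second = -INFINITY;
            for (int j = 0; j < n; ++j) {
                double net = a[i * n + j] - price[j];
                if (net > best) { second = best; best = net; best_j = j; }
                else if (net > second) second = net;
            }
            /* Bid: raise the price by the bidding increment. */
            price[best_j] += (best - second) + eps;
            if (row_of_col[best_j] >= 0)         /* evict prior owner */
                col_of_row[row_of_col[best_j]] = -1;
            row_of_col[best_j] = i;
            col_of_row[i] = best_j;
        }
    }

Iterating rounds until every row is matched yields a matching within n·ε of the maximum weight; shrinking ε trades more iterations for a better match, and, as the abstract notes, approximate matches already serve well as static pivot choices.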

[21]
David A. Bader, Jonathan Berry, Simon Kahan, Richard Murphy, E. Jason Riedy, and Jeremiah Willcock. Graph 500 benchmark 1 ("search"). Version 1.1, October 2010. [ bib | .html ]
[22]
Report on NSF Workshop on Center Scale Activities Related to Accelerators for Data Intensive Applications. Workshop supported by NSF Grant Number 1051537 in response to the Call for Exploratory Workshop Proposals for Scientific Software Innovation Institutes (S2I2), October 2010. [ bib ]
[23]
Jason Riedy, David Bader, and David Ediger. Applications in social networks. In NSF Workshop on Accelerators for Data-Intensive Applications, October 2010. [ bib | .pdf ]
[24]
David Ediger, Karl Jiang, E. Jason Riedy, David A. Bader, Courtney Corley, Rob Farber, and William N. Reynolds. Massive social network analysis: Mining Twitter for social good. In 39th International Conference on Parallel Processing (ICPP), San Diego, CA, September 2010. (70/225 papers accepted: 31.1% acceptance rate). [ bib | .html ]
[25]
E. Jason Riedy. “here, on the farthest point of the peninsula”. In Dana Martin Guthrie, editor, Read Write Poem NaPoWriMo Anthology, page 86. http://issuu.com, September 2010. [ bib | http ]
[26]
David Ediger, Karl Jiang, E. Jason Riedy, and David A. Bader. Massive streaming data analytics: A case study with clustering coefficients. In 4th Workshop on Multithreaded Architectures and Applications (MTAAP), Atlanta, GA, April 2010. (11/22 papers accepted, 50% acceptance rate). [ bib | .html ]
[27]
E. Jason Riedy. Dependable direct solutions for linear systems using a little extra precision. CSE Seminar at Georgia Institute of Technology, August 2009. [ bib | http ]
Solving a square linear system Ax=b often is considered a black box. It's supposed to "just work," and failures often are blamed on the original data or subtleties of floating-point. Now that we have an abundance of cheap computations, however, we can do much better. A little extra precision in just the right places produces accurate solutions cheaply or demonstrates when problems are too hard to solve without significant cost. This talk will outline the method, iterative refinement with a new twist; the benefits, small backward and forward errors; and the trade-offs and unexpected benefits.

[28]
James W. Demmel, Mark Frederick Hoemmen, Yozo Hida, and E. Jason Riedy. Non-negative diagonals and high performance on low-profile matrices from Householder QR. SIAM Journal on Scientific Computing, 31(4):2832–2841, July 2009. [ bib | DOI ]
Keywords: LAPACK; QR factorization; Householder reflection; floating-point
[29]
James W. Demmel, Yozo Hida, Xiaoye S. Li, and E. Jason Riedy. Extra-precise iterative refinement for overdetermined least squares problems. ACM Transactions on Mathematical Software, 35(4):1–32, February 2009. [ bib | DOI ]
We present the algorithm, error bounds, and numerical results for extra-precise iterative refinement applied to overdetermined linear least squares (LLS) problems. We apply our linear system refinement algorithm to Björck’s augmented linear system formulation of an LLS problem. Our algorithm reduces the forward normwise and componentwise errors to O(ɛ) unless the system is too ill conditioned. In contrast to linear systems, we provide two separate error bounds for the solution x and the residual r. The refinement algorithm requires only limited use of extra precision and adds only O(mn) work to the O(mn²) cost of QR factorization for problems of size m-by-n. The extra precision calculation is facilitated by the new extended-precision BLAS standard in a portable way, and the refinement algorithm will be included in a future release of LAPACK and can be extended to the other types of least squares problems.
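
For reference, Björck's augmented formulation mentioned above embeds the LLS problem min_x ||b − Ax||₂ in a square linear system, so the linear-system refinement algorithm applies directly and yields separate corrections, and hence separate error bounds, for x and r:

    \begin{pmatrix} I & A \\ A^{T} & 0 \end{pmatrix}
    \begin{pmatrix} r \\ x \end{pmatrix}
    =
    \begin{pmatrix} b \\ 0 \end{pmatrix},
    \qquad r = b - Ax, \quad A^{T} r = 0.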

[30]
E. Jason Riedy. Auctions for distributed (and possibly parallel) matchings. Visit to CERFACS courtesy of the Franco-Berkeley Fund, December 2008. [ bib | .pdf ]
[31]
James W. Demmel, Mark Frederick Hoemmen, Yozo Hida, and E. Jason Riedy. Non-negative diagonals and high performance on low-profile matrices from Householder QR. LAPACK Working Note 203, Netlib, May 2008. Also issued as UCB/EECS-2008-76; modified from SISC version. [ bib | .pdf ]
[32]
James W. Demmel, Yozo Hida, Xiaoye S. Li, and E. Jason Riedy. Extra-precise iterative refinement for overdetermined least squares problems. LAPACK Working Note 188, Netlib, May 2007. Also issued as UCB/EECS-2007-77; version accepted for TOMS. [ bib | .pdf ]
We present the algorithm, error bounds, and numerical results for extra-precise iterative refinement applied to overdetermined linear least squares (LLS) problems. We apply our linear system refinement algorithm to Björck’s augmented linear system formulation of an LLS problem. Our algorithm reduces the forward normwise and componentwise errors to O(ɛ) unless the system is too ill conditioned. In contrast to linear systems, we provide two separate error bounds for the solution x and the residual r. The refinement algorithm requires only limited use of extra precision and adds only O(mn) work to the O(mn²) cost of QR factorization for problems of size m-by-n. The extra precision calculation is facilitated by the new extended-precision BLAS standard in a portable way, and the refinement algorithm will be included in a future release of LAPACK and can be extended to the other types of least squares problems.

[33]
James W. Demmel, Yozo Hida, Xiaoye S. Li, E. Jason Riedy, Meghana Vishvanath, and David Vu. Precise solutions for overdetermined least squares problems. Stanford 50 – Eighth Bay Area Scientific Computing Day, March 2007. [ bib | .pdf ]
Linear least squares (LLS) fitting is the most widely used data modeling technique and is included in almost every data analysis system (e.g. spreadsheets). These software systems often give no feedback on the conditioning of the LLS problem or the floating-point calculation errors present in the solution. With limited use of extra precision, we can eliminate these concerns for all but the most ill-conditioned LLS problems. Our algorithm provides either a solution and residual with relatively tiny error or a notice that the LLS problem is too ill-conditioned.

[34]
James W. Demmel, Jack Dongarra, Beresford Parlett, W. Kahan, Ming Gu, David Bindel, Yozo Hida, Xiaoye S. Li, Osni A. Marques, E. Jason Riedy, Christof Vömel, Julien Langou, Piotr Luszczek, Jakub Kurzak, Alfredo Buttari, Julie Langou, and Stanimire Tomov. Prospectus for the next LAPACK and ScaLAPACK libraries. LAPACK Working Note 181, Netlib, February 2007. Also issued as UT-CS-07-592. [ bib | .pdf ]
[35]
Osni A. Marques, E. Jason Riedy, and Christof Vömel. Benefits of IEEE-754 features in modern symmetric tridiagonal eigensolvers. SIAM Journal on Scientific Computing, 28(5):1613–1633, September 2006. [ bib | DOI ]
Bisection is one of the most common methods used to compute the eigenvalues of symmetric tridiagonal matrices. Bisection relies on the Sturm count: for a given shift σ, the number of negative pivots in the factorization T − σI = LDLᵀ equals the number of eigenvalues of T that are smaller than σ. In IEEE-754 arithmetic, the value ∞ permits the computation to continue past a zero pivot, producing a correct Sturm count when T is unreduced. Demmel and Li showed [IEEE Trans. Comput., 43 (1994), pp. 983–992] that using ∞ rather than testing for zero pivots within the loop could significantly improve performance on certain architectures. When eigenvalues are to be computed to high relative accuracy, it is often preferable to work with LDLᵀ factorizations instead of the original tridiagonal T. One important example is the MRRR algorithm. When bisection is applied to the factored matrix, the Sturm count is computed from LDLᵀ, which makes differential stationary and progressive qds algorithms the methods of choice. While it seems trivial to replace T by LDLᵀ, in reality these algorithms are more complicated: in IEEE-754 arithmetic, a zero pivot produces an overflow followed by an invalid exception (NaN, or “Not a Number”) that renders the Sturm count incorrect. We present alternative, safe formulations that are guaranteed to produce the correct result. Benchmarking these algorithms on a variety of platforms shows that the original formulation without tests is always faster provided that no exception occurs. The transforms see speed-ups of up to 2.6× over the careful formulations. Tests on industrial matrices show that encountering exceptions in practice is rare. This leads to the following design: first, compute the Sturm count by the fast but unsafe algorithm. Then, if an exception occurs, recompute the count by a safe, slower alternative. The new Sturm count algorithms improve the speed of bisection by up to 2× on our test matrices. Furthermore, unlike the traditional tiny-pivot substitution, proper use of IEEE-754 features provides a careful formulation that imposes no input range restrictions.
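
The fast, test-free variant can be sketched for the unfactored tridiagonal case; this illustrative C version relies on IEEE-754 semantics exactly as described above (a zero pivot produces ±∞ on the next step, and the following division flushes back to a finite value), while the LDLᵀ-based qds counts need the safe fallback because 0·∞ yields NaN.

    /* Sturm count for a symmetric tridiagonal T with diagonal
       a[0..n-1] and off-diagonal b[0..n-2]: returns the number of
       eigenvalues of T strictly less than sigma, via the inertia of
       T - sigma*I.  No test for zero pivots: in IEEE-754 arithmetic
       a zero d becomes +/-Inf on the next step and b^2 / Inf
       evaluates to 0, so the recurrence keeps producing a correct
       count when T is unreduced. */
    static int sturm_count(int n, const double *a, const double *b,
                           double sigma)
    {
        int count = 0;
        double d = a[0] - sigma;
        if (d < 0.0) ++count;
        for (int i = 1; i < n; ++i) {
            d = (a[i] - sigma) - (b[i - 1] * b[i - 1]) / d;
            if (d < 0.0) ++count;
        }
        return count;
    }

The design described above wraps a check around this fast path: run the unsafe count, and only if an exception occurred (a NaN in the qds variants) recompute with the safe, slower formulation.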

[36]
Jack Dongarra, Julien Langou, and E. Jason Riedy. Sca/LAPACK program style. August 2006. [ bib | .html ]
The purpose of this document is to facilitate contributions to LAPACK and ScaLAPACK by documenting their design and implementation guidelines. The long-term goal is to provide guidelines for both LAPACK and ScaLAPACK. However, the parallel ScaLAPACK code has more open issues, so this document primarily concerns LAPACK.

[37]
James W. Demmel, Yozo Hida, W. Kahan, Xiaoye S. Li, Sonil Mukherjee, and E. Jason Riedy. Error bounds from extra-precise iterative refinement. ACM Transactions on Mathematical Software, 32(2):325–351, June 2006. [ bib | DOI ]
We present the design and testing of an algorithm for iterative refinement of the solution of linear equations where the residual is computed with extra precision. This algorithm was originally proposed in 1948 and analyzed in the 1960s as a means to compute very accurate solutions to all but the most ill-conditioned linear systems. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) There was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard has essentially removed the first obstacle. To overcome the second obstacle, we show how the application of iterative refinement can be used to compute an error bound in any norm at small cost and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound.
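
Schematically, the refinement loop computes the residual in wider precision, solves for a correction with the existing factors, and updates the solution. The C sketch below uses long double as a stand-in for the extended-precision BLAS and a hypothetical solve_with_factors() callback; it omits the error-bound bookkeeping that is the paper's actual contribution.

    #include <math.h>
    #include <stdlib.h>

    /* Iterative refinement sketch for a dense n-by-n system Ax = b.
       The residual is accumulated in long double (a stand-in for the
       extended-precision BLAS); solve_with_factors() is assumed to
       overwrite its argument with A \ rhs using precomputed LU
       factors.  The paper's normwise and componentwise error bounds
       are omitted here. */
    void refine(int n, const double *A, const double *b, double *x,
                void (*solve_with_factors)(int n, double *rhs),
                int max_iter)
    {
        double *r = malloc((size_t)n * sizeof *r);
        if (!r) return;
        for (int iter = 0; iter < max_iter; ++iter) {
            for (int i = 0; i < n; ++i) {       /* r = b - A*x, widely */
                long double acc = b[i];
                for (int j = 0; j < n; ++j)
                    acc -= (long double)A[i * n + j] * x[j];
                r[i] = (double)acc;
            }
            solve_with_factors(n, r);           /* r <- dx = A \ r */
            double dxnorm = 0.0;
            for (int i = 0; i < n; ++i) {
                x[i] += r[i];
                if (fabs(r[i]) > dxnorm) dxnorm = fabs(r[i]);
            }
            if (dxnorm == 0.0) break;  /* simplified stopping test */
        }
        free(r);
    }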

[38]
James W. Demmel, Jack Dongarra, Beresford Parlett, W. Kahan, Ming Gu, David Bindel, Yozo Hida, Xiaoye S. Li, Osni A. Marques, E. Jason Riedy, Christof Vömel, Julien Langou, Piotr Luszczek, Jakub Kurzak, Alfredo Buttari, Julie Langou, and Stanimire Tomov. Prospectus for the next LAPACK and ScaLAPACK libraries. In PARA'06: State-of-the-Art in Scientific and Parallel Computing, Umeå, Sweden, June 2006. High Performance Computing Center North (HPC2N) and the Department of Computing Science, Umeå University, Springer. [ bib | .pdf ]
LAPACK and ScaLAPACK are widely used software libraries for numerical linear algebra. There have been over 68M web hits at www.netlib.org for the associated libraries LAPACK, ScaLAPACK, CLAPACK and LAPACK95. LAPACK and ScaLAPACK are used to solve leading edge science problems and they have been adopted by many vendors and software providers as the basis for their own libraries, including AMD, Apple (under Mac OS X), Cray, Fujitsu, HP, IBM, Intel, NEC, SGI, several Linux distributions (such as Debian), NAG, IMSL, the MathWorks (producers of MATLAB), Interactive Supercomputing, and PGI. Future improvements in these libraries will therefore have a large impact on users.

[39]
E. Jason Riedy. Making static pivoting dependable. Seventh Bay Area Scientific Computing Day, March 2006. [ bib | .pdf ]
For sparse LU factorization, dynamic pivoting tightly couples symbolic and numerical computation. Dynamic structural changes limit parallel scalability. Demmel and Li use static pivoting in distributed SuperLU for performance, but intentionally perturbing the input may lead silently to erroneous results. Are there experimentally stable static pivoting heuristics that lead to a dependable direct solver? The answer is currently a qualified yes. Current heuristics fail on a few systems, but all failures are detectable.

[40]
E. Jason Riedy, Yozo Hida, and James W. Demmel. The future of LAPACK and ScaLAPACK. Robert C. Thompson Matrix Meeting, November 2005. [ bib | .pdf ]
We are planning new releases of the widely used LAPACK and ScaLAPACK numerical linear algebra libraries. Based on an on-going user survey (http://www.netlib.org/lapack-dev) and research by many people, we are proposing the following improvements: faster algorithms (including better numerical methods, memory hierarchy optimizations, parallelism, and automatic performance tuning to accommodate new architectures), more accurate algorithms (including better numerical methods and use of extra precision), expanded functionality (including updating and downdating, new eigenproblems, etc., and putting more of LAPACK into ScaLAPACK), and improved ease of use (friendlier interfaces in multiple languages). To accomplish these goals we are also relying on better software engineering techniques and contributions from collaborators at many institutions. This is joint work with Jack Dongarra.

[41]
Osni A. Marques, E. Jason Riedy, and Christof Vömel. Benefits of IEEE-754 features in modern symmetric tridiagonal eigensolvers. LAPACK Working Note 172, Netlib, September 2005. Also issued as UCB//CSD-05-1414; expanded from SISC version. [ bib | .pdf ]
[42]
David Hough, Bill Hay, Jeff Kidder, E. Jason Riedy, Guy L. Steele Jr., and Jim Thomas. Arithmetic interactions: From hardware to applications. In 17th IEEE Symposium on Computer Arithmetic (ARITH'05), June 2005. See related presentation. [ bib | DOI ]
The entire process of creating and executing applications that solve interesting problems with acceptable cost and accuracy involves a complex interaction among hardware, system software, programming environments, mathematical software libraries, and applications software, all mediated by standards for arithmetic, operating systems, and programming environments. This panel will discuss various issues arising among these various contending points of view, sometimes from the point of view of issues raised during the current IEEE 754R standards revision effort.

[43]
E. Jason Riedy. Modern language tools and 754R. ARITH'05, June 2005. [ bib | .pdf ]
[44]
James W. Demmel, Yozo Hida, W. Kahan, Xiaoye S. Li, Sonil Mukherjee, and E. Jason Riedy. Error bounds from extra-precise iterative refinement. LAPACK Working Note 165, Netlib, February 2005. Also issued as UCB//CSD-05-1414, UT-CS-05-547, and LBNL-56965; expanded from TOMS version. [ bib | .pdf ]
[45]
E. Jason Riedy. Parallel combinatorial computing and sparse matrices. SIAM Conference on Computational Science and Engineering, February 2005. [ bib | .pdf ]
[46]
E. Jason Riedy. Parallel weighted bipartite matching and applications. SIAM Parallel Processing for Scientific Computing, February 2004. [ bib | .pdf ]
[47]
E. Jason Riedy. Sparse data structures for weighted bipartite matching. SIAM Workshop on Combinatorial Scientific Computing, February 2004. [ bib | .pdf ]
[48]
E. Jason Riedy. Practical alternatives for parallel pivoting. SIAM Annual Meeting, June 2003. [ bib | .pdf ]
[49]
E. Jason Riedy. Parallel bipartite matching for sparse matrix computations. SIAM Conference on Computational Science and Engineering, February 2003. [ bib | .pdf ]
[50]
David Bindel and E. Jason Riedy. Exception handling interfaces, implementations, and evaluation. IEEE-754r revision meeting, August 2002. [ bib | .pdf ]
[51]
E. Jason Riedy. Parallel bipartite matching for sparse matrix computation. Third Bay Area Scientific Computing Day, March 2002. [ bib ]
[52]
E. Jason Riedy. Type system support for floating-point computation. May 2001. [ bib | .pdf ]
Floating-point arithmetic is often seen as untrustworthy. We show how manipulating precisions according to the following rules of thumb enhances the reliability of and removes surprises from calculations: store data narrowly, compute intermediates widely, and derive properties widely. Further, we describe a typing system for floating point that both supports and is supported by these rules. A single type is established for all intermediate computations. The type describes a precision at least as wide as all inputs to and results from the computation. Picking a single type provides benefits to users, compilers, and interpreters. The type system also extends cleanly to encompass intervals and higher precisions.
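
The first two rules are easy to illustrate; here is a small C example assuming float data: the accumulation runs in a wider type, and only the final store narrows.

    /* Store data narrowly, compute intermediates widely: float inputs
       and output, but the dot product accumulates in double, so each
       partial sum rounds at the wider precision. */
    float dotf(int n, const float *x, const float *y)
    {
        double acc = 0.0;                  /* wide intermediate */
        for (int i = 0; i < n; ++i)
            acc += (double)x[i] * (double)y[i];
        return (float)acc;                 /* narrow only on store */
    }

The third rule, derive properties widely, would similarly compute any norms or comparisons on the wide accumulator before narrowing.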

[53]
E. Jason Riedy and Robert Szewczyk. Power and control in networked sensors. Cited, May 2000. [ bib | .pdf ]
The fundamental constraint on a networked sensor is its energy consumption, since it may be either impossible or infeasible to replace its energy source. We analyze the power dissipation implications of implementing the network sensor with either a central processor switching between I/O devices or a family of processors, each dedicated to a single device. We present energy measurements of the current generation of networked sensors and develop an abstract description of the tradeoffs between the two designs.

[54]
E. Jason Riedy and Rich Vuduc. Microbenchmarking the Tera MTA. Cited, presentation version available, May 1999. [ bib | .pdf ]
The Tera Multithreaded Architecture, or MTA, addresses scalable shared memory system design with a different approach: it tolerates latency by providing fast access to multiple threads of execution. The MTA employs a number of radical design ideas: creation of hardware threads (streams) with frequent context switching; full-empty bits for each memory word; a flat memory hierarchy; and deep pipelines. Recent evaluations of the MTA have taken a top-down approach: port applications and application benchmarks, and compare the absolute performance with conventional systems. While useful, these studies do not reveal the effect of the Tera MTA's unique hardware features on an application. We present a bottom-up approach to the evaluation of the MTA via a suite of microbenchmarks to examine in detail the underlying hardware mechanisms and the cost of runtime system support for multithreading. In particular, we measure memory, network, and instruction latencies; memory bandwidth; the cost of low-level synchronization via full-empty bits; overhead for stream management; and the effects of software pipelining. These data should provide a foundation for performance modeling on the MTA. We also present results for list ranking on the MTA, an application which has traditionally been difficult to scale on conventional parallel systems.
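
As a generic illustration of the microbenchmark style, a memory latency measurement is typically a dependent pointer chase, as in the C sketch below; the MTA-specific measurements (streams, full-empty bits) require that machine's primitives and are not reproduced here.

    #include <stddef.h>
    #include <stdio.h>
    #include <time.h>

    /* Classic pointer-chasing loop for average load latency: each load
       depends on the previous one, defeating pipelining and prefetch.
       ring[] must hold a permutation cycle: ring[i] is the index of
       the next element to visit. */
    double chase_ns_per_load(const size_t *ring, size_t iters)
    {
        struct timespec t0, t1;
        size_t p = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < iters; ++i)
            p = ring[p];                   /* dependent load */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        fprintf(stderr, "# sink %zu\n", p);  /* keep the loop live */
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        return ns / (double)iters;
    }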

[55]
Joseph N. Wilson, E. Jason Riedy, Gerhard X. Ritter, and Hongchi Shi. An Image Algebra based SIMD image processing environment. In C. W. Chen and Y. Q. Zhang, editors, Visual Information Representation, Communication, and Image Processing, pages 523–542. Marcel Dekker, New York, 1999. [ bib | .pdf ]
SIMD parallel computers have been employed for image related applications since their inception. They have been leading the way in improving processing speed for those applications. However, current parallel programming technologies have not kept pace with the performance growth and cost decline of parallel hardware. A highly usable parallel software development environment is needed. This chapter presents a computing environment that integrates a SIMD mesh architecture with image algebra for high-performance image processing applications. The environment describes parallel programs through a machine-independent, retargetable image algebra object library that supports SIMD execution on the Lockheed Martin PAL-I parallel computer. Program performance on this machine is improved through on-the-fly execution analysis and scheduling. We describe the relevant elements of the system structure, outline the scheme for execution analysis, and provide examples of the current cost model and scheduling system.

[56]
Joseph N. Wilson and E. Jason Riedy. Efficient SIMD evaluation of image processing programs. In Hongchi Shi and Patrick C. Coffield, editors, Parallel and Distributed Methods for Image Processing, volume 3166, pages 199–210, San Diego, CA, July 1997. SPIE. [ bib | DOI | .pdf ]
SIMD parallel systems have been employed for image processing and computer vision applications since their inception. This paper describes a system in which parallel programs are implemented using a machine-independent, retargetable object library that provides SIMD execution on the Lockheed Martin PAL-I SIMD parallel processor. Programs' performance on this machine is improved through on-the-fly execution analysis and scheduling. We describe the relevant elements of the system structure, the general scheme for execution analysis, and the current cost model for scheduling.


This file was generated by bibtex2html 1.97.