Geoff Davis — Page 2
Recent reports have called for changes to the training of postdoctoral scientists and engineers. Using data from a large-scale survey, we tested the hypothesis that the advocated practices make a measurable difference in the experiences and productivity of postdoctoral researchers. We found that structured oversight and professional development opportunities are associated with a broad range of positive outcomes; compensation-related measures, in contrast, have few quantifiable benefits. Postdocs who wrote research/career plans at the start of their appointments were 23% more productive than those who did not. Teaching experiences, exposure to non-academic careers, and training in proposal writing and project management were also associated with multiple positive outcomes.
We examine the central issues of invertibility, stability, artifacts, and frequency-domain characteristics in the construction of non-linear analogs of the wavelet transform. The lifting framework for wavelet construction motivates our analysis and provides new insight into the problem. We describe a new type of non-linearity for use in constructing non-linear transforms: a set of linear predictors that are chosen adaptively using a non-linear selection function. We also describe how a previously studied family of non-linear filter banks can be extended through the use of prediction functions operating on a causal neighborhood. We present preliminary results for a synthetic test image.
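The idea of selecting among linear predictors with a non-linear rule can be made concrete in a lifting predict step. The sketch below is illustrative only and is not the construction from the paper: it chooses, per odd sample, the candidate predictor with the smallest error, a selection that depends on the odd sample itself and so must be conveyed to the decoder; the constructions in the paper instead use prediction functions operating on a causal neighborhood, so no side information is needed.

```python
import numpy as np

def adaptive_predict_detail(x):
    """Toy non-linear lifting predict step: for each odd sample, choose
    among a small set of linear predictors built from the even samples,
    using a non-linear (minimum-error) selection function.
    Illustrative only -- not the scheme from the paper."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    left = even                               # predict odd from left even neighbor
    right = np.roll(even, -1)                 # predict from right even neighbor
    avg = (left + right) / 2                  # average of the two neighbors
    preds = np.stack([left, right, avg])
    errs = np.abs(preds - odd)                # candidate detail magnitudes
    choice = np.argmin(errs, axis=0)          # non-linear selection function
    detail = odd - preds[choice, np.arange(odd.size)]
    return detail, choice                     # choice is needed to invert
```

On a linear ramp the average predictor is exact in the interior, so the interior detail coefficients vanish; only the boundary sample, where `np.roll` wraps around, produces a nonzero detail.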
This paper presents an overview of wavelet-based image coding. We develop the basics of image coding with a discussion of vector quantization. We motivate the use of transform coding in practical settings, and describe the properties of various decorrelating transforms. We motivate the use of the wavelet transform in coding using rate-distortion considerations as well as approximation-theoretic considerations. Finally, we give an overview of current coders in the literature.
Lifting provides a simple method for constructing biorthogonal wavelet bases. We generalize lifting to the case of multiwavelets. In so doing we provide useful intuition about the additional degrees of freedom available in constructing multiwavelets. We show that any compactly supported multiwavelet transform can be decomposed into a sequence of lifting steps. Finally, we compare lifting to the two-scale similarity transform construction method.
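The scalar Haar case makes the lifting factorization concrete. The following minimal sketch (the simplest instance of lifting, not the multiwavelet construction itself) shows the split/predict/update structure and why transforms built this way are exactly invertible: each step is undone by its mirror image.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet transform via lifting:
    split into even/odd, predict odd from even, update even."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict step: even samples predict odd samples
    s = even + d / 2        # update step: s becomes the pairwise average
    return s, d

def haar_lift_inverse(s, d):
    """Invert by undoing the lifting steps in reverse order."""
    even = s - d / 2        # undo update
    odd = d + even          # undo predict
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Because inversion simply reverses each step, perfect reconstruction holds regardless of what predict and update operators are used; this is the property that makes lifting a convenient route to biorthogonal (and multi-) wavelet bases.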
Young mathematicians have been facing dismal job prospects throughout the nineties. What are the effects of the current labor market problems on the mathematics community as a whole? What forces have contributed to these problems? What are effective remedies? We address each of these questions, providing partial answers when data exists, and pointing out the key gaps in our current understanding. We conclude by describing some specific steps that the mathematical societies can take to improve the current labor market situation for mathematics Ph.D.s.
A number of recent embedded transform coders, including Shapiro's EZW scheme, Said and Pearlman's SPIHT scheme, and Xiong et al.'s EZDCT scheme, employ a common algorithm called significance tree quantization (STQ). Each of these coders has been selected from a large family of significance tree quantizers based on empirical work and a priori knowledge about transform coefficient behavior. We describe an algorithm for selecting a particular form of STQ that is optimized for a given class of images. We apply our optimization procedure to the task of quantizing 8x8 DCT blocks. Our algorithm yields a fully embedded, low-complexity coder with performance from 0.6 to 1.9 dB better than baseline JPEG for standard test images.
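The significance test shared by this family of coders is easy to state. The sketch below shows the generic subtree significance predicate underlying zerotree/STQ-style coders (a subtree can be coded with a single symbol when no coefficient in it exceeds the current threshold); it is a textbook illustration of the shared primitive, not the optimized coder from the paper.

```python
def subtree_significant(tree, node, threshold):
    """Generic significance test used by zerotree/STQ-style coders:
    a subtree is significant at a threshold if any coefficient in it
    has magnitude >= threshold. tree maps node -> (coeff, children)."""
    coeff, children = tree[node]
    if abs(coeff) >= threshold:
        return True
    return any(subtree_significant(tree, c, threshold) for c in children)

# Hypothetical three-node coefficient tree for illustration.
tree = {"root": (0, ["a", "b"]), "a": (5, []), "b": (1, [])}
```

When the predicate is false, an entire subtree collapses to one "insignificant" symbol at that bitplane, which is where these coders get their efficiency.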
We describe a joint source/channel allocation scheme for transmitting images lossily over a block erasure channel such as the Internet. The goal is to reduce image transmission latency. Our subband-level and bitplane-level optimization procedures give rise to an embedded channel coding strategy. Source and channel coding bits are allocated in order to minimize an expected distortion measure. More perceptually important low frequency subbands of images are shielded heavily using channel codes, and higher frequencies are shielded lightly. The result is a more efficient use of channel codes that can reduce channel coding overhead. The reduction is most pronounced on bursty channels for which the uniform application of channel codes is particularly expensive. We derive optimal source/channel coding tradeoffs for our block erasure channel. We describe an implementation of this algorithm and compare its performance on the Internet to lossless TCP/IP transmission of the same images. In our experiments our lossy image transmission scheme delivered images significantly faster than TCP/IP during periods of heavy traffic.
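The shape of the expected-distortion objective can be sketched as follows. This is an illustrative model under simplifying assumptions (independent packet losses, an idealized (n, k) erasure code, and an embedded bitstream in which a layer is useful only if all earlier layers decode); the paper's optimization and channel model are not reproduced here, and the layer parameters are hypothetical.

```python
from math import comb

def decode_prob(n, k, p_loss):
    """Probability an (n, k) erasure code decodes: at least k of n
    packets arrive, each lost independently with probability p_loss."""
    return sum(comb(n, r) * (1 - p_loss) ** r * p_loss ** (n - r)
               for r in range(k, n + 1))

def expected_reduction(layers, p_loss):
    """layers: list of (n, k, distortion_reduction), ordered from most
    to least perceptually important. In an embedded stream, a layer
    contributes only if it and every earlier layer decode."""
    total, p_prefix = 0.0, 1.0
    for n, k, gain in layers:
        p_prefix *= decode_prob(n, k, p_loss)  # all layers so far decode
        total += p_prefix * gain
    return total
```

Giving earlier (low-frequency) layers more parity raises `p_prefix` exactly where the distortion stakes are highest, which is the intuition behind protecting important subbands heavily and later bitplanes lightly.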
Recipient of the 2000 IEEE Leon K. Kirchmayer Prize Paper Award
Why does fractal image compression work? What is the implicit image model underlying fractal block coding? How can we characterize the types of images for which fractal block coders will work well? These are the central issues we address in the papers below. We introduce a new wavelet-based analytical framework for block-based fractal compression schemes. Within this framework we are able to draw upon insights from the well-established transform coder paradigm in order to address the issue of why fractal block coders work. We show that fractal block coders are a form of Haar wavelet subtree quantization scheme. We generalize fractal block coders to smooth wavelet subtree coders and in so doing obtain a simpler fractal-based coding scheme that gives performance comparable to the best fractal schemes in the literature. Our wavelet framework gives new insight into the convergence properties of fractal block coders, and leads us to develop an unconditionally convergent scheme with a fast decoding algorithm. Our experiments with this new algorithm indicate that fractal coders derive much of their effectiveness from their ability to efficiently represent wavelet zerotrees. Finally, our framework enables us to examine some of the fundamental limitations of current fractal compression schemes.
We examine the problem of decomposing functions over redundant dictionaries of waveforms. We prove that the problem of finding optimal function expansions over a redundant dictionary is an NP-hard problem. We analyze a greedy algorithm for decomposing functions, the matching pursuit algorithm, and we introduce an orthogonalized version of the pursuit. We prove that a particular renormalized matching pursuit in R^3 is a chaotic map, and we prove that this map possesses an ergodic invariant measure. We also present evidence that this chaotic behavior exists for dictionaries in higher dimensions. We prove that the orthogonalized algorithm, in contrast, converges in N steps for dictionaries in R^N. We compare the performance and complexity of matching pursuit and orthogonal pursuit decompositions and find that for the coherent portion of the function decomposition, the orthogonal pursuit converges slightly faster than the non-orthogonal pursuit, but requires an order of magnitude more operations to compute.
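The greedy step at the heart of matching pursuit is compact enough to state in full. The sketch below assumes the dictionary is given as an array whose rows are unit-norm atoms; it shows the basic (non-orthogonalized) pursuit only.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy matching pursuit: at each step, select the dictionary atom
    with the largest inner product against the current residual and
    subtract its projection. Rows of `dictionary` are unit-norm atoms."""
    residual = signal.astype(float).copy()
    coeffs = []
    for _ in range(n_iter):
        inner = dictionary @ residual           # correlation with each atom
        k = int(np.argmax(np.abs(inner)))       # best-matching atom
        coeffs.append((k, inner[k]))
        residual = residual - inner[k] * dictionary[k]
    return coeffs, residual
```

For an orthonormal dictionary in R^N the pursuit terminates exactly in N steps; the interesting behavior analyzed in the papers (chaotic renormalized dynamics, slow convergence of the dilute phase) arises for redundant dictionaries, where successive residuals are not orthogonal to previously selected atoms.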