Analysis of Time Series and Spatial Data (Geophysics 505/Math 587)

Richard Aster
Geophysics Program
Department of Earth and Environmental Science

Brian Borchers
Department of Mathematics

New Mexico Institute of Mining and Technology
Socorro, New Mexico

Class will meet during Fall 2013 in Wier 208, 11:00-11:50 MWF

Class Overview and Schedule

Fall 2013 Overview Information and Syllabus

To send email to the instructors and class: click here


Introduction to Linear Systems, Part 1: The Time Domain (2013)

In-class notes, Part 1 (2013)

Introduction to Linear Systems, Part 2: The Frequency Domain (2013)

Energy and Power Spectra (2013)

Sampled Time Series (2013)

Digital Filtering (2013)

Deconvolution (2013)

Multidimensional Filtering

Random Processes

Wiener Filtering (2013)

Kalman Filtering

Supplementary Notes and Materials

Matlab Primer (pdf)

A Complex Number Primer

A Convolution Animation (c/o Gene DuVall) (htm/swf)

The Fourier Transform and Convolution

A Seismometer Java Applet

Poles and Zeros

Contour Integration

The Discrete Approximation of a Convolution

Plotting Spectra Using Decibels

A Review of Probability Concepts

Example and Demonstration MATLAB Programs

An example MATLAB convolution program

An example MATLAB program to plot a complex transfer function

MATLAB power spectral density program demonstrating concepts using a white noise process

An example MATLAB program that calculates spectral leakage from a variety of windows

An example MATLAB deconvolution (spectral division) program with Tikhonov regularization

An example MATLAB Program that does basic FIR low-pass filter calculations and plots

An example MATLAB program that filters an audio clip by direct manipulation of the FFT

An example MATLAB program that filters an audio clip by various FIR filters

The sample audio clip for the above.

MATLAB codes for identifying and fitting ARIMA models to data, in .tar and .zip format.

MATLAB autocorrelation and periodogram and ARMA prediction examples (zip format).

MATLAB Kalman filtering examples (zip format).

2013 Exercises

Homework 1 (due 9/4/13)

Homework 2 (due 9/25/13)

xtruedisp.mat file for Homework 2

mysteryseries.mat file for Homework 2

Homework 3 (due 10/14/13)

2013 Student Presentations

Ashley Hutton
Seismic Migration by the Fourier Transform Method

Kyle Smith
The Huang-Hilbert Transform

Steve Bernsen
PSD PDFs and Seismic Instrument Siting

Michael Baker
Seismic Receiver Functions

Kyle Murray
Assessing Noise Content in GPS Time Series from the Rio Grande Rift

Matt Perry
The Fractional Fourier Transform and its Applications

Rosalyn Wang
Multistation Deconvolution and Seismic Source Parameters

Stas Edel
Fast Fourier Transforms in Pulsar Data Analysis

Rebecca Clemens
Structural Dynamic Characterization of Spacecraft Payloads

Jose Martinez
Comparison of Lightning Discharge-Measured Locations Using GPS with Calculated Discharge Locations Using the Electromagnetic Time-Reversal Technique (EMTR)

Julia Tilles
Fourier Techniques for Point-Source Triangulation: An Example Using Lightning RF Time Series

2011 Student Presentations

Holly Rotman
Postglacial Rebound: Earth Response to Ice Sheet Retreat

Viviana Rosero
Fourier transforms in radio astronomy

Emily Morton
Event Detection in the 2009 Socorro Earthquake Swarm Using the Subspace Detector Method

Gary Quezada
Prediction of Trading Volumes

Ana Juracic
Usage of an Ensemble Kalman Filter in Tropical Cyclones Prediction

Katyliz Anderson
Multitaper Spectral Analysis

Rebecca L. Johnson
Energy and Power Spectra of Thunder in the Magdalena Mountains, Central New Mexico

Keith Morris
The VLA Atmospheric Phase Interferometer (API)

Nicole D. McMahon
Towards Automatization of Identification of Tremor in the Erebus Seismic Network, Antarctica

Todd Umstot
Cross-spectral analysis of hydrologic data

Dongxiao Zhang
Musical Processing

Nicolas George
GPS analysis of surface deformation of the Socorro magma body:  Continued inflation or thermal expansion of host rock?

Robert E. Anthony
Measuring active volcanic deformation from tiltmeter data on Mount Erebus, Antarctica

2010 Student Presentations

Aída Quezada-Reyes

The Semblance Method and its Applications in Volcano Monitoring

Semblance is a widely used measure of the coherency of multichannel data. The technique is based on a cross-correlation function that quantifies the similarity among waveforms of a signal recorded at different receivers of a seismic network. In volcanoes, semblance has been used to determine the coherence of very-long-period (VLP) signals, related to perturbations in the flow of gas or magma within volcanic conduits, and to locate their sources. It has also been used in infrasound surveillance to discriminate sound sources originating from eruptive events from noise. In this paper, I will present a broad view of the semblance method and some applications that show its importance.
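The course codes are in MATLAB, but the semblance computation described above can be sketched in a few lines of Python/NumPy (the channel count, waveform, and noise level below are invented for illustration):

```python
import numpy as np

def semblance(traces):
    """Semblance of an (n_channels, n_samples) array: energy of the
    stacked trace divided by n_channels times the total energy.
    Equals 1 for identical traces and is near 1/n_channels for
    mutually incoherent ones."""
    stack = traces.sum(axis=0)
    return np.sum(stack ** 2) / (traces.shape[0] * np.sum(traces ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
signal = np.sin(2 * np.pi * 5 * t)            # common waveform on all channels

coherent = np.tile(signal, (4, 1)) + 0.1 * rng.standard_normal((4, 500))
incoherent = rng.standard_normal((4, 500))

print(semblance(coherent))    # close to 1
print(semblance(incoherent))  # near 1/4
```

In practice the statistic is computed in short sliding windows over trial source locations or slownesses, rather than over whole traces as here.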

Luke Schmidt

Recovery of Amplitude and Closure Phase from Two Dimensional Optical Interferograms

Image reconstruction from a sparsely sampled aperture has a long history in radio astronomy. With recent advances in materials, computers, and optical sensors a new generation of optical and infrared interferometers are planned or in construction. In order to reconstruct an image from the resulting interferogram, the power spectra and closure phase must be found. This paper will give a theoretical background to this process and demonstrate the process with sample data.

Travis Barnette

A Method of Business Cycle Extraction

Various economic metrics and indices exhibit periodicity, commonly referred to as business cycles. My aim is to summarize and explore methods of frequency-domain bandpass filtering (using DFTs) to extract a known periodicity from these signals. The first step is to render a given set of economic data stationary (as best as possible). I will then use DFT methods to filter in the frequency domain and deal with any 'leakage' that results from shortcomings of the de-trending process. Several sets of simulated economic data are available for this method (and will be used), but with real, practical data an apparent Gibbs effect may cause complications [1]; I will explore just what that is and how it can be tackled. This subject interests me because: 1) I have never looked at economic data in the frequency domain, and 2) it is highly debated [2] in academia whether business cycles actually have the periods proposed (6-8 years for Juglar/business cycles, 3-4 years for Kitchin cycles), or are cycles at all rather than mere fluctuations. Yet recent research concerning the 2008-2009 economic crisis suggests that there is "a presence of business (Juglar) cycles in the world GDP dynamics at an acceptable level of statistical significance." [3]

[1] Hinich, Melvin J., Foster, John, and Wild, Philip (2008). "Discrete Fourier Transform Filters as Business Cycle Extraction Tools: An Investigation of Cycle Extraction Properties and Applicability of 'Gibbs' Effect." School of Economics Discussion Paper No. 357, The University of Queensland, Australia, March 2008.

[2] Summers, Lawrence H. (1986). "Some Skeptical Observations on Real Business Cycle Theory". Federal Reserve Bank of Minneapolis Quarterly Review 10 (Fall): 23–27.

[3] Korotayev, Andrey V., & Tsirel, Sergey V. A Spectral Analysis of World GDP Dynamics: Kondratieff Waves, Kuznets Swings, Juglar and Kitchin Cycles in Global Economic Development, and the 2008–2009 Economic Crisis. Structure and Dynamics. 2010. Vol.4. #1. P.3-57.
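The DFT band-pass extraction described in the abstract above can be sketched in Python/NumPy (a toy quarterly series with an invented trend, cycle period, and noise level; the sharp bin cutoffs are precisely what gives rise to the Gibbs-effect complications discussed):

```python
import numpy as np

def dft_bandpass(x, dt, f_lo, f_hi):
    """Ideal band-pass: zero every DFT bin whose frequency lies outside
    [f_lo, f_hi], then invert back to the time domain."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), dt)
    X[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

# Toy "economic series": linear trend + 7-year cycle + noise, quarterly samples
rng = np.random.default_rng(1)
t = np.arange(200) * 0.25                                  # 50 years of data
x = 0.05 * t + np.sin(2 * np.pi * t / 7.0) + 0.3 * rng.standard_normal(200)

x_detrended = x - np.polyval(np.polyfit(t, x, 1), t)       # crude de-trending
cycle = dft_bandpass(x_detrended, 0.25, 1 / 9.0, 1 / 5.0)  # keep 5-9 yr periods
```

Because the cycle does not complete an integer number of periods in the record, some of its energy leaks outside the retained band, which is the kind of artifact the abstract proposes to study.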

T.M. McCracken

Fringe Modulation for an MROI Beam Combiner

The Magdalena Ridge Observatory Interferometer (MROI) is a ten-element interferometer that will operate at optical and near-infrared wavelengths. To remove atmospheric effects, a fringe-tracking beam combiner that relays information to the delay lines will be employed. In tracking the fringes, it is desired to modulate the beams between combination partners of the array. This will introduce a known optical path difference (OPD) that can then be corrected for by the delay lines. Modulation in the beam combiner will be performed via modulators introducing an OPD in increments of λ/4 into the beam. Knowledge of the path difference introduced needs to be accurate to 1%. To achieve this accuracy, the modulators are characterized and the desired step waveform optimized through Fourier analysis. Of particular interest is the number of Fourier components needed to achieve the desired waveform and how these components will drift over time. Techniques to compensate for the departure from the waveform will be investigated.

Laura Jones

LIDAR Deconvolution

During the 2004 Antarctic field season, ground-based light detection and ranging (LiDAR) was used to observe cm-scale changes in the Mount Erebus lava lake. LiDAR remote sensing systems emit strong laser pulses and gather the reflected pulses in order to determine the distance and position of the reflecting material (calculated from the flight time of the photons). Consequently, the distance resolution is limited by the time resolution of the terrestrial LiDAR system (TLS). In addition, if the response function of the TLS is longer than the time resolution interval of the detector, the measured signal is smeared and the effective accuracy (i.e., distance resolution) decreases. In theory this smeared signal could be corrected by deconvolution; in practice, however, LiDAR signals are corrupted by noise, which is amplified during the deconvolution process. To avoid this, deconvolution of the LiDAR signal must take place without any filtering in the frequency domain. Three methods will be tried on the lava lake LiDAR data.

In the end, the hope is not only to better analyze the reflected signal but also to potentially "analyze" the noise, to understand how much the volcanic plume is affecting the signal.

Harsdorf, S. and Reuter, R. (2000). Stable Deconvolution of Noisy Lidar Signals. Proceedings of EARSeL- SIG-Workshop LIDAR, Dresden/FRG, June 16-17, 2000.

Raimondo, M. and Stewart, M. (2007). The WaveD Transform in R: Performs Fast Translation-Invariant Wavelet Deconvolution, Journal of Statistical Software, 21(2).

Bahrampour, A.R. and Askari, A.A. (2006). Fourier-Wavelet Regularized Deconvolution (ForWaRD) for Lidar Systems based on TEA-CO2 Laser, Optics Communications, 257.

Shoba Maraj

Effect of causal and acausal filters on seismic data

Filtering of seismic data is an important processing tool used to reduce or remove unwanted noise that occurs during data collection. Filtering corresponds to convolution of the chosen filter with the input time series (equivalently, multiplication in the frequency domain). Filters can be bandpass, bandreject, highpass, or lowpass. Bandpass filters are the most commonly used for seismic data processing, as they can remove both high-frequency ambient noise and low-frequency ground roll. One of the most common bandpass filters is the Butterworth filter, as its frequency response is almost flat in the passband, with a slow rolloff. Causal filters have outputs that depend only on past and present inputs, while acausal filter outputs are also affected by future inputs. Acausal Butterworth filtering is achieved by running a causal filter over the input time series both forward and backward. There are several advantages and disadvantages to both causal and acausal filters. As expected, causal filters will not affect the data before the signal onset, but they will have a large effect on the phase spectrum of the data. The paper and presentation will look at causal and acausal filters and their effects on seismic data.

D.M. Boore and S. Akkar, Effect of causal and acausal filters on elastic and inelastic response spectra, Earthquake Eng Struct Dyn 32 (2003), pp. 1729–1748.

Chakraborty, A., and Okaya, D., 1995, Frequency-time decomposition of seismic data using wavelet based methods: Geophysics, 60, 1906-1916.

Yilmaz, O, 1987, Seismic Data Processing, Society of Exploration Geophysicists
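The causal/acausal distinction discussed in the abstract above is easy to demonstrate with SciPy's Butterworth tools (a sketch; the filter order, band, and sampling rate are arbitrary illustrative choices):

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

# 4th-order Butterworth band-pass, 1-10 Hz at an assumed 100 Hz sampling rate
fs = 100.0
b, a = butter(4, [1.0, 10.0], btype="bandpass", fs=fs)

t = np.arange(0, 4, 1 / fs)
x = np.zeros_like(t)
x[200] = 1.0                      # impulse arrival at t = 2 s

y_causal = lfilter(b, a, x)       # one pass: causal, distorts the phase spectrum
y_acausal = filtfilt(b, a, x)     # forward-backward: zero phase, acausal

# The causal output is exactly zero before the impulse; the acausal one is not.
print(np.abs(y_causal[:200]).max(), np.abs(y_acausal[:200]).max())
```

The precursory energy in the acausal output is the price paid for the zero-phase response, which is why the choice matters for onset-time measurements.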

Gunter Leguy

Equilibrium elastic response of a loaded, buoyantly supported crust

The Earth's crust acts like a rigid plate yet is capable of deformation. It is buoyantly supported by an underlying fluid layer referred to as the mantle. The deformation of the crust is due to load distributions such as an ice cap. During the last ice age, 20,000 years ago, the northern continents and Antarctica were loaded by ice sheets. This enormous extra weight compressed the Earth's crust in those areas, forcing the mantle to spread toward the uncompressed areas and creating bulges. Today, the melting of the ice sheets due to global warming is causing rebound in the Northern Hemisphere and Antarctica. In this project we will discuss the equilibrium elastic response of a loaded, buoyantly supported crust. We will show how time series techniques apply to the elevation of the Earth's crust. We will incorporate the mantle viscosity parameter and investigate the time-dependent behavior of the crustal deformation that ensues.

Aaron Curtis

Response of cave air pressure to periodic barometric forcing

Cave microclimate monitoring in several caves has demonstrated an empirical relationship in which external barometric pressure appears to be the primary driver of changes in airflow, pressure, and temperature inside the caves. This phenomenon of "barometric breathing" also drives cave pCO2 and therefore has implications for speleothem paleoclimate records. Barometric breathing theory has been used in the past to estimate cave volumes and to predict the existence of connections between cave systems. Modeling a simple cave as a fixed volume of compressible gas connected to the surface by a constricted tube entrance, with head loss given by a turbulent-flow application of the Darcy-Weisbach equation, allows derivation of a transfer function relating barometric forcing to cave air pressure (and the resulting cave-entrance air velocity). A reasonable friction factor will be chosen using the Moody diagram. The system response at realistic daily, seasonal, and synoptic-scale forcing periods and amplitudes will be compared with existing in-cave barometric records. Conclusions will be reached regarding implications for speleothem lamination, d18O records, volume and connectivity calculations, and cave fauna.


2008 Student Abstracts

Julien Chaput (11/12 a)

Temporal variability of body wave Green’s functions obtained through cross-correlation of eruption coda on Mount Erebus volcano

The recovery of a medium's reflection profile, or Green's function, is one of the most fundamental problems in seismology, and as such there exist various methods for attacking it. One such method, still in its relative infancy, consists of cross-correlating seismic noise, either ambient or coda related, to directly recover the Green's function between two spatial points. Even more robust is the autocorrelation of waveforms at a single spatial point, which yields the vertical reflection profile beneath that point, given adequate source locations. As this method requires a diffuse seismic wavefield, heavily scattering media are likely to yield better results, since the short mean free path means that high frequencies are not attenuated before the wavefield reaches modal equipartition. In this paper I will demonstrate the usefulness of the aptly named seismic interferometry in recovering time-varying body wave Green's functions on Mount Erebus volcano, Antarctica.

Omar Marcillo (11/19 a)

System identification of canopy response to an infrasonic wave

An infrasonic wave passing through canopy will be studied to determine the frequency response of the canopy. The multiple and variable dimensions of canopy elements (e.g., leaves and branches) should act as a mechanical filter. This study aims to characterize this effect from pressure measurements. Using infrasonic sensors, pressure will be measured inside and just outside the canopy to test this concept. EMRTC blasts will be used as the shock wave source. A comparison of the frequency content of the input and output signals will help characterize the system. Depending on the quality of the data, deconvolution will be performed to calculate the transfer function of the system. This paper will also give a brief introduction to system identification and modeling.

Alisa Shtromberg (11/21 a)

PID Tuning on MROI’s Fringe Tracker Beam Combiner Modulators

The Magdalena Ridge Observatory Interferometer (MROI) will be a 10-element optical interferometer. The Fringe Tracker (FT) will be one of its beam combiner instruments that will be used to track combined beams as fringes are formed and feed back to MROI’s delay line to adjust its position. A set of mirrors that are part of the FT beam combiner will need to be modulated in order to acquire, measure, and track fringes. These modulators will use a piezo controller. Lab testing has been ongoing for the past few months in order to calibrate and test for specifications. This is necessary to meet the FT’s stringent requirements for accuracy and repeatability. A critically damped step response is desired from the modulator impulse signal. Proportional–integral–derivative (PID) tuning is necessary to correct the error between the measured and desired values by calculating and outputting a correction that can adjust the process as necessary. A transfer function of the PID control system is used to optimize the coefficient parameters for the desired response. The frequency domain representation is important for determining the system stability and coefficient tuning. Several existing techniques for PID tuning are explored and results are presented.

Jessica Elias (11/21 b)

Astronomical interferometry: Application and technique of aperture synthesis

Interferometry is an astronomical technique by which the signals from several instruments are combined to simulate a larger, aperture-masked telescope. The signal from such an array is deduced by studying the interference patterns from the superposition of the combined waves. The signal is best expressed as a function that can be transformed into the Fourier domain, which allows the signal phases to be deduced. These transformed signals can then be correlated, and by scanning through the signal an entire picture consisting of these transformed points can be assembled. The inverse Fourier transform of this signal then restores the image, completing the process of aperture synthesis. I will briefly go over the optical physics of the process, as well as demonstrate and explain in depth this type of data analysis on a real image.

Kevin Lenth (11/24 a)

The discrete wavelet transform and applications

In many fields that traditionally have made heavy use of discrete Fourier theory, the discrete wavelet transform (DWT) is rapidly replacing or augmenting the DFT as an analysis tool. This is largely because the DFT assumes a signal is periodic, and hence cannot give any information about how it changes over time; in contrast, the DWT provides some idea of the signal's time evolution. We present a broad view of the DWT and some examples of its applications.
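As a minimal illustration of the DWT idea, here is a single level of the Haar transform written from scratch in Python/NumPy (the simplest wavelet; real applications use deeper decompositions and smoother wavelets):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: orthonormally scaled pairwise averages
    (approximation) and differences (detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Invert one Haar level exactly."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
print(a)  # smoothed, half-length version of x
print(d)  # local changes: large where x jumps, zero where it is flat
```

Unlike a DFT coefficient, each detail coefficient is tied to a time location, which is exactly the time-evolution information the abstract describes.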

D. Bradly Christensen (11/24 b)

Tau-p transforms and the menacing multiples

This presentation on Tau-p and the menacing multiples will focus on my research on the island of Montserrat. Current processing of the seismic data collected in December 2007 has revealed multiples within the gathers. Because volcanic rocks are difficult to work with, advanced processing will be necessary to pull out the proper reflectors. Deconvolution and f-k filtering have not been successful in removing the multiples from the data, so a more advanced transform may be needed. The presentation will focus on the practical applications of the Tau-p transform and how it is used to remove multiples. Preparing this presentation will help me understand how the Tau-p transform works and why it is needed when multiples appear and are not easily removed using conventional deconvolution and f-k filtering. Examples will be presented from the SEA-CALIPSO experiment (December 2007) showing the effects of the Tau-p transform on multiples.

Maya El Hariri
(11/26 a)

Empirical Green’s function method applied to the 2006 Java tsunami earthquake

I will use the Empirical Green's Function (EGF) method to calculate the source parameters of the 2006 Java tsunami earthquake, using a smaller-magnitude aftershock as the EGF. An earthquake signal in the time domain is the convolution of the earthquake source function with the path traveled, the surface site effects, and the instrument response. By deconvolving the EGF from the mainshock, the site effects, propagation effects, and instrument response are removed. This yields the relative source time function (RSTF) of the mainshock, which reflects the source characteristics. Deconvolution will be performed by spectral division in the frequency domain: the seismograms are transformed from the time domain to the frequency domain (by fast Fourier transform) and divided to obtain the spectral ratios of the source. To obtain the RSTF, the result is transformed back to the time domain by the inverse Fourier transform. From the RSTF, several source parameters will be calculated.
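Spectral division of the kind described above is unstable wherever the EGF spectrum is small; a common stabilization (shown here purely as an illustration, not necessarily this study's choice) is a water level. A Python/NumPy sketch with synthetic data:

```python
import numpy as np

def spectral_division(mainshock, egf, water_level=0.01):
    """Deconvolve an EGF from a mainshock record by spectral division,
    clamping EGF spectral amplitudes below a fraction of their maximum
    (the "water level") before dividing."""
    n = len(mainshock)
    M = np.fft.rfft(mainshock)
    G = np.fft.rfft(egf)
    floor = water_level * np.abs(G).max()
    G_safe = np.where(np.abs(G) < floor, floor * np.exp(1j * np.angle(G)), G)
    return np.fft.irfft(M / G_safe, n=n)

# Synthetic check: mainshock = EGF circularly convolved with a boxcar RSTF
rng = np.random.default_rng(2)
egf = rng.standard_normal(256)
stf = np.zeros(256)
stf[:20] = 1.0 / 20                     # 20-sample boxcar source time function
main = np.fft.irfft(np.fft.rfft(egf) * np.fft.rfft(stf), n=256)

rstf = spectral_division(main, egf, water_level=1e-6)   # recovers the boxcar
```

With noise-free synthetics the recovery is essentially exact; on real data the water level trades resolution of the RSTF against noise amplification.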

Jon Brown (11/26 b)

Specific heat spectroscopy of a simple chain model simulation as a means to investigate the glass transition

Amorphous solids such as glasses and polymers have complicated viscoelastic properties. They have liquid-like and solid-like properties but never go through a thermodynamic phase transition into a solid crystalline state; as temperature decreases, their viscosity simply diverges to infinity. The temperature where the material goes from a mostly-solid glassy state to a deformable or soft state is called the glass transition. To investigate this behavior, molecular dynamics simulations were run on a simple chain model of an amorphous polymer while forcing the temperature to vary sinusoidally, for a range of frequencies and mean temperatures. Assuming linear response, storage and loss moduli (measures of storage and dissipation of energy at a given frequency) were computed. The data were fit with standard models, and relaxation properties and the glass transition temperature were computed.

Guohui Wu (12/1 a)

Image processing using the FFT

In the real world, a two-dimensional image can be considered a function of two real variables, for example f(x,y), with f the amplitude (e.g., brightness) of the image at the real coordinate position (x,y). Among the many tools for processing digital images, I will study image processing using Fast Fourier Transforms (FFTs). First, I will discuss the two-dimensional FFT, followed by an introduction to two kinds of filters: lowpass and highpass. Then comparisons will be made between lowpass and highpass filters as applied to image processing. Finally, I will write my own MATLAB code and present an example to illustrate the application of the FFT to image processing.
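The lowpass/highpass comparison described above can be sketched in Python/NumPy (the abstract's own code is MATLAB; the ideal radial cutoff and the synthetic image are arbitrary illustrative choices):

```python
import numpy as np

def fft_lowpass_2d(img, cutoff):
    """Ideal 2-D low-pass: keep Fourier components whose radial frequency
    (cycles per pixel) is at most `cutoff`, zero the rest."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    mask = np.sqrt(fx ** 2 + fy ** 2) <= cutoff
    return np.real(np.fft.ifft2(F * mask))

# Low-frequency pattern plus pixel-scale checkerboard "noise"
y, x = np.mgrid[0:64, 0:64]
smooth = np.cos(2 * np.pi * 3 * x / 64.0)   # 3 cycles across the image
noise = 0.5 * (-1.0) ** (x + y)             # Nyquist-frequency checkerboard
img = smooth + noise

low = fft_lowpass_2d(img, cutoff=0.2)       # recovers the smooth pattern
high = img - low                            # complementary high-pass
```

On real images an ideal (brick-wall) cutoff produces ringing, which is why smoother masks such as Gaussian or Butterworth profiles are often preferred.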

Craig Nicholas (12/1 b)

Extracting sheet music from audio files

By an apparently arbitrary decision process, the sheet music corresponding to a musical piece may or may not be in the public domain, even though the information conveyed by the sheet music is, in principle, contained in any audio file of a performance of that work. Restricting our attention to piano music, we develop an algorithm that, within certain parameters, can determine what note(s) are being played at any given moment in a performance of a work for piano. In brief, our algorithm slices a wav file into small segments, performs a Fourier transform on each segment, and uses a modified Karhunen-Loève decomposition on the sequence of transformed slices to circumvent the difficulty associated with the overtones that appear in a straightforward FFT.
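The slice-and-transform step of the algorithm above (before the Karhunen-Loève stage) amounts to finding the dominant FFT peak in each segment and mapping it to a note; a Python/NumPy sketch with an invented sampling rate and a pure test tone:

```python
import numpy as np

def dominant_frequency(segment, fs):
    """Frequency (Hz) of the largest FFT peak in a Hann-windowed slice."""
    windowed = segment * np.hanning(len(segment))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(segment), 1.0 / fs)
    return freqs[np.argmax(spectrum)]

def nearest_note(freq):
    """Nearest equal-tempered MIDI note number (A4 = 440 Hz = 69)."""
    return int(round(69 + 12 * np.log2(freq / 440.0)))

fs = 8000.0
t = np.arange(4096) / fs
tone = np.sin(2 * np.pi * 440.0 * t)                # concert A
print(dominant_frequency(tone, fs))                  # ~440 Hz (bin-limited)
print(nearest_note(dominant_frequency(tone, fs)))    # 69, i.e. A4
```

For a real piano note the largest peak is not always the fundamental, which is the overtone difficulty the Karhunen-Loève step is meant to circumvent.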

Benjamin Dickinson (12/3 a)

Stolt F-K migration: the Fourier transform migration method for stacked 2D data

Reflection data can yield inaccurate positions of reflectors and subsurface targets whose precise locations are particularly important in the petroleum industry. The purpose of migration in exploration seismology is to correct distorted and dispersed signals returning from reflectors or diffracting points and to develop accurate target locations and depths. The Stolt F-K migration technique offers a fast and effective way to address corrections in complex subsurfaces and dipping-layer positions by utilizing the Fourier transform of the wave equation (frequency-domain migration). In my presentation I plan to explain the Fourier transform migration velocity method (Stolt F-K migration) and the limitations of the process (constant velocity model), and to present further work that can aid in the production of more realistic subsurface models.

Christine Ruhl (12/3 b)

Maximizing the signal-to-noise ratio: Matched filtering

In seismic studies where the source is faint or noisy, it is beneficial to use simultaneous multichannel correlations to increase the signal-to-noise ratio and bring out repeating signals. The technique of matched filtering involves exploiting the similarities between unknown, weak signals and known, stronger signals in the same area. Beneath Shikoku, Japan, non-volcanic tremor associated with the Nankai trough subducting slab causes low-frequency earthquakes which are ideal for applying matched filtering. This presentation will explore the techniques of matched filtering and use the Nankai trough as an example of an appropriate situation in which to use it.
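A minimal matched-filter sketch in Python/NumPy (the template waveform, noise level, and event amplitude are all invented; real low-frequency-earthquake detection would normalize the correlation and stack many channels):

```python
import numpy as np

rng = np.random.default_rng(3)

# "Known" waveform: a windowed 200-sample oscillation standing in for a template
template = np.sin(2 * np.pi * np.arange(200) / 20.0) * np.hanning(200)

record = rng.standard_normal(2000)                          # noisy continuous record
record[700:900] += 1.5 * template / np.abs(template).max()  # weak buried event

# Matched filter: correlate the record against the template and pick the peak
cc = np.correlate(record, template, mode="valid")
detection = np.argmax(np.abs(cc))
print(detection)   # near sample 700, the true event onset
```

The correlation concentrates the template's energy into a single lag, which is why a signal invisible in the raw trace stands out clearly in cc.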

Sonja Behnke (12/5 a)

An analysis of electric field changes during the 2008 eruption of Chaiten, Chile

In May of 2008, the Chilean volcano Chaiten erupted, producing a brilliant display of lightning in the volcanic plume and near the vent. In response, a small Lightning Mapping Array (LMA) and one slow antenna were deployed around Chaiten with hopes of capturing similar electrical activity. The slow antenna, which records the change in electric field, can provide insight into the polarity of a lightning flash. However, 50 Hz noise and its associated harmonics are present in the data, preventing accurate assessment of the field change. This periodic noise will be removed using an appropriate notch filter implemented in MATLAB, and the effects of this filtering on the amplitude and timing of the field changes will be discussed. The field changes in the slow antenna data will then be compared with VHF signals from discharges near the vent, as recorded by the LMA, to characterize the type of lightning occurring near the vent.
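A notch filter of the kind described (here in Python/SciPy rather than MATLAB, with an assumed 1 kHz sampling rate and synthetic signals) might look like this; filtering forward and backward keeps the timing of the field changes intact:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1000.0                                   # assumed digitizer rate
t = np.arange(0, 2, 1 / fs)
field = np.sin(2 * np.pi * 3 * t)             # slow field change of interest
hum = 0.8 * np.sin(2 * np.pi * 50 * t)        # mains interference

b, a = iirnotch(50.0, 30.0, fs=fs)            # narrow notch centered on 50 Hz
clean = filtfilt(b, a, field + hum)           # zero-phase: timing is preserved
```

Harmonics of the mains frequency would need additional notches at 100 Hz, 150 Hz, and so on; a narrow Q keeps the nearby signal band untouched at the cost of longer filter transients.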

Michael Jew (12/5 b)

Kalman filters: A method for estimating position

Particularly in robotics, accurate knowledge of the position, velocity, orientation, and other aspects of a robot's state relative to a known position and direction is very important. Without knowing where the robot is located and where it is going, any autonomous control system will be unable to function. GPS is a useful tool that can give position and velocity information. However, GPS is only as accurate as the 1-second updates from the satellites, and the infrequency of these updates leaves the robot's control system unable to respond quickly to changing conditions. To solve this problem, an inertial system that provides more frequent data is included. By applying a Kalman filter to the inertial system, the robot can predict its position and velocity often enough for a control algorithm to make corrections.
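A one-dimensional constant-velocity Kalman filter of the kind described can be sketched as follows (Python/NumPy; the process and measurement noise levels q and r are invented tuning values, and the 1 Hz fixes mimic the GPS updates in the abstract):

```python
import numpy as np

def kalman_step(x, P, z=None, dt=1.0, q=0.1, r=4.0):
    """One predict(+update) cycle. State x = [position, velocity];
    z is a GPS position fix, or None between fixes (prediction only)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])              # constant-velocity model
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                      [dt ** 2 / 2, dt]])              # process noise
    H = np.array([[1.0, 0.0]])                         # GPS measures position only
    x = F @ x                                          # predict
    P = F @ P @ F.T + Q
    if z is not None:                                  # update, if a fix arrived
        y = z - H @ x                                  # innovation
        S = H @ P @ H.T + r                            # innovation variance
        K = P @ H.T / S                                # Kalman gain
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(4)
x_est, P = np.array([0.0, 0.0]), np.eye(2) * 10.0
true_v = 2.0
for k in range(1, 51):                       # 50 s of 1 Hz GPS fixes
    z = true_v * k + rng.normal(0, 2.0)      # noisy position measurement
    x_est, P = kalman_step(x_est, P, z)

print(x_est)   # position ~100, velocity ~2
```

Calling `kalman_step` with `z=None` between fixes is the prediction-only mode that lets the control loop run faster than the measurements arrive, which is the point made in the abstract.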

Kyle Jones (12/8 a)

The effects of filtering on cross-correlation and infrasound event location

Local to regional arrival times from infrasonic sources are a good way to determine where volcanic events are occurring. Infrasound locations can be far more accurate and precise than seismic locations because the Green’s functions (impulse responses) for waves traveling through the atmosphere are typically much simpler than for waves traveling through the solid Earth and because the acoustic velocities are relatively slow. However, a big problem with some infrasound data arises from uncorrelated wind noise. This noise can make it very difficult to accurately locate event sources.
I will generate synthetic infrasound signals and add noise (white, red, etc.). I will use different filters to try to bring out the signal again. Once the signal is isolated, I will cross-correlate signals to obtain relative lag times that I can then use to estimate event locations and associated errors. The choice of filter will affect the location estimate, because the filter affects the correlation of the noisy data and thus the arrival-time and location estimates. I would like to use this as a sensitivity study to be included in my thesis, so that I understand the critical effects of the filtering I will be doing on my data and can make sure that my locations are as accurate and justifiable as possible.
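The lag-estimation step described above reduces to locating the peak of a cross-correlation; a Python/NumPy sketch with a synthetic arrival (the pulse shape, lag, and noise level are invented):

```python
import numpy as np

def relative_lag(a, b):
    """Lag (in samples) that best aligns b with a, from the peak of the
    full cross-correlation. Positive means b is delayed relative to a."""
    cc = np.correlate(b, a, mode="full")
    return np.argmax(cc) - (len(a) - 1)

rng = np.random.default_rng(5)
n, true_lag = 1024, 37
pulse = np.exp(-0.5 * ((np.arange(n) - 300) / 10.0) ** 2)   # synthetic arrival

rec_a = pulse + 0.2 * rng.standard_normal(n)                # station A
rec_b = np.roll(pulse, true_lag) + 0.2 * rng.standard_normal(n)  # station B, delayed

print(relative_lag(rec_a, rec_b))   # ~37 samples
```

Band-pass filtering both records before correlating changes the shape (and potentially the peak position) of cc, which is exactly the sensitivity the abstract proposes to quantify.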

Amanda North (12/8 b)

Name that sound

Television continuously bombards society with programs that take everyday police work and show how it has evolved by leaps and bounds scientifically. Reality shows give the "Average Joe" a slight insight into how the most complicated of cases can be solved by using simple lab techniques to catch a criminal. One technique used by many government organizations to catch criminals before they commit a crime is sound identification, pinpointing a suspect based upon audio; this is better known as voice recognition. In my project I will set up a database of sounds, or English phonemes, and create a training set with which to compare other speakers. I will write a MATLAB program that digitally processes each phoneme, takes the fast Fourier transform of the signal, and uses a power spectral density to plot and store the coefficients of the training set. The next step is to record several other people saying the training-set sounds and see if the program can predict which sound is being said by comparing the coefficients from the original training set to those of the speaker. This predictive code has potential in many areas and will incorporate several concepts presented in the Time Series course material.

Richard Sanderson (12/10 a)

Comparing absolute times of the onsets of different signal phases

Seismic signals generated at volcanoes often contain a range of frequencies which vary depending on the type of source process generating them. The seismic signals generated by explosive processes at Santiaguito volcano, Guatemala, contain both long-period (LP) frequency components (0.125-5 Hz) and very-long-period (VLP) components (0.05-0.125 Hz). In order to identify changes in explosive source processes, one can compare the onset times of the LP and VLP signals. However, the application of band-pass filters to isolate these frequency bands can introduce undesired artifacts into the signals which make the comparison of absolute onset times more challenging. Such artifacts may include, among others, large transients, precursory "ringing", phase delays, and group delays. Other complicating factors include a low signal-to-noise ratio. This work will focus on finding the optimal way to compare the absolute onsets of VLP and LP components of seismic signals at Santiaguito by investigating the effects of different IIR and FIR filtering techniques performable with MATLAB, while considering both stability and computational efficiency.

Pamela Moyer (12/10 b)

From coda envelopes to apparent stress: processing seismic data for source spectra analysis

Using coda waves to determine seismic source spectra is a complex but widely used process for making consistent and reliable magnitude estimates. Coda waves are manipulated to obtain frequency envelopes from which the energy dissipated by an event is calculated and averaged. Once corrections are made for site and path effects, as well as for the instrument transfer function, the corner frequencies and seismic moment of an event may be determined. Using data from stations deployed to Costa Rica during the Fall of 1999, the process of coda wave manipulation for source spectra analysis, and the steps leading from the seismic moment to an apparent stress calculation, are discussed.

Sandra Saldana (12/12 a)

What exactly is vibroseis correlation?

Vibroseis sources have been used by industry for many years. This non-impulsive seismic source uses a metal plate coupled to the ground to input a sweep of pre-programmed frequencies into the ground. The response is then detected by seismometers set out at regular intervals from the source. What is detected, however, is not directly usable data; it must be correlated in order to extract the frequency response of the Earth. This process is called vibroseis correlation and is key to using vibroseis data. The process of vibroseis correlation, and why it works, is dissected and presented in order to understand how digital signal processing can be used to suppress noise and improve the signal-to-noise ratio, producing clear seismic reflection cross-sections of the Earth.
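The essence of vibroseis correlation is that cross-correlating the recorded trace with the known pilot sweep collapses each long sweep in the record into a compact wavelet at the reflector's two-way travel time. A minimal Python sketch of this idea (the sweep parameters and single-reflector "earth response" here are invented for illustration):

```python
import math

def chirp(T, fs, f0, f1):
    # Linear pilot sweep from f0 to f1 Hz over T seconds
    N = int(T * fs)
    return [math.sin(2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * T)))
            for t in (n / fs for n in range(N))]

def xcorr(trace, s, max_lag):
    # Cross-correlation of the recorded trace with the pilot sweep
    return [sum(trace[n] * s[n - lag]
                for n in range(lag, min(len(trace), len(s) + lag)))
            for lag in range(max_lag)]

fs = 500.0
s = chirp(2.0, fs, 10.0, 60.0)   # 2 s pilot sweep, 10-60 Hz
lag_true = 150                   # single reflector at 0.3 s two-way time
trace = [0.0] * lag_true + s     # recorded trace: delayed copy of the sweep
c = xcorr(trace, s, 400)
print(max(range(len(c)), key=lambda k: c[k]))  # -> 150
```

The correlation peak sits at the reflector lag because the sweep's autocorrelation (the Klauder wavelet) is sharply peaked at zero lag; uncorrelated noise in the trace is suppressed by the same operation, which is the signal-to-noise benefit the abstract refers to.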

2005 Student Abstracts

Gaopeng Lu (5/2)

Noise reduction in balloon-borne sounding electric and magnetic field data

The first balloon-borne slow antenna capable of measuring all three vector components of field change
was launched from Langmuir Laboratory during the summer of 2004. High-frequency noise in the sounding
data significantly hinders accurate measurement of field change, which is essential for estimating the charge
transferred to the ground by individual strokes, as well as the source locations. After Fourier analysis of the noise
and of the recorded data associated with lightning strokes, a low-pass filter implemented in MATLAB is applied to reduce the influence of
the noise, and the intrinsic limitations of this procedure are analyzed. The positions of charge sources inferred from the optimized field
data are compared with LMA mapping observations.

Qian Xia (5/2)

Solution of deconvolution via Tikhonov regularization

By deconvolution we mean the solution of a linear first-kind integral
equation with a convolution-type kernel, i.e., a kernel that depends only
on the difference between the two independent variables. Because
deconvolution is sensitive to noise and subject to instabilities, its
treatment often requires the use of regularization methods. The aim of
this presentation is to show how to use the Tikhonov regularization method to
solve the deconvolution problem.
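One standard formulation of Tikhonov-regularized deconvolution works by spectral division: instead of the unstable X = Y/H, use X = conj(H) Y / (|H|^2 + alpha), which damps frequencies where the kernel's response is near zero. A minimal Python sketch (naive DFT for brevity, circular convolution, invented spike model and smoothing kernel):

```python
import cmath, math

def dft(x, sign=-1):
    # Naive DFT; sign=-1 forward, sign=+1 for the (unnormalized) inverse
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    return [v / len(X) for v in dft(X, sign=+1)]

def tikhonov_deconv(y, h, alpha):
    # Regularized spectral division: X = conj(H) Y / (|H|^2 + alpha)
    H, Y = dft(h), dft(y)
    X = [Hk.conjugate() * Yk / (abs(Hk)**2 + alpha) for Hk, Yk in zip(H, Y)]
    return [v.real for v in idft(X)]

N = 32
x_true = [0.0] * N; x_true[5] = 1.0               # spike model
h = [0.0] * N; h[0], h[1], h[2] = 0.5, 1.0, 0.5   # smoothing kernel
# data: circular convolution y = h * x
y = [sum(h[k] * x_true[(n - k) % N] for k in range(N)) for n in range(N)]
x_hat = tikhonov_deconv(y, h, alpha=1e-3)
print(max(range(N), key=lambda n: x_hat[n]))      # recovered spike position -> 5
```

This kernel has an exact spectral zero (at the Nyquist bin), so unregularized division would blow up there; the alpha term keeps the estimate finite at the cost of a small bias, which is the bias-variance trade-off that choosing the regularization parameter is about.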

Brent Henderson (5/4)

Simulation filtering, implementation and stability

There are times when it is necessary to compare seismic records from
different instruments. In order to do this comparison it is necessary to
simulate the other instrument. I will explore the methods used to make one
instrument look like another, as well as the problems associated with such
a simulation.

Jim Sheckard (5/4)

Cheating Nyquist: How pulsar astronomers can circumvent the sampling theorem

Normally, sampling of an analog signal must be done at sufficient speed to
avoid aliasing. Due to the effects of the interstellar medium (ISM) on
radio pulsar signals, intentionally aliased signals can be used to obtain
wider bandwidths than the sampling speed would otherwise allow. I will give
background on pulsars, particularly on the ISM propagation effects, and
discuss a method for removing the aliasing effects from the recorded data.
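The aliasing at the heart of this trick is easy to demonstrate: a tone above the Nyquist frequency produces exactly the same samples as its alias folded back into the baseband. A brief Python illustration (the frequencies here are arbitrary; cos(2*pi*0.7*k) = cos(2*pi*k - 2*pi*0.7*k) = cos(2*pi*0.3*k)):

```python
import math

fs = 100.0                      # sampling rate, Hz (Nyquist = 50 Hz)
f_true, f_alias = 70.0, 30.0    # a 70 Hz tone aliases to |70 - 100| = 30 Hz
samples = range(32)
s_true = [math.cos(2 * math.pi * f_true * k / fs) for k in samples]
s_alias = [math.cos(2 * math.pi * f_alias * k / fs) for k in samples]
print(max(abs(a - b) for a, b in zip(s_true, s_alias)) < 1e-9)  # -> True
```

Ordinarily this ambiguity destroys information; the pulsar technique exploits the fact that the true band location is known a priori, so the folded spectrum can be unambiguously mapped back to the wider band.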

Jana Stankova (5/6)

Techniques for the reconstruction of non-uniformly sampled data

Due to economic and other constraints on seismic data acquisition, the
resulting data are incomplete and aliased. There are several algorithms
for optimizing these data sets, such as Gulunay's algorithm, the
non-uniform DFT, the non-uniform FFT, and the FRSI algorithm. However, some of
these algorithms have significant drawbacks. For example, Gulunay's method
does not work on non-uniformly sampled data, and the Fourier transforms
have problems with aliasing. It seems optimal to combine some of these
algorithms to get results without gaps and aliasing.
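The non-uniform DFT mentioned above evaluates X(f) = sum_n x(t_n) exp(-2*pi*i*f*t_n) directly at the irregular sample times, with no need for a uniform grid. A minimal Python sketch (the sample times, tone frequency, and frequency grid are invented for illustration) recovers a tone's frequency from irregularly spaced samples:

```python
import cmath, math, random

def nudft_mag(t, x, freqs):
    # Direct non-uniform DFT magnitude: |X(f)| = |sum_n x_n exp(-2*pi*i*f*t_n)|
    return [abs(sum(xn * cmath.exp(-2j * math.pi * f * tn)
                    for tn, xn in zip(t, x))) for f in freqs]

random.seed(0)
t = sorted(random.uniform(0.0, 10.0) for _ in range(200))  # irregular times
f_true = 3.0
x = [math.cos(2 * math.pi * f_true * tn) for tn in t]
freqs = [k / 10 for k in range(1, 100)]                    # 0.1 .. 9.9 Hz grid
spec = nudft_mag(t, x, freqs)
print(freqs[max(range(len(spec)), key=lambda i: spec[i])])  # -> 3.0
```

The direct sum costs O(NM) for N samples and M frequencies; the non-uniform FFT cited in the abstract approximates the same transform in roughly O(M log M) time, which is why it matters for large seismic data sets.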

Yingchun Zhan "Spring" (5/6)

Seismic array processing

Seismic arrays can be used to suppress noise. In this project, I will introduce the key feature of recordings from such stations:
they allow weak signals to be enhanced above the noise. I will then discuss how this aids in the interpretation of seismograms and in studies of
source mechanisms, particularly for test-ban seismology in distinguishing between earthquakes and explosions.
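The noise suppression the abstract describes comes from stacking: averaging N traces that share a coherent signal but carry independent noise reduces the noise amplitude by about sqrt(N). A small Python demonstration (the array size, signal, and noise level are invented; the traces are assumed already time-aligned, i.e. a zero-delay delay-and-sum beam):

```python
import math, random

random.seed(1)
N_tr, N_samp = 25, 2000
signal = [math.sin(2 * math.pi * n / 50) for n in range(N_samp)]
# Each trace: common signal plus independent unit-variance Gaussian noise
traces = [[s + random.gauss(0.0, 1.0) for s in signal] for _ in range(N_tr)]

# Delay-and-sum stack (zero delays: plane wave already aligned across array)
stack = [sum(tr[n] for tr in traces) / N_tr for n in range(N_samp)]

rms = lambda xs: math.sqrt(sum(v * v for v in xs) / len(xs))
noise_single = rms([traces[0][n] - signal[n] for n in range(N_samp)])
noise_stack = rms([stack[n] - signal[n] for n in range(N_samp)])
print(round(noise_single / noise_stack))  # close to sqrt(25) = 5
```

In practice the traces must first be shifted by the delays appropriate to the arrival's slowness vector; steering those delays over a grid of slownesses is what lets an array both enhance weak arrivals and estimate where they come from.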

2004 Student Abstracts