SaaS Equivalent Source Gridding of Potential Fields


From here, you can submit data files and set options for finding an equivalent source distribution from an input data set, and then use that distribution to compute the field at arbitrary points. This can be used, for example, to flatten a set of flight lines to a common elevation, or to compute the predicted field on a regular grid from irregularly-spaced data.

Results will be emailed back to you at the email address associated with your username. The result email will include a zip archive of the output files and a log of the processing.

Processing starts on a 5-minute timer, so a computation may wait up to 5 minutes before it begins. For data sets with more than a few thousand points, this delay is negligible compared to the run time.

Input data and grid points need not follow any particular scheme or spacing, but every point must have a fully specified location. The default options assume Cartesian coordinates with a common length unit that is consistent with the field value unit. Fix unit mismatches before uploading files here!

If the coordinate type is set to geographic, the computation is carried out in spherical coordinates and thus can span significant portions of a sphere (or planet). The spherical coordinate setting assumes the points are on the Earth, using the WGS84 ellipsoid to obtain radii for the computations. Results are always returned in the same format as they are submitted (Cartesian in = Cartesian out, geographic in = geographic out).

A note on speed: Currently, the algorithm requires around 0.1·N·√N iterations to find the equivalent source distribution for N input sources, depending on the error tolerance. Each iteration scales as O(N²), so this can get slow for large (>100 000 pts) data sets. A test with 160 000 geographic input data points, an error tolerance of 1 ppm, and the field recomputed on the same points took ~15 hours to complete: ~1 hour for the grid computation, and ~14 hours for the ~2 million iterations needed to find the equivalent source distribution.

EQUIVALENT SOURCE DISTRIBUTION & GRID FILES

INPUT DATA FILES

Input field data should be an ASCII text file with the following whitespace-delimited fields:

X  Y  Z  G

Additional fields on a line are ignored. Lines starting with "#" or completely blank are ignored. Leading whitespace is ignored.
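The format above is simple to parse. A minimal sketch (the function name and in-memory list are illustrative, not part of the service):

```python
def parse_field_file(lines):
    """Return a list of (x, y, z, g) tuples from input-file lines.

    Blank lines and lines starting with "#" are skipped; fields past
    the first four on a line are ignored, as described above.
    """
    points = []
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blank lines and comments
        fields = stripped.split()
        x, y, z, g = (float(v) for v in fields[:4])  # extra fields ignored
        points.append((x, y, z, g))
    return points
```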

If a suitable distribution of sources has already been found, such as from a previous run with the same input data, the source distribution can be uploaded in place of the input data, and the grid calculations started immediately using the supplied source distribution. Uncheck the "Find Source Distribution" box, and supply a file with the equivalent source distribution (same format as input data) in the Data File. This option can save LOTS of time for large (>100 000 pts) data sets!

If the coordinate type is set to geographic, the X, Y, Z coordinates must be longitude, latitude, and height (in that order). Longitude and latitude are in decimal degrees, positive East and North. Heights must be in meters. Coordinates and heights are relative to the WGS84 ellipsoid.
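For reference, the standard geodetic-to-Cartesian conversion on the WGS84 ellipsoid looks like the following sketch. The service's internal conversion may differ in detail, but the constants are the published WGS84 values:

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0            # semi-major axis, meters
F = 1.0 / 298.257223563  # flattening
E2 = F * (2.0 - F)       # first eccentricity squared

def geodetic_to_ecef(lon_deg, lat_deg, h_m):
    """Convert WGS84 longitude/latitude (decimal degrees, +E/+N) and
    ellipsoidal height (meters) to Earth-centered Cartesian (meters)."""
    lon = math.radians(lon_deg)
    lat = math.radians(lat_deg)
    # prime-vertical radius of curvature at this latitude
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + h_m) * math.cos(lat) * math.cos(lon)
    y = (n + h_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h_m) * math.sin(lat)
    return x, y, z
```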

Field values are in whatever units are supplied - the computations do not care what the units are! Results will be returned in the same units! Note that geographic coordinates will be converted to meters for computing the field!

GRID POINT FILES

Grid point files should be a list of grid points, in X,Y,Z format. Additional fields on a line are ignored. Lines starting with "#" or completely blank are ignored. Leading whitespace is ignored.

Grid coordinates must use the same coordinate type as the input data (Cartesian or geographic) and the same distance units (typically meters)!
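A grid point file for a regular Cartesian grid at constant elevation can be generated with a few lines of code. This sketch is illustrative; the function name and spacing scheme are not prescribed by the service:

```python
def write_grid(path, x0, x1, y0, y1, spacing, z):
    """Write a regular grid of points to `path` in X Y Z format,
    one point per line, at a single constant elevation z."""
    nx = int(round((x1 - x0) / spacing)) + 1
    ny = int(round((y1 - y0) / spacing)) + 1
    with open(path, "w") as f:
        f.write("# regular grid at constant elevation\n")
        for j in range(ny):
            for i in range(nx):
                f.write(f"{x0 + i * spacing} {y0 + j * spacing} {z}\n")
    return nx * ny  # number of grid points written
```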

The output grid will have X, Y, Z, G values for each input line. Additional fields in the uploaded files are lost.

SUBMIT AN EQUIVALENT SOURCE GRIDDING JOB

Username:
Password:

Options & Processing Controls

Find Equivalent Source distribution?
Compute Predicted Field at Grid Points?
Error tolerance for iterating sources:
Depth scale for source depths:
Put all sources at deepest depth from nearest-neighbor?
Field decay power:
Coordinate Type:
Save check point files for restarts?
Is this a restart run?

Files to Upload

Data/Source File:
Grid Point File:



The Algorithm

This service is based on the work presented by Cordell (1992). In that paper, Cordell lays out an iterative procedure to find a distribution of imaginary sources that reproduces a given set of field measurements. This distribution can then be used to compute a predicted field at any set of positions, while preserving all the information known from the input measurements and the type of field.

This code assumes a scalar field! There is no provision for computing vector fields!

The algorithm takes the initial data points and constructs a distribution of sources at the same X, Y (or longitude, latitude) locations but at deeper Z (or elevation) positions. The strengths of these sources are iteratively perturbed, one at a time, until the computed field from the sources matches the initial field values at the initial positions (X, Y, Z or longitude, latitude, elevation).

Sources are placed below each input data point according to the distance to that point's nearest neighbor. A scaling factor (a in the original paper; "depth scale for source depths" on this web page) is multiplied with the nearest-neighbor distance to find the depth offset for each point. With geographic coordinates, elevations are converted to radii using the WGS84 ellipsoid parameters and the latitude. The scale factor keeps each source influencing nearby stations while limiting the total field update any single source makes.
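A heavily simplified sketch of this procedure for small Cartesian data sets follows; this is a brute-force toy, not the service's actual code. Sources are placed at a depth of the depth scale times the nearest-neighbor distance, and the strength of the source under the worst-fit datum is adjusted until the worst residual drops below the tolerance:

```python
import math

def nearest_neighbor_dist(pts):
    """Horizontal distance from each (x, y) point to its nearest neighbor."""
    d = []
    for i, (xi, yi) in enumerate(pts):
        best = min(math.hypot(xi - xj, yi - yj)
                   for j, (xj, yj) in enumerate(pts) if j != i)
        d.append(best)
    return d

def find_sources(data, depth_scale=1.0, power=2, tol=1e-6, max_iter=100000):
    """Toy Cordell-style iteration: data is a list of (x, y, z, g).
    Returns source positions and strengths reproducing g at the data."""
    nn = nearest_neighbor_dist([(x, y) for x, y, z, g in data])
    src = [(x, y, z - depth_scale * d) for (x, y, z, g), d in zip(data, nn)]
    strength = [0.0] * len(data)

    def field_at(k):
        xk, yk, zk, _ = data[k]
        total = 0.0
        for (xs, ys, zs), s in zip(src, strength):
            r2 = (xk - xs) ** 2 + (yk - ys) ** 2 + (zk - zs) ** 2
            total += s / r2 ** (power / 2)
        return total

    for _ in range(max_iter):
        # find the datum with the largest residual
        worst, err = 0, 0.0
        for k, (_x, _y, _z, g) in enumerate(data):
            e = g - field_at(k)
            if abs(e) > abs(err):
                worst, err = k, e
        if abs(err) <= tol:
            break
        # adjust the source directly beneath the worst-fit datum so the
        # field there moves to the observed value (own-source distance
        # is exactly the depth offset)
        r = depth_scale * nn[worst]
        strength[worst] += err * r ** power
    return src, strength
```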

The code can use an arbitrary field decay model; typical potential fields (like gravity data, or electrostatic charge) use a power of 2, so the field decays like 1/r². Change the field decay power to replicate potentials (1/r decay) or magnetic intensity (1/r³).
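In code, the decay power is just an exponent on the source-to-observation distance; a trivial illustration:

```python
def source_field(s, r, p=2):
    """Field of a point source of strength s at distance r, with
    decay power p: p=2 for a 1/r^2 field (gravity), p=1 for a
    potential, p=3 for dipole-like magnetic intensity."""
    return s / r ** p
```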

For some projects, such as large data sets covering large portions of the Earth (a gravity map of the state of Alaska, for example), the data are spread across a large elevation range and may include overlapping data from different measurement campaigns, such as flight lines from adjacent blocks. With the equivalent sources near each input point, the resulting field perturbations can be too strong to allow convergence where overlapping data disagree. The result is a computation that runs away, with errors growing as the iterations proceed. These runaway runs are caught by a watchdog test that terminates a run if the current maximum error grows beyond a multiplier of the smallest maximum error yet found. This allows the current maximum error to fluctuate up and down while iterating to a stable solution, but terminates the run in a reasonable time if the errors start to grow without bound.
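The watchdog logic can be sketched in a few lines. The multiplier value here is illustrative; the service's actual threshold is not documented:

```python
def watchdog(max_errors, multiplier=10.0):
    """Given the per-iteration maximum errors, return the iteration
    index at which a runaway run would be terminated, or None if the
    history never trips the watchdog."""
    best = float("inf")
    for i, err in enumerate(max_errors):
        best = min(best, err)  # smallest max error seen so far
        if err > multiplier * best:
            return i  # runaway detected: stop this run
    return None
```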

Also, for geographic setups, the equivalent source distribution has one additional source beyond those under the input field measurements: a single source at r = 0 that produces the average signal observed in the field measurements. This shifts the observed field to deviations from a mean field, which is accurately reproduced by the Center of Earth (CoE) source at the end of the distribution. Due to the deep (6.4 Mm) position of the CoE source, its strength will typically be ~10¹⁴ times stronger than the near-station sources.

Once a distribution of sources is found that replicates the input data within the given tolerance, the equivalent source distribution can be used to compute a predicted field at any set of locations while maintaining the properties of a potential field, honoring the field decay properties, and minimizing the growth of errors from a downward continuation. By feeding in uniformly sampled locations for the output grid, a sparse point data set can be used to generate a predicted field on a regular grid. Changing the elevations of the output grid points will upward- or downward-continue the field to the new elevations and locations.
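Evaluating the predicted field from a found source distribution is a straightforward sum over sources. A minimal Cartesian sketch with a 1/r^p decay (function and variable names are illustrative):

```python
import math

def predict(grid_pts, sources, strengths, p=2):
    """Predicted field value at each (x, y, z) grid point, summing the
    contribution of every equivalent source with decay power p."""
    out = []
    for gx, gy, gz in grid_pts:
        total = 0.0
        for (sx, sy, sz), s in zip(sources, strengths):
            r = math.sqrt((gx - sx) ** 2 + (gy - sy) ** 2 + (gz - sz) ** 2)
            total += s / r ** p
        out.append(total)
    return out
```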

Moreover, the equivalent source distribution need only be found once for a given input data set and parameter choices. The source distribution can then be used to compute any number of alternative positions or geometries without recomputing the source distribution, which is the expensive part.

Cordell, L., 1992. A scattered equivalent-source method for interpolation and gridding of potential-field data in three dimensions. Geophysics, 57(4), 629-636.