Frequently Asked Questions and Some Short (and Some Long) Answers

OBJECTS vs. LAYERS: THE DIAGNOSTIC 3-D SEISMIC PROCESS (D3DSP)

Frequently Asked Questions

1. How did the Diagnostic 3D Seismic Process concept arise?

2. How are D3D seismic volumes different from more conventional 3D volumes?

3. How is the D3DSP different from working with conventionally "inverted" seismic volumes?

4. Is a special type of acquisition technique required for the D3DSP, or can any 3-D seismic volume be re-processed to create a D3D-impedance T'ube?

5. How is D3D processing different from conventional 3D processing?

6. Does AVO (Amplitude versus Offset) play a role in the D3DSP?

7. Of what use are synthetic seismograms and wavelet analyses to the D3DSP?

8. Are the high D3D frequencies really "signal", or merely a processing artifact?

9. What are the (thickness) resolution and detection limits of D3D-impedance data?

10. What is most difficult about locating valuable buried objects using the D3DSP?

11. Is a volume visualization workstation (e.g., VoxelGeo, GeoViz, Magic Earth, etc.) required for D3DSP interpretation?

12. What roles can (or should) a petroleum-industrial geophysicist, geologist, and engineer, play in the profitable application of the D3DSP?


Answer to #1 (How did the Diagnostic 3D Seismic Process concept arise?)

     Starting in 1966, Louis Willhoit spent five and a half years pursuing a B.Sc. at Florida State University (Tallahassee, 1970), and then a Ph.D. in physics at the Universities of Iowa (Iowa City) and Texas (Austin); no advanced degree was conferred.  He intended to spend his life looking into space, delving into such fundamental theoretical mysteries as the nature of Time and Gravitation, but ended up looking into the ground instead.

     At the end of 1971, he was recruited by Shell and worked, for exactly one year, as a geophysicist in Midland, TX, processing analog and digitally recorded seismic data from Michigan Basin dynamite and Vibroseis field crews. Shell was searching for (Silurian) Niagaran pinnacle reefs, buried beneath thick glacial till deposits.  In this first year out of college, he underwent intense on-the-job training, and was soon being indoctrinated into the layered-earth geological assumptions of Shell's corner of the petroleum industry.  He processed many a (2D, Michigan Basin) layer-cake seismic reflection profile, on which experienced Shell interpreters mapped reefs wherever the uniform, strong, "railroad tracks" Niagaran reflections were broken by "noise", with differential compaction around a reef causing more layered reflections to drape across the "no data" zone.  Even in a stratigraphic play such as this, anticlines were attractive.  Shell was fantastically successful in this play, but as a technically trained physicist, he thought that there had to be a better way to look into the ground.  Working under an innovative (oceanographer-turned-Shell-geophysicist) Party Chief, Willhoit's primary, entry-level task was to mix traces and smooth 2D seismic sections to make those railroad-track reflections. In 1972, Shell and (he was soon to find out) the rest of the industry were unable to image the most valuable objects on a grid of 2D seismic (hardcopy variable-area-wiggle-trace) cross-sections:  small 3D pinnacle reef reservoirs.


     Then, beginning with his exposure to Shell's digital Information Theory and Physics of Seismology in a five-month Training Course (1973), he started applying these solid technical concepts to grids of 2D data, onshore and near-shore (Gabon, Africa), usually seeking the more common structural and stratigraphic traps on which Shell's course had focused … not hydrocarbons, directly, in 1973-74. Not in young Willhoit's case, yet, anyhow. In 1974, he was lured away from Shell by a consulting group that Amoco's Denver-based Alaska Division had contracted to help them play "catch-up" in the areas of "seismic amplitudes", "bright spots", and general "rock physics". His education in oil-industrial "interpretation" really began here (mid-1974), even though his consulting group was hired to provide "experienced" interpreters and processors for Amoco's seismic (and stacking velocity) projects. And Willhoit had yet to be responsible for interpreting or recommending a drill site.  As an Amoco consultant, with their emphasis on mapping true depth structure beneath a highly variable layer of relatively fast permafrost (and on trying to re-invent methods like Shell's and Mobil's "bright spot" and "HCI" technology for Amoco), he got an intense lesson in the business side of the petroleum industry. He tried to apply the technical knowledge that he had acquired in his physics studies and in Shell's formal and on-the-job training.  In 1975, he attended one semester of physics graduate studies at the University of Colorado (Boulder) and passed his written and oral comprehensive exams for the Ph.D., but missed registration for the next semester, due to his hectic work-study-fatherhood schedule, and chose to abandon his quest for an advanced degree in physics.


     In 1977, Mr. Willhoit accepted an offer from an ex-Shell geophysicist at a medium-sized independent (Forest Oil, Denver) to help them find natural gas in under-explored Rocky Mountain basins.  After working a year, and (with excellent geological teamwork) discovering a subtly trapped, over-pressured Frontier gas sand reservoir in the Green River Basin, he was promoted to Rocky Mountain Division Geophysicist.  This federally unitized field, named the Megas Unit, was a pure stratigraphic trap, mappable as a high-amplitude reflection caused by a low-density anomaly near the top of the deep, pressured section, out in the middle of the gently structured Green River Basin. He also worked with a very astute group of geoscientists to find more gas in the stratigraphically complex Dakota sandstone, along the southern flank of the Moxa Arch (at the Henry Unit). But his technical identification of subtly trapped hydrocarbons was mostly after-the-fact, because Willhoit's most important job was to sell prospects to management, partners, and investors, and to get wells drilled. Here was important continuing education about the differences between the operating philosophies of the major oil companies (e.g., Shell, Amoco, Mobil) and the independent sector (Forest, Milestone Petroleum, BWAB, LL&E, EPL).  The independent operators' philosophy was (and probably still is?), "The more wells we drill (with industry-shared risk or investors' drilling-fund dollars), the more oil and gas we will find. Don't over-science prospects.  Just build an inventory of salable prospects as quickly and inexpensively as possible, and work to get them drilled. We'd rather be lucky than smart."


     And that approach worked quite well, until President Reagan's tax-law changes (investors got pinched) and an OPEC-induced oil-price collapse, which culminated in 1986 (high prices had encouraged the use of alternate energy sources ... which concerned OPEC).  It was hard for independents to see that there was any money to be made, with oil selling for $9.00 per barrel and natural gas for $1.50 per thousand cubic feet. These fundamental changes in the economics of the business seemed to Willhoit to cry out for more, not less, risk-reducing geo-technology, and from 1981 through 1984, the last two pieces of the proto-D3D puzzle were falling into place:

     (1) A geologically and geophysically educational float trip down the San Juan River (Utah and New Mexico) was organized and led by a pair of aggressive young Forest Oil geologists, during which Forest geoscientists achieved up-close-and-personal experience with outcrops of Paradox Ismay and Desert Creek, phylloid-algal mound "buried objects" (exhumed by the San Juan River).  This hands-on experience, combined with some newly recognized deconvolution techniques (Western provided empirical evidence for the power of their TVSW -- followed by a short, surface-consistent, spiking-decon operator -- to sharpen the seismic wavelet while preserving the cyclical Pennsylvanian rock echoes), subsequently led to the clear 2D seismic identification of the producing reservoir at Bug Field, a previously invisible (on 2D seismic) Desert Creek algal mound. Then ...
 

     (2) Willhoit left Forest Oil for "philosophical reasons" and accepted a more technical, and less managerial, position with Milestone Petroleum.  Here (and later, as Chief Geophysicist for BWAB, Incorporated, partnering with Milestone) he was fortunate to work with some of the finest geoscientists and managers-of-geoscientists, on Paradox and Permian Basin carbonate-buildup exploration and (3D) development projects. Kiva Field, John Greene, Cindy Crawley Stewart, Terry Britt, David Work, John Edwards, and Pat Kist all deserve special mention in the development of the patented D3DSP technique … although they probably cringe whenever they hear it mentioned. On second thought, Kiva may not be so sensitive any more, since its 4-5 mmboe EUR, at 5000 feet, is now mostly depleted.


     Continuing to develop his so-called "Willhoitian" processing sequence for 2D and 3D seismic data, and then following the 1986 price collapse, Willhoit accepted an offer from Milestone's ex-President to work in LL&E's New Orleans office as a member of their Technical Advisory Group.  For three years he studied various geological problems and potential (Willhoitian) 3D geophysical solutions, and refined his "proto-D3D" processing sequence.  But as the years went by, he was continually frustrated by the lack of acceptance of his advice (based on reprocessing 2D lines and a minimal 3D Vibroseis survey acquired with a Permian Basin velocity survey) to shoot and process 3D seismic for Strawn bioherm mounds (Lea County, NM), at Madden Field (Wind River Basin, WY), and in the south Louisiana marsh.  It must be admitted that, by 1990, the experienced interpreter Willhoit's perceived obsession with 3D seismic (which was then slowly gaining in general popularity) had less to do with the need for good seismic ties to drilled-and-logged wells (for any arbitrary locations, and especially for directional wells), and more to do with the many under-appreciated processing advantages offered by a 3D-acquired data set. Some of these advantages are:

  Reduction of TWT (2-way-time) and amplitude misties between lines;
  Better, more consistent near-surface static-correction "solutions";
  Better, more surface-consistent spiking deconvolution operators;
  Creation of "source- and receiver-static" maps (revealing shallow geology);
  Ability to judge real, source-generated, and near-surface "noise" patterns;
  Reduction of "sideswipe" by 3D migration; and
  The collapse of 3D "Fresnel zones" by 3D migration, using carefully chosen migration velocities (usually RMS interval velocities from checkshots and logs).

     He had argued loud and long that, as useful as it was to be able to display a 2D profile (extracted from a 3D volume) along any arbitrary well path, 3D acquisition and, especially, non-conventional PROCESSING provided much greater potential advantages.  He believed strongly that 3D processing could provide much more accurate and reproducible (for time-lapse work) reflection travel-times and "event character", representing the general complexity of the geology (and fluid-induced, acoustic-impedance changes).   In 1989, he was transferred into the New Orleans Division and told to take his own advice, and acquire a 3D survey to show its value, somewhere in south Louisiana.  The rim-syncline in the NE quadrant of the Lake Washington salt dome (Plaquemines Ph.) was chosen for this unpopular (among his colleagues), but personally exciting, assignment. Western Geophysical Company, in Houston, assisted in the design of the largest technically sound, all-boat, land-design 3-D survey that a one-million-dollar budget could permit, shoot, and process.

     In 1990, with the aid of an experienced field-QC geophysicist, LL&E, with Phillips Petroleum as its partner, recorded a 20-square-mile 3D survey (330 feet x 330 feet x 4 ms, wide-line, bricked shot strips) using a "static recording spread".  1,100 small-array geophone stations recorded each of 900 shallow-hole dynamite shots, using 275 (broad-band-field-filter) Seismic Group Recorder cassette tape units.  Essentially no environmental damage was caused by this 40-day layout-shoot-pickup operation. His unorthodox processing method (see FAQ #5 for a summary of the major differences) unexpectedly resulted in the highest-resolution, migrated, Gulf Coast (4 ms) seismic data that Willhoit had ever seen, based on correlations with broad-band-wavelet synthetics and some standard filter tests.  Many co-workers, managers, and partners suggested that these data should be filtered back to more clearly see the faulted railroad-track layers ("signal"), and to attack all the "obvious noise" ... but the data correlated extraordinarily well to the complex ("noisy") geology seen on numerous well logs.

     From that point forward, VoxelGeo, patent attorneys, LL&E's movement toward Burlington Resources in Houston, Mobil Exploration & Production (U.S.)'s movement toward Exxon (now ExxonMobil in Houston), and Energy Partners, Ltd. all played significant roles in the D3DSP technical story, but only VoxelGeo's contribution will be discussed here.  By mid-1991, after struggling with the conventional interpretation of the high-resolution Lake Washington (Plaquemines Ph., LA) data volume, Mr. Willhoit was intrigued by the medical-analog software that he saw at an Annual Convention (SEG or AAPG, or both?).  VoxelGeo was being developed and marketed by Vital Images (Fairfield, IA), and was derived from a medical "volume visualization" tool called VoxelView. By allowing polylines to be imbedded, along with 2D surface grids (interpreted horizons and faults) and text that would help geoscientists and engineers in their work, VoxelGeo was promoted to the geoscience community as a new method that would allow us to look into the earth, much as many medical professionals were then starting to look into opaque human bodies. Vital Images expected a warm reception from the petroleum industry, but it was too slow in coming. Three things are clear:

1. Willhoit realized that "planting seeds and growing 3D-detected sub-volumes" (or objects) [an Iowa-based phrase, no doubt] was the perfect way to demonstrate the resolving power of the D3DSP ... once the (noisy-looking) high-resolution reflectivity traces were converted into pseudo-acoustic-impedance traces via trace-sample summation (i.e., digital integration). Shell Oil's Information Theory training had taught him that this adding of samples would produce a relative (natural logarithm of the) acoustic impedance value for each final trace sample.  He saw that, after a VoxelGeo detection run, all VOXELs in a detected "object" had a certain property in common for their amplitude values. That is, all VOXELs had amplitudes less than (or greater than) a specified detection threshold, or "cutoff" (as a well log analyst, counting net pay sands, would call it). Hence, Willhoit used the name "Common-Impedance Object" (or CIO) in his patent application, to identify these detected geologic clusters of VOXELs, waiting to be measured and evaluated.  Vertical or horizontal slices through a seismic volume full of CIOs do look quite "noisy" ... because a smoothly layered earth is normally assumed. It was evident that only by viewing a seeded object (CIO) in three dimensions, not slicing it up, could its geologic integrity (or lack thereof) be revealed.  2D slices through a 3D (or D3D) volume will always look cosmetically more attractive to the classical geological eye, so the businessmen-processors tend to try to create layers out of the myriad objects that reflect, refract, and scatter sound waves. But what is the physical meaning of an "object" grown from a seed planted in a seismic "reflectivity" trough (or peak)? The Convolutional Model says that troughs and peaks represent the band-limited (via a low-frequency wavelet?) reflectivity of the interface between two layers.  And what does it mean to say, "For this portion of the reflection event (a layer-to-layer interface) we detected 500 VOXELs, and therefore we estimate a volume of 1500 acre-feet"? What does it mean to talk about the "volume of an interface"?  Inversion to some type of impedance volume was required for volumetric work, and in 1991 the concept of the CIO-based proto-D3DSP was born. A VoxelGeo-type, volume-visualization workstation acted as the catalyst.

2. A few years later, Vital Images sold the marketing and development rights for VoxelGeo to CogniSeis, which subsequently (another few years later) sold them to Paradigm Geophysical Company.  Willhoit now uses a Paradigm-licensed Linux Operating System version of VoxelGeo, on a fast Dell desktop workstation.

3. Many petroleum companies, seismic data processors and brokers, and software developers offer and use products that look similar to the patented D3D Seismic Process, but are fundamentally different.  Most continue to be locked into the layered-earth model, and the use of its theoretically complex 3D velocity field and tensor-elastic rock properties.  Such AVO attributes are measured on data sets recorded at long to very long source-receiver offsets.  Their goals are mainly to find prospective, 3D-depth-migrated or time-migrated petroleum traps, along with supporting amplitude anomalies, using AVO, four-component (4C) acquisition-processing-interpretation methods, and other structural and volumetric uses of depth migration and elastic inversion. No apparent infringement here. The D3DSP is simply based on:

-- Acquisition, to preserve the reflected-diffracted echo responses of buried objects, using accurately documented, (approximate) point-source/point-receiver, broad-band recording;

-- Processing (at the "heart" of the D3DSP), to image and focus these buried CIOs in 2-way-time (with depth migration preceding depth-to-time reconversion, for areas where severe lateral velocity variations would cause time-migration lateral-positioning errors), using an object-filled-earth model and many non-seismic sources of information to reduce uncertainties, and to avoid hasty judgments of signal versus random noise until a final broadband (D)3D migration has been performed; and

-- Interpretation of the resulting (using short-offset traces to arrive at a scalar) "logarithm of the relative acoustic impedance" VOXEL distribution, on a volume visualization/analysis workstation (converting to depth and thickness when a valuable object is prepared for drill-bit testing), in order to measure and evaluate (by calibrating with non-seismic information, such as well logs, production histories, published geologic cross-sections, etc.) the many CIOs found in a D3D-impedance Time Cube (T'ube). Depth conversion and drilling-prognosis thickness and volumetric estimates use the D3D Estimated Depth T'ube (EDT) to convert 2-way-times to depths, and other non-seismic sources of velocity data (nearby transit-time logs, etc.) and reasonable estimates to convert time thicknesses to actual thicknesses; a minimal sketch of this kind of time-to-depth conversion follows.
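
     For readers who want the depth-conversion arithmetic made concrete, here is a minimal Python sketch (emphatically not the patented EDT itself) of converting a CIO's two-way-times to depth with a non-seismic interval-velocity function: depth(T) is the integral of v(t)/2 from 0 to T, since the TWT traverses the path twice. All names and numbers below are illustrative assumptions.

import numpy as np

def twt_to_depth(twt, t_knots, v_int):
    """twt: TWTs to convert (s); t_knots: TWTs where the interval-velocity
    function is defined (s); v_int: interval velocity (ft/s) between knots."""
    dt = np.diff(t_knots)
    # Accumulate depth at each knot: one-way distance = v * (TWT/2).
    depth_knots = np.concatenate([[0.0], np.cumsum(v_int * dt / 2.0)])
    return np.interp(twt, t_knots, depth_knots)

# Hypothetical velocity gradient from 6000 to 10000 ft/s over 0-3 s of TWT:
# t_knots = np.linspace(0.0, 3.0, 31)
# v_int = np.linspace(6000.0, 10000.0, 30)
# depth_ft = twt_to_depth(np.array([1.234]), t_knots, v_int)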





Answer to #2 (How are D3D seismic volumes different from more conventional 3D volumes?)

1. The 3D volumes to which D3D volumes are normally compared are 3D pre- or post-stack, TWT-migrated or depth-migrated volumes, in which a collection of seismic traces is displayed, and whose trace-sample amplitudes represent recorded and "processed" responses to reflection echoes.  These can be very difficult to interpret for geological (stratigraphic) information, due to the complexity of the laterally and vertically varying processed wavelet, and the effects of deconvolution filters, which distort the actual geological reflectivity (TWT or depth) series. These filters are designed both to de-reverberate (shorten) the wavelet and to improve the lateral continuity of reflections ...  once upon a time, seen as black-and-white, variable-area peaks and troughs, on paper records. Typical goals for the interpretation of a set of inlines, crosslines, and time slices, displayed in TWT or depth, are:

     Time and Depth structure maps for tops or bases of layers, called reflection horizons (usually named after a color, TWT, formation, or "paleo bug"), including mapped faults with fault-traces (where the horizon surface would be found to be faulted-out) shown clearly;
     Amplitude maps, drawn from extracted values associated with the samples picked along a horizon (or some average, extreme value, or calculated attribute, within a time window around a horizon);
     Dip-magnitude, dip-strike, trace coherency, Hilbert attribute, and other calculated layer-based maps.

2. A D3D volume, on the other hand, is a relative seismic-impedance TWT-volume.  Within it, the wavelet has been severely shortened and simplified (zero-phase, using known geological markers, like shallow gas sands or an isolated high-density anhydrite deposit), and the spiking deconvolution procedure was designed to have the smallest possible effect on the geology. Then the huge number of recorded and D3D-processed reflection wavelets (one at each change in rock acoustic impedance) is transformed from "Changes in Reflectivity" indicators into "Relative (logarithms of) Acoustic Impedance" at each sampled TWT. In the virtual-earth, D3D-impedance T'ube, these samples are referred to as VOXELs.  So wavelets and traces and samples are no longer significant to the D3D-trusting interpretation team, as they try to measure and evaluate low-, high- (and "constrained"-) impedance, detected clusters of common impedance (CIOs) that they believe to represent buried, three-dimensional hydrocarbon reservoirs ... or sealing shales, or salt, or other non-hydrocarbon-bearing rocks (objects). Because no attempt has been made to enhance trace continuity, it will be more difficult to use automatic horizon tracking, but easier to grow geologically important D3D-CIOs.

3. Digital polyline well paths, with approximate TDs, are already imbedded inside a D3D-impedance volume, or are known and easily imported.  All velocity-versus-depth ("checkshot" and transit-time DT log) information, within and nearby, has been correctly referenced in the D3D processing flow. Non-seismic information, such as outcrops, published cross-sections, and synthetic seismic trace curves, is critical to a true D3D processing flow.

4. Tuning effects are present in both types of volumes, but the D3D-impedance volume generally exhibits fewer of them than the conventional, band-limited reflectivity volume.  This is because of the D3DSP's emphasis on a short, broad-frequency-band, zero-phase wavelet, rather than the conventional attitude of accepting a longer, more complexly side-lobed wavelet in order to eliminate non-layer-looking "random noise" (by doing signal-to-noise analyses early and continuously in the acquisition and processing flow).  Because the D3D processor does not try to cosmetically enhance any layers, and handles known shot-generated noise trains as carefully as possible, early on (shot-record deletion, surgical and outside muting, etc.), rather than by wholesale attenuation of any and all non-layered-looking energy, the processor will be trained to err on the side of,


    "Let the geologically trained, processing-QC (interpreter)
    decide if this energy is CIO-signal or true noise."


Diffraction and "side-swipe" off of small economically important objects are important to the final D3D-impedance volume's integrity, so the best time to make this important (Signal or Noise?) call, is after an accurate (D)3D migration. In any case, CIO targets that "tune" in a D3D-impedance volume are usually too thin (one or two samples thick?) to be of economic interest.

5. Finally, volumetric analyses are easier with the D3DSP. In this new process, seed-planting using various D3D-impedance "cutoff" (detection threshold) parameters is followed by geological analysis of the size and geometric shape of the detected CIO (and comparison with what was found in nearby wells); then, once an acceptable CIO has been judged to warrant it, by counting detected VOXELs, converting to an estimated volume, and estimating recoverable reserves via an engineer-supplied recovery factor (a minimal seed-and-count sketch follows). All this previously un-geophysicist-like "interpretation" (more like a "geophysician's") takes the place of "amplitude-to-thickness" conversion for common thinner-than-tuning-thickness targets, which must use a reservoir interval-velocity estimate and an estimate of the shape of the conventionally processed wavelet ... and its dangerously interfering side-lobes.
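
     To make the seed-and-count idea concrete, here is a minimal Python sketch of detecting a CIO from a planted seed and converting its VOXEL count to acre-feet. This is NOT VoxelGeo's algorithm; off-the-shelf connected-component labeling stands in for it, and the function name, seed indices, and acre-feet-per-VOXEL figure are illustrative assumptions.

import numpy as np
from scipy import ndimage

def detect_cio(impedance, seed, cutoff, voxel_acre_feet):
    """impedance: 3D array of relative-impedance VOXELs (inline, xline, TWT);
    seed: (i, j, k) index of the planted seed; cutoff: detection threshold
    (VOXELs below it are candidates); voxel_acre_feet: assumed bulk volume
    of one VOXEL, in acre-feet."""
    candidates = impedance < cutoff            # low-impedance VOXELs only
    labels, _ = ndimage.label(candidates)      # 3D connected components
    cio_label = labels[seed]
    if cio_label == 0:                         # seed itself is above cutoff
        return 0, 0.0
    n_voxels = int(np.sum(labels == cio_label))
    return n_voxels, n_voxels * voxel_acre_feet

# Hypothetical usage: 55 ft x 55 ft bins, 1-ms samples at ~10,000 ft/s
# (~5 ft thick), so one VOXEL is roughly 55*55*5/43560 = 0.35 acre-feet:
# n, vol = detect_cio(imp, (120, 85, 410), cutoff=-0.02, voxel_acre_feet=0.35)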




Answer to #3 (How is the D3DSP different from working with conventionally "inverted" seismic volumes?)

     Use of the D3DSP results in a "Relative Impedance" T'ube, or volume.  There is no attempt to create absolute acoustic impedance values (in grams per cubic centimeter multiplied by feet or meters per second) for each sample, or VOXEL.  The D3DSP considers this effort to be fairly futile, because of the normal uncertainty in both the acquisition parameters (e.g., source and receiver positions, field filtering, recording system impulse response) and the effects of even the most basic, "well-understood" processing algorithms applied to real seismic traces.  Except for very poorly acquired data sets, these recorded seismic traces always contain a mixture of both important (CIO and layer) "signal" and unwanted "noise". It is not easy to generate valid, accurately positioned D3D-impedance traces that represent, in some robust sense, the relative logarithm of the rock VOXELs' average acoustic impedance (i.e., pseudo-AI logs, in TWT).  The D3DSP states that any time spent trying to arrive at absolute impedance for final trace samples would be better used quality-controlling the acquisition parameters and the D3D processing (especially the rock interval-velocity field, to be used for pre- or post-stack migration).


     Also, conventionally inverted seismic volumes (if there are such things) can differ dramatically, one from another.  Without the "object-filled earth model" assumptions and the subsequent creation of a broad-band, zero-phase wavelet, conventional inversions can be quite unreliable away from their calibrating well control.  And well log curves are subject to both interpretation errors and measurement variability (upon later repeat logging), probably to a degree similar to the astounding variability in conventionally processed 2D and 3D seismic data. In decades past, "Turkey Shoots" were commissioned by oil company clients to decide on a preferred processing vendor (and/or processing sequence) for 2D seismic lines.  A typical Turkey Shoot would occur when an oil company geophysicist gave five processing centers the exact same 2D line's field data (with or without well information), and then a few weeks later received five (usually very) different processed lines to review and compare to well control ... and to their own pre-conceived ideas.


     This disturbing result (to some, but not all, because 2D geophysicists could pick and choose the picture that best sold a prospect) happened frequently, and the questions usually whispered by non-geophysicists were, "How could this happen?  How can the same digital seismic field-recorded tapes and the same support data be used to create such drastically different pictures of the subsurface ... and all by competent Processors?" The D3DSP answer to this family skeleton in the closet of every thoughtful geophysicist is that this occurs when different processors, all businessmen, are permitted and encouraged to guess at what the client wants to see. They each take these field tapes, full of the seismic responses of an "Object-filled Earth", and apply different algorithms and processing sequences (with minimal or no geo-interpreter input), for the express purpose of creating "High Signal-to-Noise Ratio" (read:  clean-looking, laterally coherent) 2D cross-sections that the processor believes look like a reasonable "Layered-earth" picture.  It was the fundamental "Layered-earth" assumptions that allowed the creation of so many different 2D seismic "Layered Pictures", all consistent with what was believed to be the client's desired, layered-look product. If the client liked a certain version, and was willing to pay to process the rest of its data set(s) similarly, this version was judged the "winner" ... whether or not it portrayed an accurate subsurface picture, in either a structural or stratigraphic (or "objective") sense.



Answer to #4 (Is a special type of acquisition technique required for the D3DSP, or can any 3-D seismic volume be re-processed to create a D3D-impedance T'ube?)

     A founding principle of the D3DSP is that every effort must be made to know, as accurately as practical, the X, Y, and Z coordinates of each source and receiver in the recording layout.  Single-point sources and single-point receivers are preferred, but stationary arrays seem to work well if they do not cover too large an area.  The larger the area covered by a source or receiver array, the more subsurface "smearing" of the seismic image (D3D-impedance values) will occur. High lateral resolution requires very localized source positions and receiver (array) coverage.


     Land data, diligently acquired by sober surveyors, shot-hole drillers, and Vibe truck drivers, seems to work best. There is no "type of source" requirement, except that high resolution requires high (and low) frequencies. Even broad-sweep-frequency-band Vibroseis 3D surveys are D3D-useful, if the source arrays are not too large (typical survey parameters call for summing multiple Vibrator-sweep records into each field record, as the vibrator trucks are rolled along each source line) and the vibrators are correctly synchronized and similarity-tested. Vibroseis data processing under the D3DSP can be a gamble because Vibroseis is a surface source (as opposed to deep shot-hole explosives), and surface-traveling seismic energy is always generated; relatively long source and receiver arrays are commonly used to attenuate its strength. Fortunately for the D3DSP, recent high-resolution Vibroseis surveys have been recorded using increased-dynamic-range systems that allow the "ground roll" noise (whose large amplitudes formerly would swamp the weaker reflection signal) to be recorded, and then digitally filtered later in the processing phase.


     All of the above assumes that broad-band acoustic sound waves can be sent downward beneath or through the sometimes thick near-surface "weathered" layer(s).  Dry weathered rocks, above the water table, can make such deeply penetrating sound waves, with high frequencies, very difficult to record.


     Marine streamer-cable data has been problematic for the D3DSP, although recent contractors have claimed to be able to measure (small-arrayed) source and receiver positions with land-data accuracy. At this writing, no attempts have been made to D3D-verify these claims, and cosmetically layered marine streamer-cable data are the last choice for an acceptable D3D acquisition method.




Answer to #5 (How is D3D processing different from conventional 3D processing?)

     An example of a typical D3D-processing sequence, for the U.S. Gulf of Mexico shelf, is shown under the "D3D versus Conventional Seismic" button on VTV's WELCOME Home Page. The processing flow was developed with WesternGeco processors, and uses readily available processing algorithms (although some software packages, written on and for a microprocessor CPU, may have layered assumptions incorporated into their framework and may be unsuitable for the D3DSP).  The D3DSP contains no new processing algorithms, but many unorthodox variations in the parameters, and in the exact sequence, are applied to established, layer-based processing steps. The following are some of the most notable steps:

1. For land data, the D3DSP will match source and receiver elevations with published topographic maps, to validate the lateral position (and azimuthal orientation) of the survey.  There is no sense in proceeding if you really don't know where the survey is located.

2. Try hard to remove the digital recording system's (geophones, cables, field anti-alias filter, low-cut filter, notch filter, amplifiers, etc.) instantaneous impulse response.  Note that a Vibroseis operation (correlation of field records with the instrument-filtered sweep) inherently performs most of the instrument "de-phasing" (to zero-phase). Only the Fourier frequency amplitude spectrum will need to be whitened (balancing the amplitudes of low and high frequencies), using a time-varying spectral balancing algorithm. This was a dynamic new concept in the late 1980s.

3. Avoid applying weighted mixes or smearing of clusters of trace samples (such as by the use of frequency-wave-number (F-K) Fourier filters and F-X deconvolution) to remove pre-conceived "noise" ... much of which may be subsurface-CIO-generated signal, needed in the D3DSP.

4. Use manual and (controlled) automatic trace editing, early in the processing sequence, to delete obvious, especially source-generated, non-geological "noise" energy.  It is better to delete entire records and clusters of traces, than to leave them in the data set in order to enhance stack-fold. And DO NOT JUDGE "SIGNAL vs. [random?] NOISE" UNTIL AFTER D3D-MIGRATION!  Tying D3D-impedance traces to logged acoustic impedance curves, or TWT-converted gamma ray, resistivity, or other well log curves, is really the only way to judge if high frequencies are signal or noise.  The D3DSP contends that one man's (layer-based) noise is another man's (buried CIO) signal.

5. Use time-varying spectral balancing (whitening), with frequency end-members as low as possible (3 Hz?) and as high as possible (close to the Nyquist/folding frequency).  For example, the Nyquist frequency is 250 cycles per second (Hz) for 2-ms sampled data, which records 500 digital samples every second.  Modern telemetry recording systems use few cables, and thus have less chance of recording cross-feed and other sources of aliased high-frequency noise. This TVSB step makes high and low Fourier frequencies have nearly the same amplitudes, with the notches in the "balanced" amplitude spectrum representing the amplitude spectrum of the real geology. It also helps stabilize the wavelet in TWT, making the deeper wavelet look more like the shallow wavelet ... which is good for the spiking-deconvolution step that follows. If near-surface static time shifts allow the stacking of such high frequencies, the D3DSP tries to preserve them, because they may hold useful geological information after D3D-migration. (A minimal whitening sketch follows this step.)

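     A minimal Python sketch of the kind of time-varying spectral balancing described in step 5, assuming a simple overlap-add scheme with a Hann taper; commercial TVSB algorithms differ in their details, and every name and parameter here is an illustrative assumption.

import numpy as np

def tv_spectral_balance(trace, dt, win=0.5, f_lo=3.0, f_hi=None, eps=1e-8):
    """trace: 1D array; dt: sample interval (s); win: window length (s)."""
    n = len(trace)
    nw = int(win / dt)
    nyquist = 0.5 / dt                 # e.g., 250 Hz for 2-ms (dt=0.002 s) data
    f_hi = f_hi or 0.9 * nyquist       # stay just below the folding frequency
    hop = nw // 2
    taper = np.hanning(nw)
    freqs = np.fft.rfftfreq(nw, dt)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    out, norm = np.zeros(n), np.zeros(n)
    for start in range(0, n - nw + 1, hop):
        seg = np.fft.rfft(trace[start:start + nw] * taper)
        # Flatten the amplitude spectrum inside the band; keep the phase.
        white = np.where(band, seg / (np.abs(seg) + eps), 0.0)
        out[start:start + nw] += np.fft.irfft(white, nw) * taper
        norm[start:start + nw] += taper ** 2
    return out / np.maximum(norm, eps)
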
6. Next, apply a surface-consistent Wiener-Levinson spiking deconvolution algorithm, with the shortest possible (interpreter-chosen) filter length that attacks wavelet side lobes, but not cyclical geology (this is a tricky judgment). This filter uses an autocorrelation of each whitened trace to identify "reverberations" or "ghosts" (peaks and troughs) that always seem to trail behind the main lobe of the acoustic pulse.  The Wiener-Levinson algorithm makes some reasonable assumptions (which the TVSB above tries to ensure), and creates an attenuation filter that attacks the side lobes within the filter's (two-way-time) length, following every peak or trough. The preceding whitening step tries to push as much energy as possible up into a dominant lobe of the unknown-phase wavelet, thereby helping to avoid destroying any real (cyclical) geology that may LOOK like a reverberating seismic wavelet, but is ACTUALLY what the D3DSP is trying to enhance:  a geological object's high-resolution SIGNAL. Historically, these high frequencies, and the order of the whitening and decon, have been the most contentious part of the D3DSP. The rest of the procedure is often seen as simply a more intense, geologically guided processing-QC effort than is normally applied. (A single-trace sketch of the deconvolution follows.)

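     Here is a single-trace Python sketch of the core Wiener-Levinson calculation, with scipy's Toeplitz solver standing in for a production Levinson recursion; a true surface-consistent implementation would estimate operators per source and receiver position, so treat this only as an illustration.

import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_decon(trace, op_len=32, prewhiten=0.001):
    """Design and apply a least-squares spiking operator of op_len samples."""
    full = np.correlate(trace, trace, mode="full")
    r = full[len(trace) - 1 : len(trace) - 1 + op_len].copy()  # one-sided autocorr
    r[0] *= 1.0 + prewhiten            # pre-whitening stabilizes the solve
    desired = np.zeros(op_len)
    desired[0] = r[0]                  # desired output: a spike at zero lag
    op = solve_toeplitz(r, desired)    # Toeplitz normal equations
    return np.convolve(trace, op)[: len(trace)]
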
7. Conscientiously monitor the static-time and normal-move-out (NMO) velocity corrections, using shot- and, especially, CMP-sorted collections of TWT traces, called "shot-gathers" and "CMP-gathers" respectively, when working in the post-stack migration mode.  Actual breaks and discontinuities (e.g., faults) will move through a set of CMP- and shot-gathers quite differently than breaks caused by sudden, artificial trace-to-trace time shifts ("busts" in the "static solution"). Also, NMO-corrected reflection events, required to be essentially flat on these display panels, will exhibit tell-tale "smiles" or "frowns" if the stacking velocities used are too slow or too fast, respectively (a minimal NMO-correction sketch follows).  Minor, residual move-out errors will be handled by the choice of an "outside mute" function (TWT vs. source-receiver offset distance, where samples are zeroed out above the mute line for each CMP-gather).   In the case of pre-stack depth or time migration, these QC steps must be performed using analogous sets of shot- and CMP-gathers, but these may be depth-migrated and thus displayed in depth rather than TWT. And the common-midpoint concept is irrelevant, replaced by a common "depth point" or "common reflection point".

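     A minimal Python sketch of the hyperbolic NMO correction being QC'd in this step, t(x) = sqrt(t0^2 + x^2/v^2); the function and its arguments are illustrative assumptions. Velocities that are too slow or too fast leave exactly the residual "smiles" or "frowns" described above.

import numpy as np

def nmo_correct(gather, offsets, times, v_nmo):
    """gather: (n_samples, n_traces); offsets: source-receiver distance per
    trace; times: TWT axis (s); v_nmo: stacking velocity per t0 sample."""
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        # For each output t0 sample, read the input trace where the
        # hyperbola says the event actually arrived.
        t_src = np.sqrt(times ** 2 + (x / v_nmo) ** 2)
        corrected[:, j] = np.interp(t_src, times, gather[:, j],
                                    left=0.0, right=0.0)
    return corrected
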
8. Apply as severe a pre-stack mute (in the post-stack migration mode) as possible, remembering that severe outside mutes make shallow data look like it was acquired by widely spaced shots with many serious "skips" in the data coverage. But severely muting the outside (long source-receiver distance) traces will ensure that the D3D-impedance volume is nearly a (D3D-desirable) Offset Variations Absent (OVA) product. D3D-impedance volumes should be considered OVA data sets; in many ways the opposite of AVO (Amplitude Variations with Offset) data.

9. In the post-stack migration mode, interpolate both the trace spacing and the sample spacing to create additional, finer-sampled cells (VOXELs) into which the improved D3D-migration velocity field (below) is allowed to migrate/move the weighted seismic amplitude values (a minimal interpolation sketch follows). Because the D3DSP assumes that, in a stacked data volume, every subsurface "point" is equivalent to an "exploding reflector" (or "diffractor", which produces its own 3D diffraction pattern), precise and accurate velocities will focus the data more precisely if it is allowed to fill a finer mesh of VOXELs.  If the D3D-migration velocities are well matched to the shape of the diffraction hyperbolas, an almost magically high-resolution product will result. If the 3D diffractions are distinctly non-hyperbolic (e.g., beneath fast salt or permafrost, or under slow glacial till, etc.), then pre-stack depth migration, with advanced, non-hyperbolic-assumption ray-tracing algorithms, may be required. But in every case, the D3DSP goal is to focus the continuously reflected and diffracted seismic wave-field into a high-resolution VOXEL representation of the rock (relative acoustic) impedance. While obeying the digital-signal-processing "laws of aliasing" from Information Theory, the D3D-processing flow has demonstrated that the acquired lateral trace spacing and vertical sample interval do not necessarily limit the final D3D-migrated VOXEL size ... or resolution. The traditional expectation, by most geoscientists and thus their processors, of a "noise-free" seismic picture on which trace-to-trace reflection-event continuity is the rule, seems to be a much more severe limit to detecting and resolving buried "objects" (e.g., reservoirs).

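     A minimal Python sketch of the pre-migration interpolation in step 9, assuming band-limited (sinc) resampling of the time axis and crude midpoint averaging of traces; production interpolators are far more careful about aliasing and dip, so this only illustrates the finer VOXEL mesh being prepared.

import numpy as np
from scipy.signal import resample

def refine_volume(volume):
    """volume: (n_inline, n_xline, n_samples). Returns a volume with twice
    the time samples (e.g., 4 ms -> 2 ms) and doubled inline trace density
    (hypothetical layout)."""
    nt = volume.shape[-1]
    fine_t = resample(volume, 2 * nt, axis=-1)     # sinc-based resampling
    mids = 0.5 * (fine_t[:-1] + fine_t[1:])        # crude midpoint traces
    out = np.empty((2 * fine_t.shape[0] - 1,) + fine_t.shape[1:])
    out[0::2] = fine_t
    out[1::2] = mids
    return out
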
10. Post-stack time- (or pre-stack depth-) migrate using a one- or two-pass (possibly cascaded) migration algorithm that preserves the relative amplitude character of the data set.  This will rule out many frequency-domain migration algorithms, where reflection continuity is a dominant (conventionally desired) goal. Kirchhoff-type algorithms are often the most useful, but there are many good ones available.  To more accurately match true rock interval velocities, migration should be from a horizontal "datum plane" near or just below the surface, because the T-squared-X-squared (hyperbolic) "velocity" associated with a given diffraction curve depends on both the speed of sound in the overlying rock and the TWT of the apex of the diffraction energy.  Changing or tilting datum planes changes the TWT to the apex, and the slope of a diffraction "hyperbola", and therefore the apparent migration velocity (Vmig) needed to collapse/focus the diffraction energy.  For better or for worse, anisotropy is ignored in the (mainly near-trace) patented D3DSP.

11. Using post-stack migration, the velocity functions will be pairs of TWT vs. Root-Mean-Square interval velocity (Vrms) values.  Each TWT-Vrms pair is supplied to the processor by the geo-interpreter, and is calculated from non-CMP-seismic sources, such as check-shot or velocity surveys and integrated transit-time logs (a minimal Vrms sketch follows). Stacking velocities (Vstk) are only used to indicate possible velocity gradients, and as an upper limit to the Vrms functions.  If the Vrms is faster than the Vstk at any point, re-calculate both the Vrms tables and the Vstk values (especially check the datum planes, and confirm that they are the same ... and correct). Using pre-stack migration, a (geo-interpreter-supplied) 3D layered interval-velocity region model should be employed, perhaps to create a 3D matrix of (ray-traced, or eikonal-equation-derived) one-way travel-times from every (X, Y) point on the "datum plane" to every subsurface (x, y, z) point. And the rock interval velocities for the velocity region model must be estimated from non-CMP-seismic sources, too.  Using this one-way travel-time matrix allows accurate 3-D depth migration, if and when the velocity field is accurate, for arbitrary (non-hyperbolic?) diffraction-curve shapes.

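     The Vrms arithmetic in step 11 is simple enough to show directly. Here is a hedged Python sketch, with a hypothetical three-layer example, computing Vrms(t_n) = sqrt( sum(v_i^2 * dt_i) / sum(dt_i) ) from non-CMP interval velocities:

import numpy as np

def rms_velocity(v_int, dt_int):
    """v_int: interval velocities; dt_int: TWT thickness of each interval (s).
    Returns cumulative TWT and Vrms at the base of each interval."""
    t = np.cumsum(dt_int)
    vrms = np.sqrt(np.cumsum(v_int ** 2 * dt_int) / t)
    return t, vrms

# Hypothetical three-layer example (ft/s and seconds of TWT):
# t, vrms = rms_velocity(np.array([6000.0, 8000.0, 10000.0]),
#                        np.array([0.5, 0.4, 0.6]))
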
12. Apply a RUNning-SUM integration to each final D3D-migrated trace, to arrive at an "inverted" D3D-impedance T'ube (Time Cube, or volume); a minimal sketch follows.  If a seismic trace time-series is somehow proportional to CHANGES in rock acoustic impedance under that surface position, then its integral (summation) can be shown to be approximately proportional to the natural logarithm of the rock acoustic impedance itself.  A low-cut filter must be applied to eliminate the inevitable DC (i.e., "direct current") component from each (logarithm-of-the-relative-acoustic-) impedance trace, and it should be as low as possible (2-5 Hz?), because low frequencies on the D3D-impedance trace are necessary in order to seismically portray thick CIOs.  With few low frequencies, a thick low-impedance pay zone will look like a thin, low-impedance layer suspended above a thin, high-impedance layer ... instead of one, wonderfully thick, low-impedance target.  It takes low frequencies to faithfully portray thick zones, and high frequencies to portray their boundaries.

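     A minimal Python sketch of this integration-plus-low-cut "inversion" step; the second-order Butterworth filter and its 3-Hz corner are illustrative choices, not a prescription.

import numpy as np
from scipy.signal import butter, filtfilt

def reflectivity_to_rel_impedance(trace, dt, f_corner=3.0):
    """If the trace is proportional to d[ln Z]/dt, its running sum is
    proportional to ln Z, up to the DC constant removed below."""
    log_z = np.cumsum(trace) * dt                  # running-sum integration
    # Low-cut (high-pass) at f_corner, normalized to Nyquist = 0.5/dt:
    b, a = butter(2, f_corner * 2.0 * dt, btype="highpass")
    return filtfilt(b, a, log_z)                   # zero-phase filtering
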
13. The D3D-processing sequence requires intense geo-interpreter quality control in many phases of the processing work, but usually proves to be quicker and less expensive (even when geo-man-hours are included) than conventional processing. The most important by-product of D3D processing is that the geo-interpreter (and his geo-engineering team) who QCs the D3D processing gains increased confidence in the subtle, high-resolution images and CIOs it provides. This usually translates into lower drilling risk (looking ahead of the drill bit) and higher per-well reserves found.





Answer to #6 (Does AVO [Amplitude vs. Offset] play a role in the D3DSP?)

     No. The D3DSP is an OVA technique, which is the opposite of AVO: "Offset Variations Absent" vs. "Amplitude Variations with Offset". The D3DSP teaches that it is best to design a rather severe outside mute function, and then keep it constant (or slowly varying) over the entire survey.  Changes in D3D-impedance need to be interpretable as changes in subsurface geology, not as possible changes in the mute pattern (and the attendant far-trace AVO effects). The D3DSP teaches that a too-severe outside mute function is better than a too-lenient one; the too-lenient mute is often chosen when stack-fold is highly desirable. High stack-fold data will almost always look more layer-like, because of the "smearing" effects and the (ostensibly corrected) trace mixing of the CMP stack.




Answer to #7 (Of what use are synthetic seismograms and wavelet analyses to the D3DSP?)

     Conventionally, synthetic seismograms are used to correlate known geology (logs) to nearby seismic traces, assumed to be composed of the TWT-reflectivity series (proportional to changes in acoustic impedance) convolved with a (possibly temporally and spatially varying, mixed-phase) estimated wavelet. There are also many layered-earth assumptions incorporated into most conventional, Convolutional Model synthetic seismogram algorithms. "Trial-and-error" is normally employed to find a wavelet that gives the "best match" of the well-log synthetic to the seismic trace. (A minimal convolutional-model sketch follows.)

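     A minimal Python sketch of the basic Convolutional Model described above, assuming a zero-phase Ricker wavelet and an impedance log already converted to TWT; real synthetic-seismogram packages also handle multiples, attenuation, and careful depth-to-time conversion, all omitted here.

import numpy as np

def ricker(f_peak, dt, length=0.128):
    """Zero-phase Ricker wavelet with peak frequency f_peak (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic(impedance_twt, dt, f_peak=30.0):
    """Normal-incidence reflectivity from a TWT-sampled impedance log,
    convolved with the wavelet."""
    z = impedance_twt
    refl = (z[1:] - z[:-1]) / (z[1:] + z[:-1])
    return np.convolve(refl, ricker(f_peak, dt), mode="same")
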

     In the D3DSP, the goal is not to match one well to nearby seismic, but to confirm or adjust the D3D-reflectivity-stack wavelet's phase and polarity ... to be used at many drilled wells or proposed locations. Synthetic seismograms are also D3DSP-useful for analyzing the effects of multiple-reflection noise, but a Vertical Seismic Profile (VSP) is superior, when available. An isolated, low-impedance gas sand, or coal bed, or a high-impedance volcanic flow or salt sheet, can serve the D3DSP as an excellent wavelet "tattletale".  When a zero-phase wavelet has been achieved by the processor, the top of a thick salt body should show up on a Positive Physical Polarity (PPP, SEG standard reverse polarity) seismic data set as a single peak (positive amplitude) with symmetric trough side lobes on either side.  Likewise, a single, thin, low-impedance gas sand will show up on the PPP data as a trough-over-peak event, with the trough and peak having equal amplitude magnitudes (when encased in a fairly uniform shale).  If such tattletale geology exists for a D3D-processing project area, it can definitely be used (with or without a synthetic seismogram) to phase-rotate the stacked traces to obtain a PPP, zero-phase reflectivity volume.  And this is what the D3DSP requires as input to the "inversion" (integration) step.


     Synthetic seismograms create synthetic seismic traces, in TWT, from transit-time and density logs (and other logs, if no DT or RHOB logs are available), in depth.  A synthetic makes "seismic TWT traces" from acoustic-impedance depth logs. The D3DSP creates "relative natural logarithm of the rock acoustic impedance" log traces (D3D-impedance), in TWT, from acquired seismic records, in TWT.  The time-to-depth conversion is done after desirable, potentially valuable CIO targets have been imaged and measured.


     Other than what is mentioned above (whiten and deconvolve to create a broad-band, zero-phase wavelet, then possibly phase-rotate it), no "wavelet analysis" is performed by the D3DSP. The near-final step of trace integration essentially obliterates the wavelet and leaves a high-resolution distribution of the D3D-impedance of the buried rocks and fluids.




Answer to #8 (Are the high D3D frequencies really "signal", or merely a processing artifact?)

Most of the D3D high frequencies have been seen to correlate well with high-frequency "events" measured by well logs.  Interval transit-time logs and bulk-density logs are obviously the ones that will show a correlation, because it is the combination of their properties that controls the reflection and transmission of propagating sound in the earth (and in the water and air, too).  Other common well logs, such as natural gamma radiation, resistivity, (permeability), porosity, etc., do not directly influence seismic reflection strength (echoes), so it is a genuinely pleasant surprise when (D3D-impedance or any other) seismic data correlates very well with these logs.  Aside from the ability of a processor to make almost any picture he wants out of a conventionally processed seismic data set, this indirectness is probably the most frustrating thing about geophysics (seismic) to a geologist, engineer, lawyer, and (especially) an investor.


     It must be noted that many high frequencies on a D3D-impedance display ARE processing artifacts, but a geologically educated (or inclined) interpreter will usually recognize them as such, and understand why a processor could have removed them but chose not to, in order to protect the signatures of real (potentially valuable) buried common-impedance objects.  Processing artifacts do not normally form familiar-geology-shaped, compact, three-dimensional objects, or geo-bodies.




Answer to #9 (What are the [thickness] resolution and detection limits of D3D-impedance data?)

     The lower limit to both the lateral and vertical resolution is unknown at this time.  As mentioned (#5, above, in the answer to the "Difference between D3D and Conventional Processing"), the combination of
     [A] a very broad-band, zero-phase wavelet,
     [B] the use of accurate, interpreter-supplied migration velocities (Vrms), and
     [C] pre-migration trace- and sample-interpolation
     is an extremely powerful set of resolution-enhancing tools. New examples will be published by VTV as they are produced.


     The EI-27 CIB CARST sand example shows re-sampling from 3 ms to 2 ms, and the results seem to add real information not evident on the 3-ms conventional processing (a flat base, a possible gas-water contact, on the gas-sand CIO, and accurate volumetric predictions).


     The northern Louisiana Lower Tuscaloosa sand example shows the result of 2-ms-to-1-ms sample interpolation, as well as 110x110-foot to 55x55-foot stacked-trace interpolation. There is surprisingly good correlation between the one (yellow) low-D3D-impedance VOXEL (1 ms = 5 feet @ 10,000 feet/second) and the three-foot perforated interval in the low-gamma-ray sand, in the productive well. Because of the thin producing zone, and the parameters used to process (prior to our discovering this thin-pay example), this is probably more an example of "detection" limits than "resolution" limits. The ST-26 "O" sand in the D-31 well also matches a single low-impedance VOXEL with a very thin, unswept pay zone.


     The limits on resolution are probably due to the static time corrections for stacked data (no such limits on single-fold data), the knowledge of the correct (generally non-hyperbolic) migration-velocity diffraction-curve shapes, and the 3D migration algorithm's ability to deal with laterally and vertically changing velocities. But the largest contributor to the limit is probably over-emphasis on stack-fold, to achieve that "layered look".  Stacking means mixing and smearing, and thin, "high-frequency" geology is not normally found in widespread layers.  Lateral resolution and vertical resolution are complementary to one another. The creation of "layers" leads to low lateral resolution, which necessarily interferes with any attempt to increase vertical resolution.  The "object-oriented" goals of the D3DSP have no such obstacles to higher resolution.




Answer to #10 (What is most difficult about locating valuable buried objects using the D3DSP?)

     It is difficult to get a D3D proposed well drilled without conventional seismic support (amplitude, structure, AVO, etc.). And if the target CIO has such support, it will be difficult to claim that it was drilled using the D3DSP.


     But, seriously, depth conversion and lateral positioning confirmation in complexly overburdened areas (salt overhangs, overthrusted older rocks, shallow gas, etc.) pose the most difficult challenges ... after the D3D-impedance volume has been generated from raw field data, loaded onto a volume-visualization and -analysis workstation, and then measured and evaluated VOXEL-volumetrically.


     Also, understanding what "valuable objects" will be the eventual targets is not always easy. Arriving at the final CIO (a seeded and detected sub-volume) requires considerable trial-and-error adjustment of the "impedance cutoff" (VoxelGeo's detection threshold), and comparisons to drilled, logged, and possibly produced wells ... especially if any of these may have penetrated a CIO being analyzed. This calibration process is tedious at first, but becomes quite intuitive after an interpretation team works a D3D-impedance T'ube for any length of time.


     It is easy to see that the real heroes in the D3D location of valuable CIOs are the "computer scientists" ... who create, load, and verify the enormous quantity of digital (x, y, z) and (x, y, TWT) values that make up a three-dimensional Geographic Information System, or so-called "D3D-impedance T'ube".  Some of its constituents include:

     Land, legal, topographic, and other cultural data;
     Geographically varying, time-to-depth relationship data;
     Stacking- and migration-velocity data;
     D3D-impedance seismic (including historical ownership) data;
     Drilled-well surface location (including historical) data;
     Subsurface 3D directional well path data;
     Productive and nonproductive "formation tops" (and "bases") data;
     Contoured TWT and depth structure map data;
     Geological fault cuts and contoured "fault plane" map data;
     Production history (start and stop dates, fluid types, initial and cumulative
     production, estimated ultimate recovery, depletion mechanism, pressures) data;

     The planet earth is old and beat up, and today's environmental condition (weather, volcanic and earthquake activity, meteor impacts, etc.) is probably a mild version of what was happening when many "valuable CIOs" were deposited ... or erupted ... or crash landed.  It can be difficult for the interpretation team to maintain educated and open minds, about the ancient history of the planet Earth.




Answer to #11 (Is a volume visualization workstation [e.g., VoxelGeo, GeoViz, Magic Earth, etc.] required for D3DSP interpretation?)

     Yes.  If no computer-workstation volume visualization-measurement-analysis capabilities are available, then the D3DSP cannot be used to its fullest commercial advantage. Certainly, conventional "structure" and "amplitude" mapping can be performed using a D3D-impedance volume, but it will not be a full D3DSP volumetric product. And it will not be nearly as easy to generate, or as thoroughly evaluated, or ultimately as valuable, as results arising from the use of one of the many "volume visualization" software packages now available.




Answer to #12 (What roles can [or should] a petroleum-industrial geophysicist, geologist, and engineer, play in the profitable application of the D3DSP?)

     There are some clear recommended roles for each of the disciplines, and probably even more, once the D3DSP has been embraced as an accepted tool for looking into the opaque earth.  Many concepts will be new to the uninitiated. Two long-standing habits will be understandably difficult to break:

1. Searching for traps in a faulted, layered earth. The idea of searching directly for signs of hydrocarbons is not new, but looking for hydrocarbon-filled, buried "3D objects", by understanding the effect of oil and gas on the acoustic impedance (density multiplied by velocity) of normally brine-filled rocks, will definitely be a new and uncomfortable change. Petroleum is lighter and slower than water, and tends to inhibit the destruction of porosity by many diagenetic processes (aka cementation); these facts make relatively low-impedance CIOs with the right size, shape, and relationship to nearby logged zones potentially attractive candidates for D3D target reservoirs ... whether or not a conventional trap can be identified.  It will be difficult to break the habit of searching for "traps" and "prospects" (short for "prospective hydrocarbon traps"), because the presence of a mappable "trap" has long been one of the "Three Commandments" for an acceptable exploration drilling prospect:  Reservoir, Trap (aka Seal), and Hydrocarbons (aka Hydrocarbon-Presence and Timing-of-Migration).

2. Incorporating a large quantity of validated, quality-checked, non-seismic, digital data into the seismic volume.  See the answer to FAQ #10, above, for a list of some of these data types.  The D3DSP cannot be used to minimize risk and maximize per-well recoveries without proper calibration using surrounding (and imbedded) information. And this requires the recognition of the value of the Computer Scientist (or Information Technology Engineer) in the successful search for petroleum, particularly onshore in the U.S. today.

     The D3D geophysicist should study the techniques and pitfalls of the D3DSP. He should collect and understand all of the various types of non-seismic information, supply them to the D3D-seismic processor (and the D3D-seismic acquisition contractor, when possible) ... and then monitor their work, continuously and conscientiously. The measurement and evaluation (interpretation) phase should be performed in a collaborative, computer-assisted geologist-geophysicist-engineer team environment.


     The D3D geologist should educate the geophysicist and processor about the types of valuable objects being sought, and help them overcome ingrained "layered-earth" assumptions. The geologist will help identify and evaluate seeded CIOs, as well as assist in the prediction of reservoir age and type, logged thickness, depth, and total volume (from the D3D-impedance VOXEL analysis).


     The D3D engineer should educate the above teammates about the rock and fluid physical parameters (density, velocity, porosity, HC saturations, cementation, etc.) and provide a range of estimated Recovery Factors for the seeded CIOs (target reservoirs). The engineer will also run "economics" (a conventional statistical analysis of the risk-reward profile for a business venture) that incorporate the D3DSP volumetric analyses.
