Bootstrapping Machine-Learning based Seismic Fault Interpretation

Steve Purves, Behzad Alaei, Erik Larsen

Earth Science Analytics AS, Prof. Olav Hanssens vei 7A, 4021 Stavanger, Norway

AAPG ACE, Salt Lake City, USA, May 2018

Seismic interpretation presents a rather unique problem. We have a great density of data within seismic datasets: 2D, 3D, prestack, multicomponent, broadband; a wealth of measurements. However, we have very little in terms of interpretation on which to train machine-learning models. This "labelled" data, as the term goes, is scant and insufficient for training a general-purpose seismic interpretation model, and this is unlikely to change, so we need to find a different approach. Being able to use existing machine-learning architectures on seismic images directly is very attractive, and results from using CNNs to detect salt bodies [Waldeland, 2017] are encouraging, even if they are unlikely to be matched away from the well-defined textural contrasts that we see in and out of salt. A very different approach is pursued by [Araya-Polo, 2017], who explore how to teach a learning system how geological faulting works. This is a very interesting direction and likely an important part of some larger, probably adversarial, multi-network system that balances what networks can 'see' in seismic against what other networks 'understand' about geology. In this work we focus on the 'seeing'.

Our approach is to have a machine-learning system interpret faults from seismic data in a supervised manner, where the only supervised inputs to the system are algorithm-generated labels (seismic attributes) and some constraints on how we expect faults to be represented in seismic data.
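
As a minimal sketch of what "algorithm-generated labels plus constraints" could mean in code, the snippet below thresholds a precomputed fault-likelihood attribute into a binary training mask and applies one simple geometric constraint. The file names, the 0.7 threshold, and the use of NumPy/SciPy are illustrative assumptions, not details from this study.

```python
import numpy as np
from scipy.ndimage import binary_opening

# Illustrative file names; both volumes are assumed to be co-registered 3D arrays.
# The seismic cube provides the network inputs; the attribute provides the targets.
seismic = np.load("seismic_cube.npy")               # (inlines, xlines, samples)
fault_likelihood = np.load("fault_likelihood.npy")  # attribute values in [0, 1]

# Algorithm-generated labels: threshold the attribute into a binary fault mask.
# The 0.7 cut-off is an assumption and would be tuned per survey in practice.
labels = fault_likelihood > 0.7

# One simple "constraint" on how faults appear in seismic: remove isolated
# label voxels so the supervised targets keep only laterally continuous picks.
labels = binary_opening(labels, structure=np.ones((1, 3, 3))).astype(np.float32)
```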

We focus on faults, although the broader goal is to determine geological structure from seismic data. Our primary fault-attribute inputs are derived from "fault likelihood" analysis [Hale, 2012], with which we can generate dense fault-probability and fault-orientation volumes over a dataset. We use these data to train a supervised system such as a CNN or DCGAN (a minimal sketch of such a learner follows below). An immediate question arises: by training on seismic attributes, will the learner simply learn to reproduce the same seismic attributes?
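
For illustration, one way such a supervised learner could be set up is sketched below as a small PyTorch CNN trained on 2D seismic patches against the attribute-derived masks; the architecture, loss, and hyper-parameters are placeholders and not those used in this work.

```python
import torch
import torch.nn as nn

# A deliberately small fully convolutional network mapping a seismic patch to
# per-pixel fault logits; a stand-in for the CNN/DCGAN discussed in the text.
class FaultNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # one logit per pixel
        )

    def forward(self, x):
        return self.net(x)

model = FaultNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# patches: (batch, 1, H, W) seismic amplitudes; targets: attribute-derived masks.
def train_step(patches, targets):
    optimiser.zero_grad()
    loss = loss_fn(model(patches), targets)
    loss.backward()
    optimiser.step()
    return loss.item()
```

Pixel-wise binary cross-entropy against the attribute mask mirrors the dense fault-probability volumes described above; a DCGAN-style adversarial loss could replace it without changing the data flow.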

A second question follows: how do we enable a system to generalize beyond the machine-generated input data? The goal of this work is to explore concepts from the field of semi-supervised learning and how they might be applied to "bootstrap" training schemes for seismic with other measurements such as attributes. In this work we attempt to answer these questions and hope to drive further study in the community.
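
One semi-supervised idea that could serve as such a bootstrap is self-training (pseudo-labelling): train on the attribute-derived labels first, then fold the model's own high-confidence predictions on unlabelled data back into the training pool. The sketch below, reusing the network from the previous snippet, is illustrative only; the confidence threshold and loop structure are assumptions rather than a description of our system.

```python
import torch

# Self-training sketch, reusing FaultNet / train_step from the previous block.
# labelled: iterable of (patches, targets) built from attribute-derived labels.
# unlabelled: a batch of seismic patches with no labels at all.
def bootstrap_round(labelled, unlabelled, confidence=0.9, epochs=5):
    # 1. Fit the model on the current labelled pool.
    for _ in range(epochs):
        for patches, targets in labelled:
            train_step(patches, targets)

    # 2. Predict on the unlabelled patches and keep only pixels the model is
    #    confident about, in either direction (fault / no fault).
    with torch.no_grad():
        probs = torch.sigmoid(model(unlabelled))
    pseudo_labels = (probs > confidence).float()
    confident = (probs > confidence) | (probs < 1.0 - confidence)

    # 3. The pseudo-labels (masked by `confident`) rejoin the labelled pool for
    #    the next round, letting the learner move beyond the original attribute.
    return pseudo_labels, confident
```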