Artificial Intelligence – Neural Networks Applications in Seismic Prospecting




Introduction

Large volumes of hydrocarbons remain to be found in the world. Finding and extracting these hydrocarbons is difficult and expensive. We believe that under-utilization of data, and of the existing subsurface knowledge base, is at least partly responsible for disappointing exploration performance. Furthermore, we argue that the incredibly rich subsurface dataset available can be used far more efficiently to deliver more precise predictions, and thus to support more profitable investment decisions during hydrocarbon exploration and production.

In this section we will argue that Artificial Intelligence (AI), i.e. Machine Learning-based technology that leverages algorithms which learn and make predictions directly from data, represents one way to contribute to exploration and production success. One key advantage of AI is its ability to efficiently handle very large volumes of multidimensional data, saving time and cost and thereby allowing human resources to be deployed to other, perhaps more creative tasks. Another advantage is its ability to detect complex, multidimensional patterns that are not readily detectable by humans.

We will show in detail how Deep Neural Networks can automate a step in seismic velocity analysis that usually takes days to complete manually. First comes a brief section describing the velocity analysis process; then we integrate that process with Neural Networks and Deep Learning.

Seismic Velocity Analysis

Understanding subsurface velocities is very important: they are indicators of various geological features and of whether hydrocarbons are present. With an understanding of these subsurface features we can interpret the traps where hydrocarbons or gases may be present.

A seismic source and a receiver are placed on the surface. The source produces a wave which penetrates the earth, is reflected or refracted at boundaries beneath the surface, and is recorded back at the receiver. The recorded data carry information about subsurface structural variations, which we then interpret using various methods. Seismic Velocity Analysis is the process of estimating subsurface velocities at different depths from these recordings.
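For a single flat reflector, the recorded reflection time at offset x follows the hyperbolic normal-moveout relation t(x) = sqrt(t0^2 + x^2/v^2), where t0 is the zero-offset time and v the stacking velocity. The short Python sketch below, with illustrative values (not from the original data), shows how a trial velocity maps offsets to a moveout curve:

import numpy as np

def nmo_traveltime(offset_m, t0_s, v_ms):
    # Hyperbolic normal moveout: t(x) = sqrt(t0^2 + (x / v)^2)
    return np.sqrt(t0_s**2 + (offset_m / v_ms)**2)

# Illustrative values: 1.2 s zero-offset time, 2500 m/s trial velocity.
offsets = np.linspace(0.0, 3000.0, 7)  # receiver offsets in metres
print(nmo_traveltime(offsets, t0_s=1.2, v_ms=2500.0))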

We will use Semblance Curves to obtain the stacking velocities from the seismic section. Semblance peaks are picked by a Neural Network trained on previously hand-picked data.

Semblance Curves

Semblance analysis is a process used in the refinement and study of seismic data. Used alongside other methods, this technique makes it possible to greatly increase the resolution of the data despite the presence of background noise. The refined data are usually easier to interpret when trying to deduce the underground structure of an area.

A Semblance Curve has velocity on its horizontal axis and time on its vertical axis, with values ranging from 0 to 1. Our goal is to find the maximum value for each time unit on the vertical axis. In theory there is exactly one maximum per time unit, but in practice noise in the data produces bands containing many local maxima. It becomes hard to tell which value is the correct velocity, so each candidate must be tried to see which one best flattens the seismic traces. Manually picking these values is a very tedious job, so we have trained a neural network to pick the maximum in each time unit automatically.
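As a naive baseline (not the trained network described later), picking the velocity with the highest semblance value at each time sample is a one-line NumPy operation; the panel here is a random stand-in for real data:

import numpy as np

# Stand-in semblance panel: rows = time samples, columns = trial velocities.
n_t, n_v = 1500, 100
velocities = np.linspace(1400.0, 4500.0, n_v)  # trial velocity axis (m/s)
semblance = np.random.rand(n_t, n_v)           # replace with a real panel

# Naive pick: velocity of the largest semblance value per time sample.
picked = velocities[np.argmax(semblance, axis=1)]  # shape (n_t,)

On noisy data this argmax jumps between neighbouring peaks, which is exactly the instability the trained picker is meant to avoid.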

Generating Semblance Curves

Semblance Curves were generated using Madagascar, an open-source software package. The data we used, from the Viking Graben region, are also openly available. Instructions on how to generate Semblance Curves are provided on the official Madagascar website. Madagascar comes with an API for attaching its processing flows to custom programs in many languages; in our case, we attach it to our TensorFlow neural network written in Python.
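As a hedged sketch of that hand-off, the snippet below reads an RSF file into a NumPy array using m8r, the Python interface that ships with Madagascar; the file name is hypothetical and the exact API may vary between Madagascar versions:

import numpy as np
import m8r  # Madagascar's Python API (ships with the software)

semb = m8r.Input('semblance.rsf')  # hypothetical semblance-scan output
n1 = semb.int('n1')                # fastest axis, e.g. time samples
n2 = semb.int('n2')                # second axis, e.g. trial velocities
panel = np.zeros((n2, n1), dtype=np.float32)
semb.read(panel)                   # fill the NumPy array from the RSF file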

Deep Neural Network

The data that make up the semblance curve are loaded into a NumPy array. A set of hand-picked peaks serves as our training data set.

We created a simple Multi-Layer Perceptron with 4 hidden layers and used the Adam optimization algorithm to fit the parameters, learning from the hand-picked values in our data set. For this we used the following TensorFlow call:
optimizer=tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
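
The post only shows the optimizer line; the sketch below shows how the surrounding four-hidden-layer perceptron might be wired up in the same TensorFlow 1.x style. Layer widths, input size and class count are illustrative assumptions, not the original values:

import tensorflow as tf  # TensorFlow 1.x API, matching tf.train.AdamOptimizer

n_inputs, n_classes = 100, 100  # assumed: semblance samples in, velocity bins out
learning_rate = 0.001

X = tf.placeholder(tf.float32, [None, n_inputs])
Y = tf.placeholder(tf.float32, [None, n_classes])

# Four fully connected hidden layers (widths are assumptions).
h = X
for units in (256, 256, 128, 128):
    h = tf.layers.dense(h, units, activation=tf.nn.relu)
logits = tf.layers.dense(h, n_classes)

cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=logits))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)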

The data set has 2400 CMP gathers with 1500 time units in each gather. A gather is a collection of seismic traces that share some common geometric attribute; in our case we use common mid-point (CMP) gathers (see figure below).

The data set is large: training took 15 hours. After training, the parameters were tested on the Nankai data set (which ships with Madagascar); the network produced its output in 10 minutes and achieved an accuracy of 85.67%.

The Python source code for the deep neural network is available via the link in the references below.

The graphs below show the actual velocities recorded for the region, the velocities estimated by the Neural Network, and the learning curve of the network.

Conclusion

The graphs above show that the estimated velocities are very close to the actual velocities. Training the neural network takes hours, but once trained it reproduces results in minutes. This saves the days otherwise spent manually picking peaks to obtain velocities, greatly speeding up the process.

References
1. VG Data, Madagascar
2. Data from the Viking Graben Region
3. Gather (seismic data)
4. Neural Networks Applications in Seismic Prospecting: neural network code



Everyday Examples of Machine Learning Applications

Artificial Intelligence and Machine Learning are among the hottest topics in the industry today. Robots, self-driving cars, intelligent chatbots and many other innovations are coming to our work and life.
In this post we will look at a few less-known machine learning applications that were covered in previous posts. We will see how machine learning impacts our work and life and how we can benefit from it.

Artificial Intelligence – Neural Networks Applications in Seismic Prospecting

Machine learning allows us to automate processes in the work environment, even complex ones like seismic velocity analysis, which usually takes days to complete manually. Neural Networks and Deep Learning can speed such a process up and save the days otherwise spent manually picking peaks.
For more details refer to the Neural Networks Applications in Seismic Prospecting post, with its example of a Multi-Layer Perceptron.
As a result of this saving, workers can use their time for more interesting tasks. Thanks to AI and ML, jobs will become more interesting and creative, though they will require more skills from people.

Correlation Data Analysis Between Food and Mood


In the post Machine Learning for Correlation Data Analysis Between Food and Mood, machine learning was applied to estimate the correlation between eating sweet food and mood state. A moderate correlation (0.4) was detected, along with a delay of 5-6 days between food intake and the change of mood. This corresponds with the observation that mood swings may appear several days after eating sweet food, not on the same or the next day.
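A delayed correlation like this can be estimated by shifting one daily series against the other; the pandas sketch below uses made-up series as placeholders, and the original post's exact method may differ:

import numpy as np
import pandas as pd

# Placeholder daily series: sweet-food intake and a mood score.
days = pd.date_range('2018-01-01', periods=120, freq='D')
sweets = pd.Series(np.random.rand(120), index=days)
mood = pd.Series(np.random.rand(120), index=days)

# Correlate today's mood with sweets intake lagged by 0..7 days.
for lag in range(8):
    print(f"lag {lag} days: r = {mood.corr(sweets.shift(lag)):.2f}")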
How can we benefit from this? By controlling food we can, to some degree, control our mood. It is one thing simply to know that a food is bad and should be avoided or minimized; it is another to know the numbers: in how many days will the effect appear, and how strong is the impact? The latter gives more motivation to act on food choices, and helps avoid excuses like "yes, I know it is a bad thing, but maybe a small quantity does not count".

Inferring Causes and Effects from Daily Data

Sample of data after one-hot encoding

In the post Inferring Causes and Effects from Daily Data we applied machine learning techniques to learn relationships within data. Here our interest is the data generated by our own actions.

When doing different activities, we might be interested in how they impact each other. For example, if we visit different links on the Internet, we might want to know how this impacts our motivation for doing specific things. In other words, we are interested in inferring the importance of causes for effects from our daily activity data.

That post looks at a few ways to detect relationships between actions and results using machine learning algorithms and Python. We wrote Python code applying two machine learning algorithms to estimate the importance of our features (or actions) for our Y variable.
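As an illustration of the idea (the exact algorithms and data in the original post are not reproduced here), the scikit-learn sketch below estimates feature importance with a random forest and, for comparison, a linear model:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Placeholder one-hot encoded daily actions (columns) and outcome Y.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 5)).astype(float)
y = X @ np.array([0.5, 0.1, 0.0, 0.3, 0.0]) + rng.normal(0, 0.1, 200)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("forest importances:", forest.feature_importances_)

linear = LinearRegression().fit(X, y)
print("linear coefficients:", linear.coef_)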

As more data are collected and become available, we will increasingly use machine learning to choose better actions.

Topic modeling


We use topic modeling to discover the topics that occur in a set of documents. Topic modeling applications help us organize collections of text documents (posts, search results, articles) and provide a quick overview of their contents along with useful insights. Machine learning offers different methods and modules for this task. For example, the textacy module has a lot of functionality for processing data after NLP has been applied, which is why the Python textacy module looks very promising.
This is also why textacy was used for topic modeling in the post Topic Modeling Python and Textacy, where it proved easier to use than other modules such as gensim. This is a trend we can see: as new modules come out, we can do more with less.
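As a generic illustration of the technique (using scikit-learn rather than textacy, whose API has changed across versions), the sketch below extracts two topics from a few toy documents with NMF:

from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "seismic velocity analysis with neural networks",
    "deep learning speeds up seismic processing",
    "food intake and mood correlation study",
    "sweets may affect mood after several days",
]

tfidf = TfidfVectorizer(stop_words='english')
dtm = tfidf.fit_transform(docs)

nmf = NMF(n_components=2, random_state=0).fit(dtm)
terms = tfidf.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")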

Conclusion

The above four examples of machine learning applications confirm that in the coming years more processes will be automated, some manual labor will shift to more creative and interesting work, and we will extract more insights for actions and decisions from our data.

References
1. Topic modeling