Supercomputer analysis of fracking

Shawn Bennett says big data analysis will be used in oilpatch applications. Photo by Brian Zinchuk

The United States Department of Energy has, for the last several decades, routinely been in competition for operating the most powerful supercomputers in the world.

In June 2018, it fired up Summit, which, according to NBC News, “has been clocked at handling 200 quadrillion calculations a second (or 200 petaflops). That's more than twice as fast as the previous record-holder, China’s 93-petaflop Sunway TaihuLight, and so fast that it would take every person on Earth doing one calculation a second for 305 days to do what Summit can do in a single second.”
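
The arithmetic behind that comparison is easy to verify. A quick check, assuming a 2018 world population of roughly 7.6 billion (a figure the article does not state):

```python
# Verify the NBC News comparison: how long would all of humanity,
# at one calculation per person per second, take to match a single
# Summit-second? The population is an assumed 2018 figure.
summit_flops = 200e15                  # 200 petaflops
population = 7.6e9                     # people, 1 calculation/second each
seconds_needed = summit_flops / population
days_needed = seconds_needed / 86_400  # 86,400 seconds per day
print(f"{days_needed:.0f} days")       # ~305 days, matching the quote
```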

But what does that have to do with oil? Shawn Bennett, deputy assistant secretary for the Office of Oil and Natural Gas at the U.S. Department of Energy (DOE), says the DOE is looking at applying its big data computational abilities to analyzing geology and completions in the oilpatch.

The 2018 Summit computer is capable of 10 times as many calculations per second as the Sequoia supercomputer, which the DOE still operates and which, according to the department, currently ranks 10th in the world. In total, the DOE operates five of the ten most powerful computers on the planet, and it is now looking at applying that computing power to the geology of the American oilpatch.

“When we’re looking at that big data, we’re trying to see if there is an opportunity for us to use supercomputing in oil and gas development,” Bennett said.

“Not on an individual company’s basis, but to unlock some of these questions that we have. When you look at predictive analytics and you look at big data, you need that very fast supercomputing power to potentially unlock some of these mysteries in the shale. So we are in the early stages of developing a program where we can hopefully utilize the supercomputer capacity to unlock some of these universal mysteries of oil and gas.”

When asked how soon the DOE could do this, he joked, “My boss asked how quickly we can get it done, too.”

“In order to compile the data, work it through with companies, and have that conversation, we have to gather that data, big data. That means a lot of data has to be acquired. So we’re in the very beginning stages of acquiring data and seeing if there’s an opportunity to start looking at different algorithms to go at it.

“It’s not going to be a next year thing. But hopefully, in the next few years, we’ll have some questions answered.”

As an example, he pointed to taking a subset of data from a basin to look for anomalies and similarities.

“There’s been a lot of data acquired by these companies over the last decade of field development. Being able to clean up that data, use that data, and start to see similarities and new predictive analytics through algorithms and physics-based analysis, we can hopefully increase the EUR through that big data approach, through these supercomputers,” Bennett said.
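
As an illustration of the kind of basin-level pass Bennett describes, the sketch below flags wells whose completion parameters deviate sharply from their peers and finds each well’s most similar neighbour. The attributes, thresholds, and synthetic data are hypothetical stand-ins, not the DOE’s actual data or methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for cleaned field data: one row per well,
# columns = lateral length (ft), proppant loading (lb/ft), EUR (MBOE).
wells = rng.normal(loc=[9_000, 2_000, 800],
                   scale=[1_500, 400, 150],
                   size=(500, 3))

# Standardize each column, then flag wells more than three standard
# deviations from the basin mean in any attribute: the "anomalies."
z = (wells - wells.mean(axis=0)) / wells.std(axis=0)
anomalies = np.where(np.any(np.abs(z) > 3, axis=1))[0]
print(f"{len(anomalies)} anomalous wells out of {len(wells)}")

# The "similarities": each well's nearest peer in standardized space.
dists = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=2)
np.fill_diagonal(dists, np.inf)
nearest_peer = dists.argmin(axis=1)
```

At DOE scale the same pass would run over millions of records with physics-based features rather than three synthetic columns; the sketch only shows the shape of the computation.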

“When you look at big data, we know, right now, what works. But ultimately we want to improve resource recovery, the EUR, the estimated ultimate recovery, of these wells. And doing that means going through massive amounts, reams and reams, of data. The problem with all these reams of data is that it takes months, even years, to compile and understand it. With those supercomputers, if we can do it in a more real-time manner, we could have real-time changes to the drilling program, whether it’s the drilling portion or completions, for each well.”
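
The batch-versus-real-time shift Bennett describes can be made concrete with a streaming calculation: rather than compiling months of data and analyzing it afterward, summary statistics are updated as each measurement arrives and deviations are flagged immediately. A minimal sketch using Welford’s online algorithm, with a hypothetical rate-of-penetration feed standing in for live drilling data:

```python
class OnlineStats:
    """Running mean/variance via Welford's algorithm, O(1) per update."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def std(self) -> float:
        return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0


stats = OnlineStats()
# Hypothetical rate-of-penetration readings (ft/hr); the last one drops
# sharply and is flagged the moment it arrives, not months later.
for rop in [55.0, 57.2, 54.8, 56.1, 40.3]:
    if stats.n > 3 and abs(rop - stats.mean) > 3 * stats.std:
        print(f"flag: ROP {rop} deviates from running mean {stats.mean:.1f}")
    stats.update(rop)
```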