By SUSAN LAHEY
Reporter with Silicon Hills News
Big data is only useful when it can be turned into something people can see. That seemed to be the bottom line on the panel hosted by TechAmerica Foundation for its Big Data Roadshow on Energy.
Brian Jones, a principal of SAP Energy and Utilities, moderated the panel. He explained that big data and visualization were responsible for much of the U.S.’s ability to find sources of oil and gas previously thought impossible to extract. SAP’s in-memory database platform, HANA, can process raw, structured and unstructured data up to 30,000 times faster than most other applications, he said, and that processing speed is also speeding up decision making.
“Because of the big data insights we’re able to figure out where, when and how to explore for resources,” through hydraulic fracturing and other methods, Jones said. As a result, he said, the U.S. is projected to surpass Saudi Arabia as an exporter of crude oil by 2025.
But to make that data useful, it has to be translated into visual images that people can actually use to solve problems.
“The data themselves are not actually a solution to anything, they’re more of a problem,” said Michael Baldea, assistant professor in the University of Texas Department of Chemical Engineering. When it comes to studying energy, he said, physical systems such as power plants and wind turbines all generate data. And recent research into household energy usage shows that the data coming from different households varies widely, much more widely than commercial energy use. A popular new format for capturing and graphing this wildly disparate data, Baldea said, looks to him like hair caught in a drain.
On the other hand, when people can visualize their energy use in a way that’s relevant to them, that visibility is likely to change behavior. If thermostats, for example, measured energy use in dollars rather than kilowatt hours, which no one can picture, Baldea thinks it would have a dramatic effect on how people conserve or use energy.
John Boisseau, director of the Texas Advanced Computing Center (TACC) at UT, explained that TACC’s visualization display system is one of the largest and highest-resolution tiled displays in the world, helping scientists view synthetic images of their data. That visualization helps them understand and work with enormous data sets that would otherwise be too difficult to extract meaning from. Research under way includes converting CO2 into a substance used in industrial applications and studying brain tumor growth and brain tumor surgery.
UT’s visualization resources, with names like Stallion, Mustang and Stampede, are available to scientists not only in the U.S. but internationally to further research.
Visualization also aids the energy and resource industries in other ways, such as overlaying visual representations of data on countries where companies have oil operations at risk, whether from people who merely want to destroy the oil for political reasons or from those who want to steal and sell it.
David A. Reynolds, chief of staff for the intelligence and security sector of BAE Systems, said the company may combine political data, meteorological data (which reveals something about people’s behavior during certain seasons and weather events), social media intelligence and many other layers of information about an area to ascertain the risks. Some of these areas may be in remote or rural parts of a country where data from news sources is scarce.
Part of the problem, he said, is that finding the relevant data in the noise has always been like finding a needle in a haystack. As a former intelligence analyst for the CIA, he said the big data problem is not new to him or anyone in the intelligence community. But with so many organizations and individuals now generating and analyzing data, it’s as if the haystack itself has expanded exponentially.
The panel took place at the AT&T Executive Education and Conference Center Thursday morning.