(CNN) -- At a supercomputing convention that just wrapped up in New Orleans, Louisiana, the potent effects of Hurricanes in big plastic cups paled in comparison with the raw power of a tiny silicon chip.
"We're geeking out about exascale computing right now," John Shalf, the Advanced Technologies group leader at Lawrence Berkeley National Laboratory, said of the computer architects, software developers and engineers in the high-performance computing community.
Exascale computing is computing at 1,000 times the pace of the current fastest supercomputer.
"We have to completely rewrite the book on how we do computing," Shalf said. "Every 20 years, you get this chance to reinvent the universe, and that is the number one thing that everyone is excited about. We have to come up with the new world order in computing,"
And they want to do it in 10 years.
Today's supercomputers work at the "peta" scale, meaning they can measure their performance in terms of 10^15 FLOPS (floating-point operations per second). Exascale computers would process at the rate of 10^18 FLOPS.
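As a rough illustration of that jump, the two figures above work out to exactly the thousandfold speedup mentioned earlier (the one-second workload below is an assumed example, not from the article):

```python
# Scale comparison between petascale and exascale machines,
# using the 10**15 and 10**18 FLOPS figures from the article.
PETAFLOPS = 10**15  # floating-point operations per second, petascale
EXAFLOPS = 10**18   # floating-point operations per second, exascale

# Exascale is 1,000 times petascale.
speedup = EXAFLOPS // PETAFLOPS
print(speedup)  # 1000

# Illustrative: a job an exascale machine finishes in one second
# would keep a petascale machine busy for about 17 minutes.
ops = EXAFLOPS * 1            # one second of exascale work
minutes_at_petascale = ops / PETAFLOPS / 60
print(round(minutes_at_petascale, 1))  # 16.7
```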
"We're talking about machines with millions of processors where each processor has 1,000 cores," said John West, special assistant to the director of the U.S. Army Engineer Research and Development Center Information Technology Laboratory.
"Not only is that going to be a huge change in our chips and our power architectures and our memory architectures, but we're probably going to have to completely redo the way we program these things as well. And 10 years is a really short time to come up with a new model of programming."
Modern computer chips contain several processing units, each called a "core." These days, four, six, eight, even 12 cores are common on one chip. Compare that with a supercomputer, which has hundreds of thousands of cores.
A supercomputer -- or its software, for that matter -- isn't something you can pick up at Best Buy.
"Fifty-six percent of the fastest computers in the world are being used in an industrial setting," like the automobile, energy, pharmaceutical and gaming industries, said Jack Dongarra, a distinguished professor at the University of Tennessee's department of Electrical Engineering and Computer Science.
Dongarra should know: He maintains a list of the 500 fastest computers in the world, according to his benchmark program.
This year, Dongarra's list is ruffling feathers. For the first time, a Chinese computer has surpassed the fastest American machines in FLOPS.
As the high-performance computing community grapples with the challenges of the leap to exascale, such as making supercomputers use far less power and devising programs that can test new processing capabilities, its members took a quick break to catch up with one another.
For 23 years now, the Supercomputing conference has allowed this community to gather and discuss its work. SC, as it's known, is not your average computer conference.
Although it does have a show floor for vendors with slick new products, along with technical papers, tutorials and workshops, what it mainly has is more Gordon Bell Prize winners per square foot than any other exhibit floor in the world.
SC10 (this year's SC) was "homecoming" for high-performance computing experts, Dongarra said.
"We get together once a year to exchange ideas and really show off what we have been doing over the course of the year," he said.
"You know, our community is really small. We did a back-of-the-envelope calculation a couple years ago that seems to have stuck, and we think it's about 100,000 people in the entire world that do what we do," West said.
"Now there are many, many, many more people that use our computers, but there's about 100,000 of us that build them, run them and program them. And that's a really small community when you think about 10 million C [computer language] programmers that are working on everything from games to SAP [software] installation."
More people than ever are working to surpass the limits of today's technology. Last week's SC10 had record-breaking attendance for the second year in a row.
Patricia Teller, a computer-science professor at the University of Texas at El Paso, feels there's a bit of a misconception about the high-performance computing community, especially for women. This is not a line of work that just any solitary computer geek should get into, she says.
"If anything, it's very different from [the solitary computer geek], because with such massive systems, it requires teams of people. Teams of scientists from different disciplines along with technologists are working together to solve what are called grand challenge problems," Teller said.
But Shalf, who created a supercomputer program that solves Einstein's equations about black holes, said it helps to be a bit of a nerd if you're going to get into high-performance computing.
"When we really get into the details is when we start to separate ourselves from the rest of the world -- when we start deep-diving into things that nobody else has the context for even conversing about. You can lose people that way," Shalf said.
"When I talk to my parents about what I'm doing, or my sister for that matter, and I get excited, 'Oh! I met the architect of this supercomputer!' and they say, 'Oh, that's nice, dear, does everybody have one of those?"
"And it's like, 'No, there's only three of them in the world,' and they say, 'Wow, must not be too successful then.' "