Here at Ars, when we write about assembling computing clusters in the cloud, it tends to be on a grand scale. Think endeavors in high-performance computing (HPC) on Amazon’s Elastic Compute Cloud (EC2), like the time in 2013 when a chemistry professor and the software company Cycle Computing assembled 156,314 cores for an 18-hour run that reached a theoretical speed of 1.21 petaflops. Or when simulation software firm Schrödinger rented 50,000 cores on EC2 in 2012 for $4,829 per hour.

But sometimes just several dozen cores from a computing fairy godmother will do. That was the case for an undergraduate Hyperloop team from the University of California, Irvine (UCI). The team assembled in 2015 to compete in a series of contests sponsored by SpaceX after the company's CEO, Elon Musk, drew up a whitepaper envisioning a super-fast form of transportation that ran on magnetic skis in a low-pressure tube. After UCI's team, called HyperXite, won a technical excellence award in an early 2016 design competition, the team had to actually build the thing in time for the January 2017 pod contest at SpaceX's headquarters in LA. This meant a lot of computer modeling would have to be done.

Around that time, Nima Mohseni, a mechanical and aerospace engineering undergraduate at UCI, joined HyperXite as the team's simulation lead. He told Ars in a phone call that HyperXite originally looked at doing its modeling on a 24-core system owned by the university, which limited students to 72 hours of compute time per project. But HyperXite was sponsored by Microsoft (among others), and the software giant intervened to introduce the undergraduates to the people at Cycle Computing.

Cycle Computing takes computing capacity from cloud services and provides its clients, often research teams and private companies, with software to enable large-scale simulations in everything from risk modeling for life insurance disbursements to testing potential solar cell materials. For HyperXite, Cycle Computing helped assemble a 128-core cluster on Microsoft Azure to handle the necessary data and compute workload. To do the actual modeling, HyperXite ran the simulation software ANSYS on the Azure instances, each of which had 16 Intel Xeon 2.4GHz cores and 7GB of RAM per core.
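For a rough sense of scale, the article's figures pin down the cluster layout. The short Python sketch below is just back-of-the-envelope arithmetic on those published numbers; the eight-instance count is inferred from them rather than stated by Cycle Computing.

```python
# Back-of-the-envelope cluster math from the figures in the article.
# The eight-instance count is inferred (128 cores / 16 cores per
# instance); the article does not state the instance count directly.

CORES_PER_INSTANCE = 16   # Intel Xeon, 2.4GHz, per the article
RAM_PER_CORE_GB = 7       # per the article
TOTAL_CORES = 128         # the Azure cluster Cycle Computing assembled
UNIVERSITY_CORES = 24     # the shared campus system HyperXite started with

instances = TOTAL_CORES // CORES_PER_INSTANCE               # -> 8
ram_per_instance_gb = CORES_PER_INSTANCE * RAM_PER_CORE_GB  # -> 112 GB
total_ram_gb = instances * ram_per_instance_gb              # -> 896 GB

print(f"{instances} instances x {CORES_PER_INSTANCE} cores = {TOTAL_CORES} cores")
print(f"{ram_per_instance_gb} GB RAM per instance, {total_ram_gb} GB aggregate")
print(f"{TOTAL_CORES / UNIVERSITY_CORES:.1f}x the campus system's core count")
```

That last line works out to about 5.3x, which matches Mohseni's "more than five times" figure below.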

This allowed the team to model the aerodynamics of the pod as well as its braking system. Just after the 2016 design competition, HyperXite decided to revamp the entire shape of the pod because its original design had been too heavy. The team decided to change the shell of the pod from aluminum to carbon composites, but before building a first draft of this shell on a real pod, Mohseni ran simulations to make sure the change would let the pod keep its structural integrity. "Right when we changed one part, something else had to change," Mohseni told Ars. With the new shell, some parts of the pod were too weak and would break when the pod braked. Mohseni said the team built several iterations that broke in real life, too, but having a lot of computing capacity on hand allowed the team to iterate faster and throw out clearly unworkable shell shapes.

"If we did not have access to the computational servers, we’d have to run smaller simulations and not get as accurate a result," Mohseni told Ars. "That’s the big difference." Having more than five times the computational capacity cut work that would have taken weeks down to days, and allowed the team to model its pod in far more detail.

Jason Stowe, CEO of Cycle Computing, told Ars that HyperXite's experience isn't unusual outside of academia. "What we’ve noticed, and the thing that Nima mentioned, was that by getting answers back quicker, people tend to benefit from that in two ways," Stowe said. "First, you’re accelerating the time to answer... By being able to get answers back far quicker, you can do more iterations in less turnaround time. Second, you can say 'great, we’re done' and get [your project] done faster." Stowe noted that Cycle Computing helped HyperXite pair ANSYS with the Azure cores, but if UCI had had an internal cluster it wanted to add computing resources to, the HPC software company could have helped with that as well.

Ultimately, the HyperXite team took its pod through the preliminary tests at the SpaceX Hyperloop competition, but it didn't get a chance to run the pod in the near-vacuum environment due to time limitations. (In fact, only three of the 27 competing teams got vacuum track time, because the three-quarter-mile track had to be depressurized and repressurized between each run, a process that took an hour or more.)

"It more of an educational experience," Mohseni said, "it was [about] getting a whole team together of over thirty, forty students," to teach them how to become engineers. And with sponsors donating an awful lot of computing resources, HyperXite was able to do just that.
