Super Science

Simulating Supersonic Shock-Wave/Turbulent Boundary Layer Interaction Using the NURION Supercomputer
Professor Sang Lee of KAIST’s Department of Aerospace Engineering published a paper titled “Direct numerical simulation of turbulence amplification in a strong shock-wave/turbulent boundary layer interaction” in the January 12, 2024 issue of Physics of Fluids. The article was selected as the “Editor’s Pick” of that issue.

Supersonic Aircraft

Professor Lee stated, “This paper is part of fluid physics research, and its applications pertain to all types of supersonic aircraft.” He added, “As this is a theoretical study, it may seem difficult to apply in the short term, but it can be implemented in the aerospace sector. Shock waves occur when jets travel at supersonic speeds. However, these waves give rise to flow instabilities.” Professor Lee used the term “shock-wave instability” to describe this phenomenon. Ideally, a shock wave would remain stationary and stable; however, he pointed out that in practice it does not stay fixed. He explained, “In some cases, the shock wave oscillates at low frequencies near the engine inlet before being pushed forward and dissipating. When this instability is amplified, the shock wave’s behavior becomes more severe. If the shock wave dissipates at the inlet, the primary compression of incoming air fails to occur, resulting in engine ignition failure.” He continued: “When such unstable flow is amplified inside a hypersonic aircraft engine, it can result in engine shutdown and ultimately cause the aircraft to crash. This type of unstable flow presents a critical challenge to the stable flight of supersonic aircraft.” Countries around the world are competing to develop not only supersonic but also hypersonic aircraft. The United States, China, and Russia have developed aircraft capable of Mach 5 speeds and are now aiming for even faster ones. Designing such high-speed aircraft requires a deeper understanding of the underlying fluid physics. Professor Lee emphasized, “To develop and operate supersonic or hypersonic passenger aircraft, our analytical understanding of shock-wave and turbulence interactions must be substantially higher than the current level.”

Shock Waves and Turbulence Intensity

Professor Lee explained that when a ramp with an angled surface is present on an aircraft flying at supersonic speed, a “primary shock wave line” is generated. “Ideally, the primary shock wave would remain fixed in position, but it does not,” he said. “There are multiple hypotheses regarding the cause, but none have been statistically or theoretically verified.” The instability in the shock-wave position is associated with the phenomenon of flow separation.1) When flow separation occurs, the flow patterns around the object change significantly, potentially destabilizing the flight. Professor Lee stated, “The low-frequency behavior of the primary shock wave is induced by the interaction between incoming turbulence and the shock wave, but the precise cause remains unresolved.” Noting the elevated turbulence intensity at the point of flow separation, he added, “This prompted us to investigate where turbulence intensity peaks and how to delineate those zones.”

Zones of Amplified Turbulence Intensity

What specific engineering question did Professor Sang Lee and Ph.D. student Yuju Kang2) pose at the start of this research? According to the literature, there exists a TKE3) hot spot where turbulence intensity surges in the shock-wave/turbulent boundary layer interaction zone.
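For reference, the turbulent kinetic energy (TKE) that defines these hot spots has a standard definition in turbulence theory (general background, not specific to this paper):

$$ k = \tfrac{1}{2}\left(\overline{u'^{2}} + \overline{v'^{2}} + \overline{w'^{2}}\right), $$

where $u'$, $v'$, and $w'$ are the fluctuating velocity components about the mean flow. A TKE hot spot is simply a region where $k$ is amplified well above the incoming boundary-layer level.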
Professor Lee remarked, “While the cause of the first hot spot is well established, previous studies have not adequately explained the discovery and formation mechanism of the second turbulence hot spot.” Based on their findings, Professor Lee and Yuju Kang concluded that while the intensity of the second hot spot is weaker than that of the first, the aggregation of shocklets4) contributes significantly to its formation. A shock wave is a surface of abrupt pressure change within a fluid, while a shocklet is a similar phenomenon generated by the same principles but with relatively lower intensity and localized occurrence. According to Professor Lee, although the presence of shocklets in supersonic turbulent flows was widely recognized among engineers, no prior research had proposed their potential correlation with the second hot spot. He explained, “This is because this type of physical phenomenon cannot be observed without ultra-high-resolution simulations, such as the ones we performed.” “We leveraged abundant computational resources from KISTI’s supercomputing systems, which allowed us to run simulations at extremely high resolution. This enabled us to identify shocklet aggregation near the zone where the second hot spot occurs.” Professor Lee continued: “The academic community found our discovery that shocklets are responsible for the surge in turbulence intensity at the second hot spot highly intriguing. They viewed it as a novel perspective, which I believe is why the editor of Physics of Fluids selected our paper as an ‘Editor’s Pick’.”

Identifying the Cause of the Second Hot Spot

The presence of a second hot spot, where turbulence intensity surges, was first reported in a 2020 paper by Jian Fang of the UK’s STFC Daresbury Laboratory, published in the Journal of Fluid Mechanics. Fang’s theory attributes the phenomenon to a combination of Kelvin-Helmholtz instability and flow deceleration, implying that its origin is closely associated with a free-shear layer.5) To verify whether the free-shear layer was the cause, Professor Lee and doctoral student Yuju Kang examined the vorticity6) levels. They observed that vorticity at the second TKE hot spot was markedly weaker than at the first, prompting them to conclude that the second hot spot was not caused by the free-shear layer. They spent several months contemplating the true cause. Professor Lee recalled that around December 2020, they first came to suspect shocklets, rather than the free-shear layer, as the cause of the second TKE hot spot. He explained, “After the end-of-year conferences, I sat down with Yuju Kang and examined the data thoroughly while formulating various hypotheses. Eventually, we identified signs of shocklet aggregation and resolved to investigate further.” Since then, they have continued running simulations to validate the shocklet hypothesis, utilizing KISTI’s supercomputing resources to acquire additional data. Professor Lee noted, “It underscored the importance of abundant computing resources. We kept submitting proposals to KISTI for access to computational resources, and each time we were selected, we could conduct even higher-resolution simulations.” Professor Lee joined the faculty at KAIST in December 2019 and has been applying for and utilizing KISTI’s computing resources since the following year.
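As background on the vorticity diagnostic used above (a standard fluid-dynamics definition, not specific to this study), vorticity measures the local rotation of the flow field:

$$ \boldsymbol{\omega} = \nabla \times \mathbf{u}. $$

A free-shear layer is a sheet of concentrated rotation and therefore shows up as strong vorticity; the markedly weak vorticity at the second hot spot is what allowed the team to rule the free-shear layer out.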
Shocklet

Professor Lee elaborated further, using Schlieren7) visualization of the simulation results: “Shocklets are weak shock waves generated by the complex surface corrugations along the turbulent boundary layer.” In the visualizations, turbulent structures called coherent structures can be observed within the flow field, and between them appear short white filaments, resembling threads fluttering in the wind. These exhibit far weaker compression intensity than conventional shock waves. Professor Lee noted, “Because they are weak, it was difficult to expect they would cause any meaningful changes in the flow field. However, we were able to identify them thanks to highly accurate numerical analysis tools.” He continued, “When we examined the visualization further, we discovered that, statistically, these shocklets aggregate at a specific point. The shocklets were clustering at a single local site, and the flow deceleration at this aggregation site had a significant impact on the formation of the second hot spot. That is essentially the core discovery of our paper.”

[Figure 1. Amplification rate of turbulent components in the direction of the flow.]
[Figure 2. Interaction between streamwise velocity fields and shock waves in the buffer layer.]

“We need more computing resources”

Professor Lee utilized computing resources provided through the KISTI Supercomputing Exploration for R&D Innovation program. He said, “This research required extensive computations, but even more computing resources from KISTI will be necessary for future work.” The simulations for this paper used a total of 380 million grid points. He estimates that approximately three billion grid points, nearly eight times the current scale, would be required for more detailed data analysis. Professor Lee commented: “While an eightfold increase might sound substantial, it is not considered large in the field of computational fluid dynamics.” Because resolution scales cubically in 3D, doubling the resolution along each axis (X, Y, and Z) results in an eightfold increase in the total number of grid points. Professor Lee explained, “A calculation involving three billion grid points would represent the largest-scale computation in existence in the field of shock-wave/turbulence interaction. We expect KISTI’s supercomputing center will help make this a reality.” He added: “When that happens, our team could produce leading research outcomes in the shock-wave/turbulent boundary layer interaction domain.” Professor Lee is currently conducting joint research with KISTI on multi-GPU parallelization and application of an incompressible turbulent flow solver. In preparation for deploying the software on KISTI’s sixth supercomputer, he is collaborating closely to advance large-scale computing technologies.
Developing a Catalyst for Green Hydrogen Production Using the NURION Supercomputer
Professor Do Hwan Kim of Jeonbuk National University (Department of Chemistry Education, School of Science Education) has developed a catalyst for green hydrogen production using the Korea Institute of Science and Technology Information (KISTI) supercomputer (Project Number: KSC-2023-CRE-0091). He designed a catalyst that can produce hydrogen from water cost-effectively and efficiently and published the results in the Chemical Engineering Journal in April 2024. In a press release issued by Jeonbuk National University, Professor Kim stated: “We aim to introduce this technology into real-world industrial applications through rapid commercialization efforts.” His remarks reflect confidence that the study has high potential for practical application. We met with Professor Do Hwan Kim at his laboratory at Jeonbuk National University to ask how he developed the catalyst that is expected to expedite the commercialization of green hydrogen technology.

Hydrogen’s Many Colors

Most hydrogen found on Earth exists in compounds with other elements. Water is the most representative case: a compound of hydrogen and oxygen bound together. To obtain hydrogen from water, the water must be broken down, and this process requires energy. The lower the energy input required, the more efficient the process. Catalysts serve to lower the energy barrier, enabling easier decomposition reactions. The term “hydrogen economy” has existed for some time. It refers to an economic system powered by hydrogen as an energy source. For example, Hyundai Motor Company’s hydrogen fuel cell vehicle, NEXO, has been on the road since 2018. The problem, however, lies in how the hydrogen fuel is sourced. Hydrogen, though inherently colorless, is classified by production method into grey, blue, and green hydrogen. Currently, the most widely produced form is grey hydrogen, primarily because of its low production cost. Grey hydrogen is obtained as a byproduct of petrochemical processes or through the reaction of fossil fuels, such as natural gas, with steam. The main concern with this process is the substantial emission of carbon dioxide: producing one ton of hydrogen emits 10 tons of carbon dioxide. This presents an ironic contradiction: while the hydrogen economy aims to reduce carbon emissions and achieve carbon neutrality, producing the hydrogen increases carbon dioxide emissions. Blue hydrogen is produced through the same methods as grey hydrogen, but the carbon dioxide produced is captured and stored instead of being released into the atmosphere. While blue hydrogen is more environmentally friendly than grey hydrogen due to lower carbon dioxide emissions, the carbon capture process adds significant cost to production. Green hydrogen is produced without emitting carbon dioxide, making it the most environmentally friendly form. Professor Do Hwan Kim stated, “The direction we need to take is toward green hydrogen. However, green hydrogen is more expensive to produce than the other forms. My goal is to reduce the production cost of high-quality green hydrogen and enhance Korea’s competitiveness in hydrogen production.”

[Figure 1. Conceptual diagram of water electrolysis. The cathode and anode are separated by a membrane placed between the two electrodes submerged in water. At the cathode, the hydrogen evolution reaction occurs, whereas at the anode, the oxygen evolution reaction occurs.]
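For reference, the textbook half-reactions behind Figure 1, written here in acidic form, show where the minimum cell voltage of 1.23 V discussed in the next section comes from (standard electrochemistry, not specific to this study):

$$\begin{aligned}
\text{Cathode (HER):}\quad & 2\,\mathrm{H^{+}} + 2\,e^{-} \rightarrow \mathrm{H_{2}}, & E^{\circ} &= 0.00\ \mathrm{V} \\
\text{Anode (OER):}\quad & 2\,\mathrm{H_{2}O} \rightarrow \mathrm{O_{2}} + 4\,\mathrm{H^{+}} + 4\,e^{-}, & E^{\circ} &= 1.23\ \mathrm{V} \\
\text{Overall:}\quad & 2\,\mathrm{H_{2}O} \rightarrow 2\,\mathrm{H_{2}} + \mathrm{O_{2}}, & E^{\circ}_{\mathrm{cell}} &= 1.23\ \mathrm{V}
\end{aligned}$$

Any real cell needs extra voltage (overpotential) on top of 1.23 V to drive a practical current; a good catalyst minimizes that excess.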
Searching for Alternatives to Precious Metals

When a voltage is applied across a cathode and anode submerged in water, electrolysis occurs: hydrogen is produced at the cathode and oxygen at the anode. For the reaction to occur, a minimum potential difference of 1.23 V between the two electrodes, known as the cell voltage, is required. The closer a catalyst brings the cell voltage to 1.23 V, the better it is, as less electrical energy is needed to drive the electrolysis reaction. Researchers are actively developing catalysts that can split water and generate hydrogen at ever lower cell voltages. Commercial catalysts rely on expensive precious metals such as platinum, iridium, ruthenium, and rhodium. The catalyst developed by Professor Kim’s team demonstrated a cell voltage of 1.58 V, outperforming commercial catalysts, which average 1.61 V. Moreover, it reduced the ruthenium content to just 0.3%, improving economic feasibility.

Synthesizing Catalysts

Electrochemical catalysts used in water electrolysis are synthesized by combining various substances. The material used by Professor Kim follows the A₂BC structure, formed by combining three elements: two atoms of element A and one each of B and C, with A and B being metals and C a non-metal. Professor Kim explained, “There are a vast number of possible combinations of metal and non-metal atoms, which naturally results in a surge in the number of related research publications.” He continued, “You have to determine which atoms to choose, what ratio to use, how to pair the metals and non-metals, and what crystal structure to apply. There are numerous factors to consider.”

The Evolution of Catalyst Research

Professor Kim began his catalyst research in 2019 by conducting theoretical calculations to explain why a catalyst developed by experimentalists exhibited superior performance. Over time, he progressed to participating as an equal partner from the conceptual stage, working alongside experimental researchers. When other research teams published experimental results, he would run simulations to verify whether the data aligned; in several instances, the simulations contradicted the published experimental results. His research has since evolved to the point where he independently develops new catalysts. “We now propose new catalysts,” said Professor Kim. “Through computer simulations, we can model and propose a wide range of catalysts.” It has become possible to propose new combinations within the A₂BC structure or to explore alternative configurations, such as AB₂C, adjusting the elemental ratios to identify high-performance catalysts for electrolysis. Professor Kim added: “In some cases, we identified extremely effective catalysts through simulation and suggested that experimental researchers synthesize them. However, many experimentalists struggled to validate the new materials developed through computational chemistry, as they rely on conventional research methodologies to produce new catalysts. Therefore, in 2022, our team also began conducting experimental research.”

Computational Challenges of Metallic Materials More Complex than Semiconductors

“To achieve good research results, I faced the challenge of having to compute large volumes of catalyst structures on my own,” said Professor Kim.
He added, “The KISTI Supercomputing Center granted me access to its supercomputing resources, which allowed me to overcome the limitations of a suboptimal research environment and focus fully on my work.” He continued, “Compared to semiconductors, metals have a higher number of electrons, and computing the electronic structure of these materials is extremely time-consuming. With only my local cluster, the computations would have taken far too long; utilizing KISTI’s supercomputing resources was therefore immensely helpful.” Why do metals have more electrons to account for than semiconductors? For semiconductors such as silicon or germanium, only four valence electrons per atom need to be considered to understand the material’s properties. In contrast, metallic materials require the consideration of many more electrons. “Sometimes it is necessary to consider not only the outermost orbitals but also electrons in deeper shells,” Professor Kim added. “This is why local clusters are insufficient. The total number of electrons increases drastically and, with it, the computational burden, making the work impossible to manage without a supercomputer.” He applied for KISTI’s supercomputing resource grants nearly every year, and when selected, he received free access to the system through KISTI’s support. He expressed his gratitude to KISTI for significantly accelerating the progress of his research.

Detailed Catalyst Synthesis Process

The catalyst developed in this study was synthesized through a three-step process. Step 1 involved annealing the material (a NiCo precursor) at 300 °C for 2 h. This yielded NiCo₂O₄ nanowires, comprising the two transition metals nickel and cobalt combined with the non-metallic element oxygen. Step 2 required transferring the synthesized nanowires into an autoclave (a high-temperature, high-pressure reaction chamber), where they were heated at 140 °C for 4 h. During this stage, nanosheets of nickel and iron, along with a trace amount of ruthenium, were deposited onto the nanowires, forming layers on their surfaces. Step 3 involved an etching process, in which selected surface atoms of the catalyst were deliberately removed to introduce structural defects. These defects—vacant atomic sites—play a crucial role in enabling catalytic activity within the core-shell architecture.

Hydrogen Evolution Reaction and Oxygen Evolution Reaction

To initiate water electrolysis, cathodic and anodic electrodes are immersed in water and a voltage is applied across them. The hydrogen evolution reaction occurs at the cathode, while the oxygen evolution reaction occurs at the anode. As described previously, the cell voltage—the potential difference required for current to flow between the two electrodes—is a key figure of merit for an electrolysis catalyst. Theoretical analysis using simulations further estimates the Gibbs free energy and the density of states relative to the Fermi level, and identifies the orbitals and atoms that primarily influence the material’s properties. Professor Kim noted that the structural stability of the synthesized catalytic crystal is also assessed theoretically.

[Figure 2. Catalyst development process followed by Professor Do Hwan Kim’s research team: the def-Ru-NiFe LDH/NiCo₂O₄ catalyst. Step 1 involves thermal annealing at 300 °C to synthesize NiCo₂O₄ nanowires (NCO NWs/CC) from nickel, cobalt, and oxygen precursors.
In Step 2, nanosheets of nickel and iron, along with a trace amount of ruthenium, are introduced onto the nanowires, yielding Ru-NiFe LDH. Step 3 involves an etching process, resulting in the final product: def-Ru-NiFe LDH.]

Furthermore, his research team performed calculations to determine the extent to which each constituent element contributes to the catalyst’s properties. For example, the team assessed how catalytic performance varies depending on whether crystal defects are present or absent. They also computed the catalytic properties when only the core is present and when only the shell is present. Durability tests were conducted to assess how long the catalytic reaction remains stable over time. After examining whether the catalyst’s performance degrades rapidly or remains stable over extended periods, they found that their hydrogen evolution catalyst maintained its initial performance even after 60 h of continuous operation. Through this process, the catalyst composition and structural configuration yielding the best performance were identified. Professor Kim stated, “Through computational modeling, we identified the optimal combination of metals and non-metals and designed crystal structures with high catalytic activity. We then theoretically predicted the catalyst’s water-splitting performance, synthesized the catalyst materials in the lab based on these predictions, and demonstrated the excellent performance of the synthesized catalysts through experimental validation. Theoretical modeling that precedes experimental synthesis makes for a more rigorous paper.” Considering the synthesis yield, the team believes that the practical production cost of the catalyst could be reduced to approximately half that of commercial products.

Patent Application

Professor Do Hwan Kim filed a patent for the newly developed catalyst in October 2024. He emphasized, “This novel catalyst shows strong commercialization prospects, particularly through collaboration with an on-campus startup founded by Professor Joong Hee Lee from the Department of Polymer Nano Science & Technology.” He plans to fabricate a prototype water-splitting stack incorporating the catalyst. Professor Kim stated, “This catalyst is easy to synthesize and highly reproducible. Each synthesis consistently yields identical results, with no anomalies.” He further remarked: “Most catalysts are used either as anode or cathode materials, but ours performs effectively on both electrodes. This means we can simplify the synthesis process by producing only one type of electrode material rather than two, thereby lowering production costs.” Wrapping up the interview, Professor Kim said, “While numerous studies have experimentally demonstrated high-performance electrocatalysts, there remains a lack of theoretical studies that explain the mechanisms of catalytic activity at the atomic or molecular level. If we can precisely describe those mechanisms, we could overcome the limitations of current catalyst performance and design novel catalysts. That is why the role of computational chemists is vital. I hope that more computational researchers will join the field of electrocatalysis, contributing to the advancement of next-generation energy materials.”
Exploring CO₂ Electroreduction Catalysis with the NURION Supercomputer
Professor Seoin Back of Sogang University has presented a methodology for developing electrocatalysts for the CO₂ reduction reaction with high activity and selectivity based on an artificial intelligence (AI) model. This research was published in the international journal ≪Nature Communications≫ on November 11, 2023, titled “Data-driven discovery of electrocatalysts for CO₂ reduction using active motifs-based machine learning.” Various methods have been investigated to reduce atmospheric CO₂, which is considered a major cause of global climate change. This study proposes a method of searching for electroreduction catalysts that convert CO₂ into useful compounds using simulations and AI. It is a major study conducted with the support of the Supercomputing Exploration for R&D Innovation (SERI) for Creative Research (KSC-2023-CRE-0134). Let us explore the research conducted using the KISTI supercomputer by a theoretical chemist working to solve climate problems.

Catalyst development research with AI

A catalyst is a substance that accelerates a chemical reaction without being consumed itself. “To develop catalysts, numerous experiments on various candidate materials are necessary,” Professor Back said. “Over the past 20 years, it has become possible to predict catalyst performance on a computer using first-principles calculations.” The established approach to catalyst development using computers is called “forward material design”: a theoretical chemist predicts catalyst performance on a computer, followed by validation by an experimental chemist. This was the method Professor Back used during his doctoral and first postdoctoral research. His current research focuses on using AI to make forward material design faster and more accurate. While traditional methodologies based on first-principles calculations predicted catalyst performance using only a few simple surfaces, he is developing a method to predict catalyst performance more accurately using thousands of surfaces. As it is impossible to perform first-principles calculations on such a large number of surfaces, AI is used. He uses the KISTI supercomputer for the first-principles calculations needed to train the AI models.

First-principles calculations

The term “first-principles calculation” may be unfamiliar to some readers. It refers to calculations based on the fundamental principles of quantum mechanics, without relying on experiments or empirical data. The Schrödinger equation is solved on a computer to represent the interactions between electrons and nuclei, and between the electrons themselves. Material properties such as structure, thermodynamics, electromagnetism, and optical behavior can be computed with first-principles methods, which is why they are widely used in chemistry, materials science, and physics. Various methods have been developed, but the most commonly used is Density Functional Theory (DFT). DFT employs quantum mechanics to calculate the distribution and energy of electrons in materials and molecules, allowing the prediction of whether certain molecules can exist, as well as their shapes and properties.

Traditional catalyst design methodologies are simple

A catalyst that converts CO₂ into other useful substances must activate the chemical reaction that performs the conversion. Moreover, the catalyst should selectively yield only the desired compounds.
To predict activity and selectivity, catalyst designers use factors such as the adsorption energies of carbon monoxide (CO) and hydrogen (H) as predictive indicators, so adsorption energy calculations are necessary to predict catalytic properties. “Traditional catalyst design methodologies mainly modeled only one type of surface, predicting catalyst activity through the calculation of adsorption energy on that surface,” said Professor Back. This methodology can deviate substantially from reality: simulating a single surface can give results very different from experimental validation. Professor Back’s recent research has developed a method that models a wide variety of surfaces to reduce the gap between simulation and experiment. Calculating the adsorption energy on all modeled surfaces is challenging; hence, this part has been substituted with an AI model. Adsorption, the binding of substances to a surface, is crucial because catalytic reactions occur on the surface of the catalyst, which makes adsorption energy a key factor influencing the reaction. The activity and selectivity of CO₂ reduction reactions are predicted using the adsorption energies of various species, with CO and H adsorption energies being good predictors, as revealed by Professor Jens K. Nørskov’s research team at Stanford University.

Why is modeling various surfaces important?

Even for a catalyst made of a single material, the structure of its surface can vary. For example, the most stable surface of a copper catalyst is called the (111) surface. If we prepare a copper catalyst without specific intent, most of the catalyst surface will be the (111) surface. However, other surface structures exist. The second most stable form is the (001) surface, which also contributes to the catalytic reaction. Reactions can also occur at the edges connecting the (111) and (001) surfaces. Therefore, when predicting the experimentally measured activity of catalysts, the effects of these diverse surfaces must all be considered. To date, theoretical chemists have typically considered a single surface, or selected a few, for modeling. Such models fall short of representing the diverse surface structures of a real catalyst. The limitation was due to the enormous computational cost of modeling every surface present in a real catalyst: even a catalyst particle as small as 5 nm contains thousands of atoms, making first-principles calculations nearly impossible. Fortunately, with the help of supercomputers, it became possible to create and simulate far more diverse models that provide a more detailed and accurate representation.

Professor Back’s methodology for viewing 5000 surfaces

Traditional methodologies have been implemented using a single surface model, whereas Professor Back has developed an AI model and methodology that consider 5000 surface models. To create an AI model, data for training it are required, and publicly available data in the field of catalysis simulations are scarce. Professor Back stated, “Recently, many AI techniques have utilized text or images, which are plentiful online. However, the catalytic surface adsorption energy data I use as training data are not publicly available.” He continued, “I had to create the data myself. I calculated the adsorption energies of carbon monoxide (CO) and hydrogen (H) using DFT. It would have been challenging to generate these data with my own computer cluster; KISTI’s supercomputer resources were essential.
I used the KISTI supercomputer’s resources to perform the DFT calculations and create the data, then trained the AI model on it. Now, with the AI model, we can evaluate catalyst properties. By considering the contributions of 5000 catalyst surfaces, we could predict activity and selectivity more accurately than the conventional method.” For training the AI model, various input values can be considered. In previous research by Professor Back, only elements near the catalyst’s adsorption site were considered. “There are elements that directly interact, i.e., bond with CO. Around those, there are second-nearest elements. Some are on the same layer, and others are on the atomic layer directly below,” he said. These are referred to as the “second-nearest neighbors to the adsorbate.” By weighting various input values (e.g., the positions of elements out to the second-nearest neighbors), the newly developed model sped up AI training while maintaining the same level of accuracy as previous models. “The training was much faster because it was streamlined to include only the necessary information without superfluous details, while maintaining accuracy at the level of my previous models,” he said. “Exaggerating a bit, while previous approaches amounted to exhaustive inspection, the 2021 research focused only on the important parts, allowing for quicker discovery.”

What calculations can be performed using KISTI’s computational resources?

If a problem requires a supercomputer, it is likely to be complex, and readers may wonder what makes “catalyst material simulation” research computationally intensive. What are the specifics of chemical research on a supercomputer? Professor Back elucidated, “For example, one (DFT) calculation on a single node of KISTI’s supercomputer takes approximately two days. For reference, KISTI’s 5th supercomputer has a total of 8305 compute nodes. The factors that lengthen computation times include both the size of the data being handled and the complexity of the equations involved; in my case, it is closer to the latter. As the system size increases, the calculation time grows roughly in cubic proportion. I explained earlier about simulating a single surface. If we model the catalyst size used in experiments and consider all adsorption sites, the simulation results will be close to reality. However, as the size increases, the calculation takes longer, and considering the various adsorption sites necessitates a supercomputer, not just a cluster.” The new AI model proposed by Professor Back was intended for reducing CO₂ to other substances using electrochemical reactions, and Back’s research group examined 465 metal combinations. They suggested that Cu-Pd and Cu-Ga alloy catalysts are favorable for producing C1+ compounds (such as ethylene) and formic acid, respectively. Professor Back’s theoretical predictions were verified experimentally by the group of Professor Kun Jiang at Shanghai Jiao Tong University, which synthesized the substances directly to check whether the catalysts’ activity and selectivity matched the predictions. “We proposed two catalysts out of 465 combinations through a year of theoretical research, and the experimental group took six months to verify them,” he said. This research demonstrates that AI models and methodologies developed using supercomputers can significantly reduce the cost and time required to test hundreds of material candidates.
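To make the adsorption-energy descriptor concrete, here is a minimal sketch in Python using the ASE library, computing E_ads = E(slab + CO) − E(slab) − E(CO) for CO on a Cu(111) surface. Note the hedges: ASE’s built-in EMT potential is a fast toy stand-in used here only so the script runs in seconds; Professor Back’s actual datasets come from full DFT calculations on NURION, and real workflows also relax the geometries, which this sketch skips.

```python
# Minimal adsorption-energy sketch with ASE:
#   E_ads = E(slab + CO) - E(slab) - E(CO)
# EMT is a toy potential standing in for DFT; geometries are not relaxed.
from ase.build import fcc111, add_adsorbate, molecule
from ase.calculators.emt import EMT

def energy(atoms):
    """Single-point energy with the EMT toy potential (DFT stand-in)."""
    atoms.calc = EMT()
    return atoms.get_potential_energy()

# Clean Cu(111) slab: 3x3 surface cell, 3 layers, 10 A of vacuum.
slab = fcc111("Cu", size=(3, 3, 3), vacuum=10.0)
e_slab = energy(slab)

# Isolated CO molecule in the gas phase.
e_co = energy(molecule("CO"))

# The same slab with CO placed 2 A above an on-top site
# (height and orientation are rough guesses, not optimized).
slab_co = fcc111("Cu", size=(3, 3, 3), vacuum=10.0)
add_adsorbate(slab_co, molecule("CO"), height=2.0, position="ontop")
e_slab_co = energy(slab_co)

e_ads = e_slab_co - e_slab - e_co
print(f"E_ads(CO/Cu(111)) = {e_ads:.2f} eV")  # negative => binding
```

Professor Back’s pipeline repeats calculations of this kind (with real DFT) across thousands of surface models, then trains an ML model on the resulting adsorption energies so that the remaining surfaces never need explicit quantum-mechanical treatment.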
Theoretical chemists start from fundamental physical theories to simulate realistic systems and predict their properties. Such complex calculations demand large amounts of data and high-performance computing resources, highlighting the increasingly important role of supercomputers in this field.

[Figure 1. (a) Predictive process for the selectivity of CO₂ reduction reaction products. (Top) Prediction of the adsorption energies of reaction intermediates on various catalytic surfaces. (Bottom) Calculation of the productivity of each product based on the predicted adsorption energies. (b) Selectivity and activity predictions for 465 different element combinations. / Mok et al., Nature Communications (2023)]
[Figure 2. (a) AI-predicted performance results for Cu-Pd alloy catalysts, showing the changes in selectivity at different voltages. (b) Experimental verification of the performance of Cu-Pd alloy catalysts, confirming the AI predictions. Additionally, the methodology developed in this study can predict changes in performance based on (c) the structure of the catalyst surface and (d) composition changes. / Mok et al., Nature Communications (2023)]

Figure source: Mok et al., “Data-driven discovery of electrocatalysts for CO₂ reduction using active motifs-based machine learning,” Nature Communications (2023). https://doi.org/10.1038/s41467-023-43118-0
Ringdown Gravitational Waves from Close Scattering of Two Black Holes Using the NURION Supercomputer
Professor Gungwon Kang specializes in general relativity, with gravitational waves as one of his main research areas. In collaboration with Dr. Yeong-Bok Bae and Dr. Young-Hwan Hyun, he published a paper in June in Physical Review Letters (PRL), a journal highly sought after by physicists worldwide. The black hole simulation that forms the core of the paper was conducted with support from the Creative Research category of the KISTI Supercomputing Exploration for R&D Innovation (SERI) program (KSC-2020-CRE-0352). To gain deeper insight into the term “ringdown” in the title, we visited Chung-Ang University on October 16.

Black Hole Simulation Research

Professor Gungwon Kang stated: “I have been studying black holes for 15 years, and a particularly intriguing result from a project that began three years ago led to the publication in PRL.” He continued: “Because of repeated selection for KISTI’s Creative Research Projects, I was able to perform this simulation using KISTI’s supercomputing resources. We identified a previously unobserved gravitational wave signal emitted when two black holes scatter closely and published a paper exploring its origin.”

Publication of the 2015 Gravitational Wave Discovery in PRL

Gravitational waves were first predicted by Albert Einstein in 1916. After 99 years, on September 14, 2015, the distinctive waveform of gravitational waves from a black hole collision was captured by the Laser Interferometer Gravitational Wave Observatory (LIGO) in the United States. The finding was published on February 11, 2016, 100 years after Einstein’s prediction. After a century-long wait, the prediction had finally been validated. The announcement of the gravitational wave detection drew the attention of numerous editors from top-tier academic journals, who visited LIGO. Given the media attention such research would garner, the editors competed actively to publish the paper. Although journals such as Nature and Science had higher impact factors, the LIGO research team voted to submit the paper to Physical Review Letters (PRL). The discovery ultimately resulted in the 2017 Nobel Prize in Physics.1)

Black Holes Create Gravitational Waves Even in a Near Miss

Professor Gungwon Kang focused his research on the dynamics that occur when two black holes pass by one another instead of merging. He explains that such events are more common and produce gravitational waves that are weak but still distort spacetime. What motivates this research? “There’s a field called numerical relativity,” he began. “It numerically solves Einstein’s gravitational field equations, which are otherwise difficult to solve.”2) This field primarily focuses on sources of intense gravitational waves, such as collisions between black holes or neutron stars. Simulating neutron stars is highly complex due to the need to account for their matter, whereas black holes, being devoid of matter, allow relatively simpler modeling. “Outside a black hole, only spacetime curvature needs to be considered, making the gravitational field equations easier,” he added. “Numerous previous studies used approximations, and so I found a niche to explore as a latecomer to gravitational wave research.” “The third-generation LIGO, known for its sensitivity, can detect the weak gravitational wave signals emitted by two black holes that pass each other. We studied the waveform of the gravitational waves emitted in such scenarios.
There is a parameter known as the impact parameter: the smaller it is, the closer the two black holes approach each other, while if it is too large, they simply pass by. There is a critical threshold that determines whether the two form a bound binary system. We conducted simulations using threshold values that vary with velocity, considering approach distances ranging from six to eight times the diameter of a black hole.” The black hole diameter mentioned here refers to the event horizon—the boundary that separates the black hole from its surroundings. Once light or matter crosses this boundary, it cannot escape the immense gravitational pull.

Gravitational Waves from Black Hole Collisions

Professor Gungwon Kang pulled out his laptop and showed a picture. “This is the gravitational wave detected by LIGO in 2015. The top area is cluttered with noise, but once the noise is filtered out, it appears nearly as shown below.” <Figure 1-a> He continued: “Gravitational waves are generated as two black holes spiral around each other (inspiral), and the waveform fluctuates in sync with their orbital period. As the two black holes draw closer, both the orbital period and the wavelength shorten, and the amplitude increases. The most dramatic moment occurs just before the merger. After the merger, the system rapidly stabilizes and the signal weakens. This phase is referred to as the ringdown, and the signal is called the ringdown signal. It is akin to striking a bell: the bell rings and then gradually fades. The ringdown signal encapsulates the characteristic oscillations of the newly formed black hole after the merger.” <Figure 1-b>

Oscillating Black Hole Event Horizon

When two black holes scatter closely off each other, they can follow diverse trajectories depending on their energies and angular momenta. Professor Kang’s research team hypothesized that if two black holes pass each other at high velocity along a hyperbolic trajectory, they would emit a single burst of signal. <Figure 2-a, red dashed circle> However, simulations conducted using KISTI’s supercomputers, as well as systems at the Korea Astronomy and Space Science Institute and the Institute for Basic Science (IBS), revealed unexpected results: following the initial single signal, a signal resembling a ringdown phase was observed. This prompted additional analysis. <Figure 2-b> Professor Kang explained, “The event horizon kept changing from a circular to an elliptical shape. Just as tidal forces between the Moon and Earth cause high and low tides due to the difference in gravity between the near and far sides, when two black holes approach each other, their mutual gravitational pull induces tidal interactions that cause the black holes to oscillate. These oscillations distort the event horizon, and in addition to the initial burst of gravitational waves generated by the orbital motion, additional gravitational wave signals are produced.” <Figure 2-c> He added that his research team was the first to detect such signals. “We were thrilled to be first. However, while drafting the paper, we came across a study by Professor Frans Pretorius at Princeton University in the United States, whose team had explored tidal deformations in neutron stars.
In neutron stars, tidal deformation arises from the deformation of internal matter; in black holes—devoid of internal matter—tidal deformation cannot be explained by approximation alone.” It was only by employing high-precision numerical simulations that they witnessed the event horizon of a black hole visibly oscillating under gravitational forces. He concluded, “The essence of this paper is that even black holes experience tidal deformation, producing gravitational waves that carry ringdown signals.”

Ringdown Gravitational Waves Caused by Tidal Interactions

There was some confusion about how the ringdown signal observed by Professor Kang’s team differed from the standard ringdown that follows the merger of two black holes. He clarified: “When two black holes pass by each other, their masses remain unchanged, but spacetime is distorted. This distortion propagates along the trajectory as gravitational waves. In addition, oscillations of the black holes induced by tidal deformation generate an additional gravitational wave signal, whose intensity is approximately 5% of the single orbital burst. This was a joint research effort conducted over three years by Dr. Yeong-Bok Bae (currently a researcher at Chung-Ang University, formerly of the IBS Center for Theoretical Physics), Dr. Young-Hwan Hyun (currently a researcher at the High Energy Physics Center at Chung-Ang University, formerly of the Korea Astronomy and Space Science Institute), and me.”

Research Background

If their discovery was unforeseen, what was Professor Gungwon Kang’s original research objective? He responded, “There is a model developed by the French theoretical physicist Thibault Damour (Professor Emeritus at IHES) in 1999, the effective one-body (EOB) Hamiltonian, which approximates the gravitational waveforms emitted when two black holes pass each other on a hyperbolic trajectory. The waveform varies with the black holes’ masses, spins, and separation, and because of the vast number of possible configurations, approximation methods such as EOB are essential for rapid analysis.” He went on to explain that a recently enhanced approximation model—third-order post-Minkowskian (3PM)—had become available. While validating its precision and improving parts of the approximation formula, he ran simulations across various parameter combinations, and it was during this process that he unexpectedly observed the ringdown signal. How did he come to associate the oscillating ringdown waveform—central to the paper’s thesis—with tidal deformation of black holes? “Initially, I was unaware of its nature,” he admitted. “The realization emerged during a discussion with an astronomer.” He continued: “During galactic collisions, intense tidal interactions may arise. If an object within one galaxy rapidly approaches a celestial body in another galaxy, the intense gravitational force induces tidal interactions. In such cases, the object’s angular momentum decreases, causing a temporary reversal of its rotational direction—a phenomenon known as a ‘flip’.” This insight was shared by Professor Hyung Mok Lee, currently President of the Korean Gravitational Wave Group (Professor Emeritus of Seoul National University’s Department of Astronomy and former President of the Korea Astronomy & Space Science Institute).

[Figure 1. (a) Orbital trajectory of two merging black holes. (b) Gravitational waveforms emitted during this process.
After the merger, the newly formed black hole rapidly stabilizes and emits a ringdown signal—an oscillatory waveform that decays over time.]
[Figure 2. (a) Single-burst gravitational waveform emitted when two black holes scatter closely. (b) Gravitational waveform discovered by Professor Gungwon Kang’s team—although the trajectory does not involve a merger, a signal resembling ringdown is observed in addition to the single burst during the close encounter. (c) Deformation of a black hole’s event horizon caused by tidal interactions—the basis of the first-ever prediction of a gravitational waveform not previously observed.]

Research Significance

Professor Gungwon Kang emphasized the value of this study: “Discovering such gravitational waveforms enables us to better understand astrophysical phenomena.” He explained that this requires not only developing gravitational wave data analysis techniques and advancing detection methods for new gravitational wave signals but also conducting feasibility studies on their potential observability. When asked whether such studies have begun, he answered: “Not yet; however, it is a matter I will eventually have to undertake. If we can demonstrate the possibility, both experimental physicists and waveform modelers will be drawn to the challenge of this new gravitational-wave concept. The U.S. is working on the Cosmic Explorer, a detector ten times larger than LIGO, while Europe continues efforts to complete the Einstein Telescope by around 2040.”

Korea’s Gravitational Wave Research

When asked about the current state of gravitational wave research in Korea, Professor Kang replied: “There are numerous challenges due to a lack of funding, research manpower, and experimental infrastructure. Nevertheless, conditions have improved to some extent. In the past, data analysts and theorists dominated the field; we are now witnessing a growing presence of young experimentalists, such as Dr. Sung-ho Lee at the Korea Astronomy & Space Science Institute, Professor Kyung-ha Lee at Sungkyunkwan University, and Professor June Gyu Park at Yonsei University. Nevertheless, our resources remain significantly behind those of other countries, and there are no domestically led experimental projects.” He went on to share his experience in Japan: “Last September, I visited the KAGRA observatory, which houses Japan’s gravitational wave detector.” His visit was to attend the LVK (LIGO–Virgo–KAGRA) Collaboration Meeting, an international gathering of gravitational wave researchers held twice a year, in March and September. Located near Toyama, a city known for neutrino detection experiments, KAGRA has 3-km-long arms—approximately 1 km shorter than LIGO’s—but is situated 200 m underground to improve sensitivity. It was completed in 2019 and began observations in 2020. Korean gravitational wave researchers, after forming the Korean Gravitational Wave Group, also attempted to build a detector called “SOGRO” based on superconductivity; however, the effort failed for lack of infrastructure. Subsequent proposals to establish a gravitational wave research group within IBS were submitted twice, but both were turned down. This reflects the current state of affairs in Korea. After novelist Han Kang received the Nobel Prize in Literature, the media began to claim that “a Nobel Prize in the natural sciences should be next.” Based on the current state of research, however, that prospect appears distant.
For now, a cold wind continues to blow through the field.
Origin of the Universe Examined with the NURION Supercomputer
Professor Changbom Park of the Korea Institute for Advanced Study published a paper titled “Formation and morphology of the first galaxies in the cosmic morning” in ≪The Astrophysical Journal≫, a top-tier international astronomy journal, on September 20, 2022. The James Webb Space Telescope (JWST), launched on December 25, 2021, is showing us wonders of the universe that we have never seen before. Meanwhile, Professor Changbom Park, a leading astronomer in Korea, published simulation research on the early stage of the universe known as the “cosmic morning,” heightening public interest in the origin of our galaxy. This simulation, named “Horizon Run 5” (hereafter HR5), is a major research project supported by the Supercomputing Exploration for R&D Innovation (SERI) Grand Challenge (large-scale) program (KSC-2018-CHA-0003, KSC-2019-CHA-0002, KSC-2021-CHA-0012) and was led by Professor Park, regarded as one of Korea’s leading theoretical astrophysicists. Let us explore his achievements using the KISTI supercomputer and the relevant findings.

The Standard Model of Our Universe

Modern cosmology, constructed by the leading astronomers of our time, explores the origin, present, and future of the universe. The standard model of modern cosmology is ΛCDM. Λ (Lambda) represents the cosmological constant, and CDM stands for cold dark matter; together they are believed to constitute most of our universe. The universe is undergoing accelerated expansion, with its expansion rate increasing over time, and modern cosmology has introduced dark energy to explain this phenomenon. Modern scientists believe that the vacuum is not empty but contains energy, and this vacuum energy is hypothesized to be the essence of dark energy. Moreover, there is an unidentified substance known as dark matter, which moves much more slowly than the speed of light and is believed to be cold and gravitationally attractive. Over the years, Professor Park has studied the redshift data of galaxies observed with the Sloan Digital Sky Survey (SDSS) telescope at the Apache Point Observatory in New Mexico, USA. By examining redshift values, it is possible to determine how fast objects are receding from the Earth, with higher values indicating objects that are farther away, in the earlier universe. Analyzing redshift data reveals how galaxies are distributed in space, thereby confirming how our universe has balanced gravity and expansion to shape its cosmological history.

Collaboration with young astronomers

In the early 2000s, Professor Park began intensive research to understand the expansion history of the universe. He discovered that, despite the evolution of the universe, the distribution pattern of galaxies in space does not change significantly in a statistical sense, leading him to devise a method for measuring the rate of cosmic expansion. Working with fresh postdoctoral researchers from prestigious universities around the world and senior researchers from national institutes, he combined his galaxy distribution measurement technique with the Alcock–Paczyński test to determine the equation-of-state value of the cosmological constant, and applied the method to observational data. However, there were limitations in fully exploiting the information contained in the spatial distribution of galaxies, so after several years of repeated refinement, the method could finally be applied and analyzed across all the observational data.
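(For reference, the redshift at the heart of these measurements has the standard definition

$$ z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{emit}}}{\lambda_{\mathrm{emit}}}, $$

the fractional stretching of a spectral line’s wavelength by cosmic expansion; larger $z$ means the light was emitted earlier, from a greater distance. This is textbook cosmology, not a result of the paper.)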
By integrating Alcock and Paczyński’s original idea with his own and specifying theoretically detailed methods for implementing it in practice, he discovered a new way to test cosmological models based solely on the spatial distribution of galaxies. Furthermore, “To validate the effectiveness of this method, it is important to perform large-scale cosmological simulations for comparison and analysis against the data,” stated Professor Changbom Park, initiating a new line of research on cosmological simulations using supercomputers.

Validation of observational data with the KISTI supercomputer

Professor Changbom Park said, “All observational data are quite messy owing to the complex interplay of indirect selection effects. To correct for the systematic distortions that occur in observations, it is essential to simulate the observation process meticulously.” Without accurate simulations, it is impossible to correct for the distortions in the observed data. This led him to apply to SERI, one of KISTI’s research support programs, through which he gained access to the NURION supercomputer to perform simulations that could validate the observational results. This is the genesis of the large-scale cosmological simulations known as the “HR series.” Starting with HR1 in 2007, through HR2, 3, 4, and 5, to HR+ in 2021, each iteration represented the most extensive simulation of large-scale structure evolution of its time. HR1 through HR4 were N-body simulations, which study the evolution of the universe by following N bodies under gravity, with each version increasing the number of particles. The more particles there are, the higher the fidelity of the model of cosmic evolution. However, particle gravity alone cannot capture star formation in small-scale structures such as galaxies and clusters. To address this, the subsequent HR5 and HR+ incorporated hydrodynamics. Professor Changbom Park’s research team thus conducted a large-scale cosmological numerical simulation that accounts for both gravity and hydrodynamics, one of the largest in the world. “Until HR4, the fluctuations in the distribution of matter were represented solely by the gravity of particles. Gas was not included,” he remarked, emphasizing that “HR5 incorporates gas and star formation alongside the gravitational dynamics of particles, thus enabling precise comparison with observations.” The HR series has consistently ranked among the largest cosmological simulations in the world, made possible by KISTI’s superior supercomputing resources. The NURION supercomputer, commissioned in 2018, boasts a computational power of 25.7 petaflops; a petaflop represents a quadrillion floating-point operations per second. Professor Changbom Park expressed his gratitude toward KISTI, stating, “For this simulation, I exclusively utilized a quarter of the total capacity of KISTI’s supercomputer. Without such significant support, this research would have been impossible.” He underscored the need for KISTI to secure even more powerful supercomputing resources so that domestic researchers can achieve global research milestones. When undertaking simulations of such immense scale, researchers must be present on-site. The HR5 simulation employed 170,000 cores of the NURION supercomputer, leveraging parallel computing techniques.
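As an aside, the N-body idea behind HR1 through HR4 can be illustrated with a toy single-core sketch: direct summation of pairwise gravity plus a leapfrog time step. This is pedagogy only; the actual HR codes evolve billions of particles with far more scalable parallel algorithms, whose details are not described here.

```python
import numpy as np

G = 1.0           # gravitational constant in arbitrary code units (assumption)
SOFTENING = 1e-2  # softening length to avoid singular close-range forces

def accelerations(pos, mass):
    """Direct-summation pairwise gravitational accelerations, O(N^2)."""
    diff = pos[None, :, :] - pos[:, None, :]      # r_j - r_i, shape (N, N, 3)
    dist2 = (diff**2).sum(-1) + SOFTENING**2      # softened squared distances
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)                 # a particle exerts no self-force
    return G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """Kick-drift-kick leapfrog, a standard symplectic N-body integrator."""
    vel += 0.5 * dt * accelerations(pos, mass)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos, mass)
    return pos, vel

# Evolve a small random particle cloud for a few steps.
rng = np.random.default_rng(0)
N = 1024
pos = rng.uniform(-1.0, 1.0, (N, 3))
vel = np.zeros((N, 3))
mass = np.full(N, 1.0 / N)  # equal-mass particles, total mass 1
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)
```

Direct summation costs O(N²) per step, which is precisely why production runs at HR5’s scale demand hundreds of thousands of cores and smarter force solvers.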
That 170,000-core scale surpasses the capability of an average personal computer (around 10 cores) by more than ten thousand times. Running the supercomputer occasionally produces errors, and a single faulty core can prevent other cores from proceeding with their calculations, so the system must be rebooted safely before a larger problem develops. Consequently, co-researchers Dr. Jaehyun Lee (Korea Astronomy and Space Science Institute) and Dr. Yonghwi Kim (KISTI Senior Researcher) stayed up all night monitoring the runs. Beyond this, once a simulation is complete and the data have been generated, they must be quickly transferred and analyzed over the 10 Gbps national research and education network (KREONET), managed by KISTI. This task was undertaken by Dr. Juhan Kim, a research professor at the Korea Institute for Advanced Study’s Center for Advanced Computation. These valuable outcomes could not have been achieved without the efforts of not only the researchers but also the supporting institutions and funding organizations.

[Figure 1. Simplified representation of the HR5 results—from left, the same region showing a) dark matter, b) stars, c) gas, d) gas temperature, and e) metallicity. Matter is interconnected like a web around two regions with a high concentration of stars, referred to as galaxy clusters; the webbed parts are called filaments, and the seemingly empty regions are called voids.]
[Figure 2. (Left) Schematic diagram of merger trees constructed between snapshots n − 1, n, and n + 1. The gray shades are self-bound structures, the diamonds mark the MBPs in the self-bound structures, and the colored ellipses are galaxies within them. The black solid arrows indicate main progenitor/descendant relations, dotted arrows mark mass transfers between structures with no such relations, and the dashed lines indicate mergers along the main descendant branches. The green solid arrows show the backtracking of the MBPs to repair branches broken by structure misidentification, and the red solid arrow marks the main progenitor–descendant branch repaired by the MBP backtracking scheme. (Upper right) Fractions of disk (blue), spheroid (red), and irregular (black) morphological types at redshifts 5, 6, and 7 as a function of galaxy stellar mass. (Lower right) Morphological type fractions of galaxies as a function of galaxy stellar mass at z ≈ 2. The blue, black, and red solid lines are for disks, irregulars, and spheroids, respectively.]

Infinite applications of HR simulations

The world’s largest cosmological simulation contains 1.5 petabytes of cosmic-object data.1) Professor Changbom Park has previously published papers in astrophysics journals on the characteristics of proto-clusters formed in the early universe and on a new cosmological model based on “the fifth element,” which may prove a groundbreaking extension of the standard ΛCDM model. All these studies were possible because his team uniquely possesses a vast amount of simulation data capturing the detailed physical characteristics of cosmological objects formed over the 13.8-billion-year history of the universe. Today, astronomers live in an era in which a tremendous amount of observational data is being produced by high-performance telescopes such as the James Webb Space Telescope (JWST), the Giant Magellan Telescope (GMT), and the Large Synoptic Survey Telescope (LSST).
Unfortunately, the HR simulations performed by Professor Changbom Park, and the field of numerical astrophysics including cosmological simulations more broadly, have not generated as much enthusiasm around the world as observational astronomy. Despite the various research outcomes published by computational groups around the world, new research results always need to undergo endless verification before gaining recognition. What will be the fate of Professor Park’s cosmological research? Will it be the first clue to a great discovery that shakes up cosmology? Only time will tell. [Further reading] The first paper on “Horizon Run 5,” published in ApJ on February 8, 2021: “The Horizon Run 5 Cosmological Hydrodynamical Simulation: Probing Galaxy Formation from Kilo- to Giga-parsec Scales.” ‘The Fifth Element’, published in ApJ on August 8, 2023: “Tomographic Alcock–Paczyński Test with Redshift-Space Correlation Function: Evidence for Dark Energy Equation of State Parameter w > −1”
Predicting and Generating Metal-Organic Frameworks with Large Language Models Using the NEURON Supercomputer
Professor Jihan Kim from the Department of Chemical & Biomolecular Engineering at KAIST published a paper in the journal Nature Communications in May 2024. The title of the paper is “ChatMOF,” with the subtitle “An AI system that predicts and generates MOFs using a large language model.” Large language models refer to AI models such as ChatGPT, developed by the American company OpenAI, which sparked a revolution in artificial intelligence. The title “ChatMOF” signifies the convergence of the latest AI research and the field of chemistry. On the way to meet Professor Jihan Kim, this thought arose: even chemists are now using ChatGPT for research, and with it, they are discovering new metal-organic frameworks (MOFs) that meet their specific requirements. I met Professor Kim on the first floor of the W1-3 building at KAIST. Following him into the lab, I saw nearly 20 graduate students, each seated at their desks, working on their research. I was struck by the size of the lab. We stepped into a side room inside the lab and sat down for the conversation. Young-Hoon Kang, a fourth-year doctoral candidate and the lead author of the Nature Communications paper, was also present. What is an MOF? Professor Jihan Kim stated that the material studied in this research is the MOF. “When metal ions and organic ligands bond, they form a material with crystallinity,” he said. “The performance of an MOF varies depending on the choice of the metal and organic ligand.” MOF research accounts for 70 to 80% of Professor Kim’s entire research activity. MOFs are drawing particular attention in the chemical industry due to their large surface area and tunability. They can serve many applications, including gas adsorption (such as in carbon dioxide capture), catalysis, and batteries. Thus far, several hundred thousand MOF types have been synthesized, which is an immense number. He added, “MOFs can be synthesized with relative ease by altering the combination of metal ions and organic ligands. However, the enormous number of possible combinations makes it time-consuming to identify the materials that exhibit the exact properties researchers are pursuing.” Development of ChatMOF  After the release of ChatGPT in November 2022, Professor Jihan Kim developed an interest in whether it could be utilized to screen MOFs and identify their material properties. He conducted tests, but the outcomes were unsatisfactory. Because ChatGPT is based on a large language model, and large language models are not inherently specialized in identifying the properties of MOFs, he presumed this was the underlying cause of the poor performance. “ChatGPT has some knowledge about MOFs, though not extensive,” Professor Kim explained. “Therefore, we modified the language model slightly and retrained it to significantly enhance its knowledge of MOFs.” He continued, “Now, ChatGPT is well-versed in MOFs and can generate accurate suggestions for materials with the characteristics we are seeking. Our ChatMOF allows users to easily access information about MOFs.” For those with limited experience attempting to identify MOF properties or search for suitable structures, it is not always clear which tools to use. Moreover, the necessary information might not be available in databases. Even the latest version of ChatGPT, version 4.0, might not provide the desired information. “That is why we developed the ChatMOF program—to deliver the answers that ChatGPT cannot,” Professor Kim emphasized.
ChatMOF executes tasks in three steps: database search, followed by property prediction, and finally, inverse design to generate materials with desired properties. Graduate student Young-Hoon Kang explained, “Consider ChatMOF as similar to the human brain. It is a device that makes decisions about what actions to take, similar to how the human brain operates.” Professor Kim’s team sequentially applies three tools: one independently developed tool for database retrieval, one for machine learning-based property prediction, and another for inverse design. Kang added, “In addition to these, we have a variety of tools. When a user inputs a query, ChatMOF decides which tool to use and makes the appropriate selection.” Professor Kim’s team has published a demo version of ChatMOF on Streamlit. As of the date of the interview, approximately 760 users had accessed it.
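The following Python sketch illustrates this kind of agent-plus-toolkit loop in miniature. It is only a schematic under stated assumptions: the tool names, the placeholder tool bodies, and the keyword-based router are hypothetical stand-ins, whereas in ChatMOF the language model itself plans and selects the tool, and an evaluator vets the output.

```python
# Illustrative sketch of an agent/toolkit loop of the kind described above.
# Everything here is a hypothetical stand-in, not ChatMOF's actual code.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

def search_database(query: str) -> str:
    # Placeholder: look up a precomputed property in a MOF database.
    return "surface area (from database lookup)"

def predict_property(query: str) -> str:
    # Placeholder: fall back to a machine-learning property predictor.
    return "predicted property value (from ML model)"

def inverse_design(query: str) -> str:
    # Placeholder: generate a candidate structure with target properties.
    return "generated MOF candidate matching the request"

TOOLS = [
    Tool("database_search", "look up known MOF properties", search_database),
    Tool("ml_prediction", "predict properties absent from the database", predict_property),
    Tool("inverse_design", "generate a MOF with target properties", inverse_design),
]

def route(query: str) -> Tool:
    """Crude keyword stand-in for the LLM 'agent' that plans and picks a tool."""
    if "generate" in query or "design" in query:
        return TOOLS[2]
    if "predict" in query:
        return TOOLS[1]
    return TOOLS[0]

query = "What is the surface area of this MOF?"
tool = route(query)
print(f"[agent] selected tool: {tool.name}")
print(tool.run(query))  # an 'evaluator' stage would normally vet this output
```

The point of such a structure is that adding a capability means registering a new tool, not retraining the underlying model.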
Data Necessary for Training ChatMOF I have often encountered the notion that building AI requires access to robust big data for training. Did Professor Kim’s team train ChatGPT using MOF-related big data? Professor Kim responded with the following explanation. “Machine learning models require a large volume of data. However, rather than feeding vast amounts of data into the AI model, we focused on training the language model on how to obtain and use information related to MOFs. Because the model lacked knowledge of which tools to use, we trained it: ‘When asked this type of question, respond in this manner; when asked that type, respond in that manner’. That is how we developed ChatMOF. Now, because it knows which tool to use and what to refer to for any MOF-related query, it can better answer user queries. In a sense, we fine-tuned the model.” When asked if he could provide a specific example, Professor Kim responded with one: “Consider the name of a particular MOF and ask, ‘What is the surface area of this MOF?’” “To obtain the surface area, a theoretical chemist would have to run codes and understand simulations. However, what makes the ChatMOF research meaningful is that now even non-experts can easily access such information. When a user asks about the surface area of a certain MOF, the system autonomously deduces what type of method to apply and how to derive the surface area. For example, if the answer might be found in a database, it autonomously decides to “search the database”, and ChatGPT then generates the code required to query that database. It executes the code, checks the surface area of the MOF in the database, and responds to the user with that value.” Prior Research on ChatMOF Has any system similar to ChatMOF previously existed? Have others in the MOF field built similar systems? Professor Kim answered, “No, it has not been done before,” and noted that the study most similar to their work might be the ChemCrow paper. The ChemCrow paper (Augmenting Large Language Models with Chemistry Tools) was published in the May 2024 issue of Nature Machine Intelligence. ChemCrow focused on organic molecules and conducted a study analogous to what Professor Kim’s team did with MOFs. By providing guidelines rather than posing bare questions, better responses could be obtained from GPT in the organic molecular field. Graduate student Young-Hoon Kang added, “You could say that AutoGPT is a precursor to ChatMOF.” “When we ask GPT a question, it answers based solely on its built-in knowledge. However, AutoGPT can use various tools to obtain information and then generate responses. For example, if a user asks for the current temperature in San Francisco in Fahrenheit, AutoGPT can visit the weather bureau’s website, search for the weather in San Francisco, and then use this information and a calculator to convert Celsius to Fahrenheit. It then delivers the answer to the user. A system that can utilize various tools and provide responses based on information obtained from these tools is AutoGPT. This allows the system to generate more accurate and comprehensive responses than those provided by a standalone GPT model.” He added, “We examined similar models and carefully considered how their ideas might be applied within the MOF domain. The model development process involved extensive reflection and analysis.” Challenges During the Research Process Professor Jihan Kim explained that the ChatMOF project posed certain challenges because it was a completely new area of research. Professor Kim’s expertise lies in predicting material properties through simulations based on quantum mechanics and classical mechanics, which he has used to identify optimal materials for specific purposes. Given his background in computational chemistry, he can identify the causes of anomalies in calculation results with relative ease. However, if GPT goes off-track and generates an inaccurate result, it becomes difficult even for him. He referred to this strange behavior of GPT as “hallucination.” “It is not only us; others also do not seem to know why GPT exhibits such behavior,” Professor Kim said. “Because the language model is complex, it is difficult to predict when hallucinations will occur. It sometimes just produces wrong answers.” Development of MOF Machine Learning Model Using KISTI Supercomputer Professor Jihan Kim completed his undergraduate studies at the University of California, Berkeley, and pursued his master’s and doctoral degrees at the University of Illinois Urbana-Champaign. From his undergraduate years through to his Ph.D., he majored in computer science. He later conducted postdoctoral research at Lawrence Berkeley National Laboratory, where he began integrating computational science with chemistry. “I developed a strong interest in GPUs back then,” he noted. “During my postdoc, GPUs began to gain widespread use in scientific computation, and I explored their potential for material screening.” His research team heavily utilized GPUs in the development of machine learning models. “With extensive support from KISTI, we developed ‘MOF Transformer’,” he said. Professor Kim further noted: “We had to perform an enormous amount of calculations. The resulting research on MOF Transformer was published in March 2023 in Nature Machine Intelligence, a prestigious sister journal to Nature (A multi-modal pre-training transformer for universal transfer learning in metal–organic frameworks).” “Until now, MOF machine learning models were designed to predict only a specific property,” he stated. “For instance, if the goal was to study CO₂ absorption, researchers would develop a machine learning model tailored to CO₂ absorption. What we have created, in contrast, is a universal model that can predict any property of MOFs. In addition, our MOF Transformer demonstrates superior performance even in single-property predictions compared to property-specific models.
A model this large can also be used to develop multiple specialized models.” Professor Kim emphasized that training and implementing such a model required extensive computational resources and acknowledged the significant support received from KISTI. He also mentioned that KISTI's computational resources and technical assistance (KSC-2022-CRE-0515, KSC-2023-CRE-0065) played a critical role in the development of ChatMOF as well.   [Figure 1. Example of a chatbot system for prediction and inverse design of metal-organic frameworks using a large language model. When a user inputs a text-based query about MOF properties, ChatMOF provides an appropriate response. When a user wants to generate a new MOF, ChatMOF can create one that meets the desired conditions.] [Figure 2. Overview of a chatbot system for prediction and inverse design of metal-organic frameworks using a large language model. ChatMOF consists of three core components: an agent, a toolkit, and an evaluator. The agent receives a user query, formulates a plan, and selects appropriate tools. The tools then generate outputs based on the proposed plan, and the evaluator finalizes the output as the response.]
Increase in Convective Extreme El Niño Events During CO₂ Reduction Periods, Calculated with the NURION Supercomputer
In June 2023, Professor Jong-Seong Kug published a paper on convective extreme El Niño (CEE) events under a climate mitigation scenario, titled “Increase in Convective Extreme El Niño events in a CO₂ removal scenario,” in ≪Science Advances≫. This study was conducted with the support of the Supercomputing Exploration for R&D Innovation (SERI) for Grand Challenge (Large-scale) (KSC-2021-CHA-0008, KSC-2023-CHA-0001). A press release from Pohang University of Science and Technology (POSTECH) summarized the contents of his paper as follows: “Professor Jong-Seong Kug of the POSTECH Division of Environmental Science and Engineering and the research team, including Gayan Pathirana, used Earth system models to simulate CO₂ concentration changes, projecting that the frequency of CEE events would increase even in situations of CO₂ reduction.” The academic community evaluated this result as suggesting a need to complement current climate change policies. CEE events, characterized by a rise in ocean temperature and daily rainfall exceeding 5 mm, cause global extreme climate events. Simulations have shown that an increase in atmospheric CO₂ concentration leads to an increase in the frequency and intensity of CEE events. Professor Kug and his colleagues posed a new question: what happens when the atmospheric CO₂ concentration decreases toward carbon neutrality? What about the frequency and intensity of CEE events? Let us revisit the POSTECH press release. “Even if CO₂ is reduced, CEE events can still occur frequently. The intertropical convergence zone (ITCZ) moves southward, and rainfall in the Eastern Pacific responds sensitively to sea surface temperature changes, triggering CEE events. This projection implies that, despite carbon reduction policies like carbon neutrality, the occurrence of CEE events is inevitable owing to the CO₂ already emitted into the atmosphere.” Research with supercomputers at POSTECH Upon visiting the third floor of the Jigok Research Building at POSTECH, one finds the “Center for Abrupt Climate Change.” The center is led by Professor Jong-Seong Kug (Division of Environmental Science and Engineering). Professor Kug is a long-time user of the KISTI supercomputer. He first used it during his master’s program in atmospheric sciences at Seoul National University (1998–2000). “Whenever I conducted climate modeling experiments, I received a lot of help from KISTI,” he said. Running Earth system models for climate change research on the KISTI supercomputer generated a vast amount of data. The data size was a petabyte, a very large volume at the time. “The computation time on the supercomputer was similar to the time it took to download the data,” Professor Kug explained. Now that there is a dedicated line (KREONET) between KISTI in Daejeon and POSTECH, the speed has increased, but “it still feels slow,” he said. Moreover, there is the issue of archiving the data generated by the simulations. Professor Kug stated, “The storage is always insufficient from a user’s perspective.” Professor Jong-Seong Kug started working at POSTECH in 2014 and has continued to use the KISTI supercomputer. “A few years ago, the irreversibility of climate change emerged as a new research topic. Irreversibility asks whether the current climate can be restored if CO₂ levels were to increase and then decrease,” he explained. “Climate change may progress to a certain point and then suddenly shift.
To study such phenomena, one must use a full Earth system model, which requires running computer simulations for a very long time. Whereas previous climate change studies looked at what might happen over the next 100 years, abrupt climate change studies have to look several hundred years ahead. That means the simulations have to run much longer, which means the amount of supercomputing resources needed has increased dramatically,” he said. What is the difference between traditional climate change research and abrupt climate change research? “Previously, the research focused on what the climate would be like in 100 years if the atmospheric CO₂ concentration doubled from current levels. However, the research has changed owing to the introduction of carbon neutrality,” he noted. Carbon neutrality means not increasing the amount of CO₂ released into the atmosphere. The Paris Agreement adopted at the 2015 UN Climate Change Conference declared carbon neutrality as a goal. Following that declaration, a new scientific question arose among climatologists: could the climate recover after achieving carbon neutrality? Super El Niño The paper published by Professor Jong-Seong Kug in ≪Science Advances≫ in 2023 focused on CEE research. What is El Niño? According to the Korea Meteorological Administration, “El Niño is a phenomenon where the sea surface temperature in the equatorial Eastern Pacific remains higher than average for several months.” When El Niño intensifies, it is referred to as super El Niño or CEE. According to Professor Jong-Seong Kug’s 2023 research in ≪Science Advances≫, CEE is a phenomenon that causes global extreme climate conditions, with daily rainfall exceeding 5 mm alongside a rise in ocean temperatures. The Progress of El Niño Research  In the early stages of his career, Professor Kug focused on predicting El Niño, and later, on the diversity of El Niño. His research direction has since shifted toward super El Niño and the irreversibility of climate change. Super El Niño, in terms of sea surface temperature, indicates an increase of 2 to 2.5 degrees. El Niño is defined as a rise in the sea surface temperature in the Eastern Pacific of 0.5 degrees or more for at least five months. The Eastern Pacific region where sea surface temperatures are measured is defined as the area (5°S–5°N, 150°W–90°W), referred to as “Niño 3.” The criteria for super El Niño vary among researchers, with some considering a 2-degree increase as the threshold; Professor Kug views it as an increase of 2.5 degrees or more. When Professor Kug published his paper in ≪Science Advances≫, the press release did not describe super El Niño solely in terms of a rise in sea surface temperature. CEE was described as a phenomenon causing global extreme climate conditions with daily rainfall exceeding 5 mm alongside a rise in sea temperatures. How does defining El Niño by rainfall rather than sea temperature affect the interpretation? “It is a slightly different definition,” Professor Kug said. El Niño and Extreme Climate Professor Kug stated, “During the El Niño phenomenon, changes in precipitation in the region are more important than changes in sea surface temperature. In terms of climate impact, precipitation is more crucial.” Although Professor Kug defined CEE by daily rainfall exceeding 5 mm, he clarified that this was not the first time such a definition had been used.
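For concreteness, the SST-based definition above translates almost directly into code. Below is a minimal Python sketch assuming a hypothetical gridded monthly SST anomaly array; the synthetic data, array shapes, and threshold loop are illustrative only, not the analysis used in the paper.

```python
# Minimal sketch of computing a Nino 3 index from monthly SST anomalies.
# The random data below stand in for observed or modeled SST anomaly fields
# relative to a climatology; shapes and names are hypothetical.
import numpy as np

months = 120
lats = np.arange(-30, 31)     # 1-degree grid, degrees north
lons = np.arange(120, 300)    # degrees east (150W-90W is 210E-270E)
rng = np.random.default_rng(0)
sst_anom = rng.normal(0.0, 0.6, (months, lats.size, lons.size))

# Nino 3 region: 5S-5N, 150W-90W
lat_mask = (lats >= -5) & (lats <= 5)
lon_mask = (lons >= 210) & (lons <= 270)
nino3 = sst_anom[:, lat_mask][:, :, lon_mask].mean(axis=(1, 2))

# El Nino criterion: index of +0.5 deg C or more sustained for 5 months
# (with random data this condition is rarely met; real SST fields persist).
warm = nino3 >= 0.5
run = 0
for month, flag in enumerate(warm):
    run = run + 1 if flag else 0
    if run == 5:
        print(f"El Nino condition met ending at month {month}")
```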
“There have only been three instances where averaged daily rainfall exceeded 5 mm, observed in the El Niño events of 82/83, 97/98, and 15/16,” he explained. The term “ITCZ” appears in the press release for Professor Kug’s research on intensifying CEE despite CO₂ reduction, which states that “CEE occurs as the ITCZ shifts southward and precipitation in the eastern Pacific Ocean becomes more sensitive to sea surface temperature.” A reference describes the ITCZ as “a low-pressure belt formed by the convergence of the Northeast and Southeast trade winds near the equator where air rises in the equatorial region, also known as the doldrums,” and notes that “the movement of the ITCZ is the cause of monsoons owing to the change in wind direction with seasons.” Why does the ITCZ move southward when a CEE occurs? “There are several reasons for the ITCZ’s southward shift, the biggest of which is related to CO₂,” said Professor Kug. “Reducing the atmospheric CO₂ concentration would cool the Earth. The northern hemisphere cools faster than the southern hemisphere, which takes longer to cool because it has more ocean. Imagine a warm place and a cold place: the warm air rises, and the cold air converges to take its place. In the current climate, the ITCZ is slightly skewed north of the equator; depending on the season, it lies between 5°N and 10°N latitude. But in the future, if we reverse CO₂, the rising air will move toward the Southern Hemisphere, which is relatively warmer, and the ITCZ will be pushed southward. This increases precipitation at the equator, providing favorable conditions for the frequent occurrence of CEE events.” How far can the ITCZ be pushed into the Southern Hemisphere? “If it moves far enough, the ITCZ could go from the northern hemisphere to the southern hemisphere. Simulations show that the peak of the southward movement is about 200 years from now,” said Professor Kug. What are the climate impacts of CEE on Northeast Asia and the Korean Peninsula? According to Professor Kug’s research, East Asia is expected to have more rain, whereas South Asia will have less rainfall and potentially experience drought. Regarding this, he stated, “For every 1-degree-Celsius increase in Earth’s temperature, overall rainfall increases by 2%–3%. Regionally, it rains more in areas where it already rains a lot, and less in areas where it rains less.” Irreversibility of Climate The 2023 ≪Science Advances≫ paper focuses on climate irreversibility as a key topic. The research begins by simulating a scenario for irreversibility: if the atmospheric CO₂ concentration increases by 1% annually, it quadruples from the current concentration in 140 years, reaching its peak. It is then assumed that the CO₂ concentration decreases by 1% annually from that point onward, and the study examines what would happen up to the point 280 years later. A comparative analysis was conducted between the 140 years of increase and the 140 years of decrease. The simulation results indicated that during the 140 years of decreasing carbon dioxide concentration, extreme El Niño events would occur two to three times more frequently than during the 140 years of increasing concentration. This is the crux of Professor Kug’s research. The team spent over six months conducting the 280-year-long model simulation. Such long-term climate simulations were possible only with access to the KISTI supercomputer; the availability of KISTI’s computational resources made this research feasible.
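A quick sanity check of the scenario’s “quadrupling in 140 years” figure: compounding 1% annual growth gives

$$
1.01^{140} = e^{140 \ln 1.01} \approx e^{1.393} \approx 4.0,
$$

so the concentration indeed roughly quadruples over the ramp-up phase, and the symmetric 1% ramp-down returns it over the following 140 years.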
To study the variability of future climate change, one must use climate models of a virtual Earth and perform integrations spanning hundreds of years. Professor Kug plans to calculate climate changes over the next 500 years. “If KISTI allows more access to the supercomputer, I could conduct more significant research. Supercomputers should be utilized more in climate change studies,” he stated. The movie “The Day After Tomorrow” (2004) featured a scenario in which the North Atlantic current stops, leading to abrupt climate change and New York freezing over instantly. Professor Kug emphasized the importance of predicting such events and said that this is his subject of study. “Research on abrupt climate change is something I’ve wanted to explore for the last decade,” he mentioned. “I aim for a long-term approach by running Earth system models on the KISTI supercomputer.” [Figure 1. Relationship between Niño3 (5°S to 5°N, 150°W to 90°W) boreal winter total rainfall and meridional SST gradient [average SST over the off-equatorial region (5°N to 10°N, 150°W to 90°W) minus the average over the equatorial region (2.5°S to 2.5°N, 150°W to 90°W)] for (a) PD climate, (b) ramp-up (Year 2005), (c) ramp-up (Year 2070), (d) ramp-down (Year 2210), and (e) restoring (Year 2350). During the period of decreasing CO₂, CEE events (in red) increase sharply.] [Figure 2. Changes in land precipitation during convective extreme El Niño (CEE). Difference in the composite of land precipitation anomalies (shading) and 850-hPa winds (vector) between Year 2210 and Year 2070.]
Wave Function Matching for Solving the Quantum Many-Body Problem Using the NURION Supercomputer
Dr. Youngman Kim published a paper titled “Wave function matching for solving the quantum many-body problem” in the May 15, 2024 edition of Nature. A paper authored by a Korean researcher in nuclear or particle physics being published in Nature is rare. In the field of physics, publishing a paper in Physical Review Letters is considered a high honor. For a paper to be accepted in Nature, the research must be not only scientifically exceptional but also of interest to the general public to some extent. As most physics research caters primarily to physicists, it is rare for such work to be published in Nature. Dr. Kim noted that “Researchers from the US, Germany, and South Korea played leading roles in this study.” The team conducted the research as part of the “Nuclear Lattice Effective Field Theory” collaboration. They proposed the theoretical framework of “wave function matching” and validated it. This study is one of the representative works supported by the KISTI Supercomputing Exploration for R&D Innovation (SERI) for Grand Challenge (KSC-2022-CHA-0003, KSC-2023-CHA-0005). Using computational resources at KISTI’s Supercomputing Center, the team verified the theoretical model, finding striking agreement with existing experimental data. They even produced predictions for quantities not yet observed, showcasing the model’s predictive power. The team was recognized for developing a novel and effective research methodology, validated by the publication of their paper in Nature. This raises the question: what fundamental scientific inquiry did these nuclear physics theorists pursue? Let us explore what Dr. Kim discovered by leveraging the KISTI supercomputer in his research. Atomic Nucleus Formation  Dr. Youngman Kim explained that “My primary interest lies in understanding how atomic nuclei are formed starting from the fundamental particles.” He added that “The biggest question that guides my research is how these nuclei evolve into familiar elements, such as carbon, oxygen, and iron.” Dr. Kim is a theoretical nuclear physicist. He leads the theory group within the Center for Exotic Nuclear Studies at the Institute for Basic Science (IBS) in Daejeon. Nuclear Physics  The field of nuclear physics has long focused on the atomic nucleus—comprising protons and neutrons—and the nuclear forces between them. However, explaining the diverse properties of various nuclei solely in terms of nucleon interactions has proven difficult. Specifically, no existing theoretical framework has explained the binding energy, mass, and charge radius of heavy nuclei comprising numerous nucleons, owing to two challenges: a computational complexity that is prohibitive even for supercomputers, and the lack of precise nuclear force models. Traditionally, models were chosen based on the specific purpose and scope of computation, with precision nuclear-force-based calculations largely restricted to light nuclei. An international collaborative research team has pursued “Nuclear Lattice Effective Field Theory,” computing nuclear wave functions—representing the quantum states of atomic nuclei—on space-time lattices using the Monte Carlo method to predict a wide array of nuclear characteristics. However, applying this method to heavy nuclei—comprising dozens of nucleons—has been virtually impossible owing to the computational challenge known as the “Monte Carlo sign problem,” which arises from the system’s complexity.
To address this, the researchers introduced a methodology known as “wave function matching,” which adapts the nuclear interaction model such that, in many-body systems afflicted by the Monte Carlo sign problem, the resulting wave functions remain computationally feasible at short-range scales. First, the two-body nuclear force, which acts between two nucleons, was precisely calibrated against experimental data from nucleon-nucleon scattering. This calibrated model was then transformed using wave function matching into a form better suited for many-body quantum calculations, while preserving the original two-body nuclear force model’s characteristics over the effective range. Additionally, a three-body nuclear force model was derived to explain the binding energies of nuclei beyond tritium (3H), using empirical mass values from approximately 20 nuclei. Applying the wave function matching methodology to both two-body and three-body nuclear forces enabled pure first-principles theoretical predictions for a range of nuclei, eliminating the need for further approximations or parameter adjustments. Using this methodology, the research team computed the binding energies, masses, and charge radii of a range of stable nuclei—from the deuteron (comprising two nucleons) to nickel with 58 nucleons—and confirmed consistency with known observed values. They also performed high-precision calculations of the binding energies and masses of neutron-rich oxygen isotopes up to oxygen-24 (24O). Research has been conducted to extend the application of Nuclear Lattice Effective Field Theory and the wave function matching methodology beyond nuclear structure to include nuclear reactions and a broader range of fields. This is expected to play a critical role in advancing our understanding of rare isotopes—one of the key scientific objectives of the RAON heavy-ion accelerator project. Need for Supercomputers  The foundational theory was originally developed by Professor Dean Lee’s research group in the United States, and the IBS team tested it. The team implemented the theory in code and verified that it produced accurate results. In particular, they used the KISTI supercomputer to compute the nuclear binding energies of carbon and oxygen isotopes. Why, then, is supercomputing necessary? Dr. Youngman Kim explains: “Consider oxygen-16, for instance. It contains eight protons and eight neutrons. Without any assumptions, we input only the interactions among these 16 nucleons. We program the strength of the interaction based on the distance between each pair. When you imagine computing every interaction among those 16, the computational scale is immense.” Dr. Kim further noted that “We incorporated both two-body interactions, where forces act between nucleon pairs, and three-body interactions, where forces arise among three nucleons.” With 16 nucleons, the number of possible two-body interactions is given by the combination ₁₆C₂ = 120, and the number of possible three-body interactions by ₁₆C₃ = 560.
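These counts follow directly from the binomial coefficient:

$$
\binom{16}{2} = \frac{16 \times 15}{2} = 120, \qquad \binom{16}{3} = \frac{16 \times 15 \times 14}{3 \times 2 \times 1} = 560.
$$

This combinatorial growth with nucleon number is one reason the cost of such calculations escalates so quickly for heavier nuclei.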
The simulation shows how the nucleons interact, stabilize, or decay, and provides these data. Once a stable nucleus forms, the simulation charts how strongly the nucleons are bound together over time. A nucleus is regarded as stable when its computed binding energy decreases over time and settles toward a fixed value, observed as a downward trend along the y-axis that levels off at a certain energy. When the binding energy ceases to decline, that convergence point indicates nuclear stability. [Figure 1. Relationship among the ground-state wave functions obtained from three Hamiltonians: the original complex Hamiltonian H, the unitarily transformed Hamiltonian Hʹ, and the simplified, computationally tractable Hamiltonian Hs. At distances shorter than 3.72 fm, the ground-state wave function of Hʹ is proportional to that of Hs. Beyond 3.72 fm, the ground-state wave function of Hʹ aligns with that of H.] [Figure 2. Comparison of nuclear binding energy calculation methods between the wave function matching methodology and conventional perturbation theory.] Challenges Faced  Conducting large-scale computations using supercomputers demanded extensive collaboration among multiple research partners. Three global supercomputing centers were utilized in the research. Utilization of Three Supercomputing Centers Worldwide  First, binding energy calculations were conducted using supercomputing resources from KISTI in South Korea. Dr. Youngman Kim’s group was selected for a major project in collaboration with the KISTI Computational Science Team (led by Dr. Cho Kihyeon). In 2021 and 2022, the team used approximately 20% of the total computational capacity of the Nurion supercomputer—equivalent to 1,500 nodes—to conduct the binding energy computations. Second, nuclear size calculations were performed by Professor Dean Lee’s group using the Oak Ridge Leadership Computing Facility’s supercomputing infrastructure in the United States. Third, calculations of the nuclear matter saturation energy were conducted at the Jülich Supercomputing Centre (JSC) in Germany. The saturation energy of nuclear matter represents the average binding energy per nucleon in a nucleus with infinite mass number. This study required an extraordinary volume of computation, necessitating the deployment of leading supercomputing facilities from three different countries. Distinguished Achievements  Conventional nuclear physics research has either relied on simplified models or, when based on the ab initio no-core shell model, focused solely on light nuclei. With ab initio nuclear structure theory, it is very difficult to describe nuclear masses, charge radii, and nuclear matter saturation at the same time. This study introduces a new methodology that extends quantum many-body calculations to medium-mass nuclei. Consequently, the research unified nucleon-nucleon scattering experiments, the masses and charge radii of various nuclei, and nuclear matter saturation under one consistent theoretical model. Future Research Directions  The research team plans to further apply nuclear lattice effective field theory and wave function matching methods to a wider range of nuclear structures and reactions. In particular, the utilization of the newly introduced Supercomputer No. 6 is expected to play a critical role in upcoming research efforts. Additionally, the work is expected to make significant contributions to rare isotope research conducted at the RAON heavy-ion accelerator facility. These efforts are anticipated to play an important role in both nuclear physics and cosmology in the future.
Non-isothermal Phase Change during Cavitation Bubble Pulsations Computed with the NURION Supercomputer
In October 2023, Professor Chongam Kim from the Department of Aerospace Engineering at Seoul National University published a paper titled “Computational investigation on the non-isothermal phase change during cavitation bubble pulsations” in the journal ≪Ocean Engineering≫. The paper examined the phenomenon of cavitation bubbles that occur during high-speed fluid motion. A cavitation bubble is a vapor bubble that forms when the pressure in a fluid moving at high speed drops below the vapor pressure, causing the fluid to change phase from liquid to gas. The bubble periodically expands and contracts, reaching temperatures of several thousand degrees Celsius during contraction and generating shock waves. This results in noise, vibration, and corrosion in the surrounding structures, a well-known engineering problem that needs to be resolved. Conducting long-time calculations of the turbulent flow governing equations, including the vapor-phase transport equation, required computational resources capable of large-scale numerical studies. The project was selected for the third phase of the KISTI Supercomputing Exploration for R&D Innovation (SERI) Creative Research (KSC-2020-CRE-0220), and the targeted calculations were performed with the help of NURION’s computational and large-capacity storage capabilities. Curious about what his latest paper means and what was discovered, I visited Professor Chongam Kim’s lab at Seoul National University on November 28th. What kind of researcher is he? Professor Chongam Kim specializes in computational fluid dynamics (CFD) and numerical methods, mainly developing numerical techniques for solving partial differential equations and applying them to the field of aerodynamics. CFD, computational science and technology (CS&T), and scientific computing are his research keywords. Broadly speaking, his research area is aerospace; more narrowly, fluid dynamics or aerodynamics. In solving the partial differential equations known as the Navier–Stokes equations, an engineer’s perspective prioritizes the assumption that solutions exist and emphasizes applying the equations to real-world engineering phenomena. Professor Kim finds it interesting that “by developing algorithms to calculate the equations, we can solve almost all phenomena described by the equations.” This is how the present research, which is much closer to marine and naval engineering than to his main field of aerospace, came about. Is marine engineering a major field of research? Professor Kim published this research in ≪Ocean Engineering≫, a marine engineering journal, yet eighty percent of his research does not deviate from aerospace fields. The motivation for this marine engineering research is also related to liquid rocket launch vehicles. A liquid rocket uses liquid fuel and oxidizer and operates under high-pressure conditions for high performance. One of the key elements in creating such high-pressure conditions is the turbopump. The liquid oxidizer is cryogenic, at temperatures between −160 and −170 °C, making the operating range of turbopumps very sensitive to temperature changes near the cryogenic critical point. Even slight temperature changes can cause phase transitions that affect turbopump performance; hence, accurately predicting phase changes is crucial.
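For reference, the tendency of a flow to cavitate is commonly characterized in the literature by the cavitation number (a standard definition in the field, not specific to this paper):

$$
\sigma = \frac{p_\infty - p_v}{\tfrac{1}{2}\rho U_\infty^2},
$$

where \(p_\infty\) and \(U_\infty\) are the far-field pressure and speed, \(p_v\) is the vapor pressure, and \(\rho\) is the liquid density. The smaller \(\sigma\), the more readily the local pressure falls below \(p_v\) and cavitation occurs.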
Professor Kim started the research thinking, “There were a few existing cavitation models, but I thought we could do better, and I believe we have solved this problem (of making cavitation models properly reflect temperature effects).” Research motivation “As I work on algorithms, some people told me that, if I really wanted to tackle something complex, I should delve into multiphase studies,” he said, recalling the advice of now-retired senior colleagues. “Having worked on algorithms, I wanted to extend the algorithms I developed for single-phase flows to multiphase flows, and while attempting to solve the turbopump issue, the problem of cavitation caught my attention.” The cavitation issue involves the local pressure (P) being greater or less than the saturation pressure (Psat), which determines the amount of vaporization or liquefaction. The widely used Kunz cavitation model, for example, empirically determines the amount of vaporization or liquefaction based on the difference between P and Psat. This model has problem-dependent, empirically determined coefficients, and Professor Kim knows of more than 10 models that empirically produce reasonable results.
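As commonly written in the literature (normalizations and coefficient values vary between implementations; this is a sketch of the general form rather than the exact expression used in the paper), the Kunz model splits the interphase mass transfer into a pressure-driven vaporization term and a polynomial condensation term:

$$
\dot m^- = \frac{C_{\mathrm{dest}}\,\rho_v\,\alpha_l\,\min\!\left(0,\,P - P_{\mathrm{sat}}\right)}{\rho_l\left(\tfrac{1}{2}\rho_l U_\infty^2\right)t_\infty}, \qquad
\dot m^+ = \frac{C_{\mathrm{prod}}\,\rho_v\,\alpha_l^2\left(1-\alpha_l\right)}{\rho_l\, t_\infty},
$$

where \(\alpha_l\) is the liquid volume fraction, \(t_\infty\) is a characteristic time scale, and \(C_{\mathrm{dest}}\), \(C_{\mathrm{prod}}\) are precisely the kind of empirical coefficients Professor Kim objects to. Note that temperature appears nowhere in these expressions.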
Professor Kim said, “The problem I am interested in is that one cannot address the issue no matter how the coefficients are adjusted. The reason is that people are less interested in creating good algorithms and just want to solve the problem somehow. Such approaches do not provide fundamental progress.” Engineers usually prefer practical approaches and are satisfied as long as they get results, which is often different from the approach pursued by scientists seeking to understand fundamental issues. Regarding this, Professor Kim said, “Nowadays, engineers or engineering scientists also pursue an understanding of basic principles. Is this not the era of interdisciplinary studies? The boundaries between engineering and physics or mathematics do not seem so clear-cut. Engineers working on algorithms, in particular, do not like strange coefficients in a formula. I simply do not like them because they raise uncertainty. Developing an algorithm means reducing uncertainty.” The temperature effect was not included. Professor Kim highlighted the absence of temperature effects in phase change models, emphasizing the importance of temperature. In particular, he stressed that temperature differences were not considered in various models, including the Kunz model. He cited the failure of a Japanese rocket launch as an example of a cavitation issue arising from temperature effects. In 1999, Japan’s H-II rocket crashed shortly after its 8th launch because the first-stage engine suddenly stopped its combustion earlier than planned. It was revealed that the crash was caused by the failure to predict cavitation instability in the rocket’s turbopump. There are two ways to predict such cavitation issues accurately: conducting precise experiments with the actual working fluid, or performing high-fidelity calculations through detailed numerical and physical modeling. However, the actual medium is a cryogenic fluid at −150 °C, and local measurements are challenging, making experiments expensive. Therefore, it is beneficial to understand the phenomenon through high-fidelity calculations. What Professor Kim has achieved here is exactly that work: he developed a cavitation model that properly reflects temperature effects. Successful development of the PCM model The model development necessary for this research was completed three years ago. The model is named the PCM (physics-based cavitation model). This research has just been published. The model was developed to solve the problem of cavitation bubble pulsation at cryogenic temperatures, and Professor Kim tried hard to find suitable data to validate it. The Korea Aerospace Research Institute (KARI) could provide only some data, so cryogenic data from NASA in the 1960s were also utilized. Through model validation, the PCM was shown to successfully predict the thermodynamic effects in cavitation bubble pulsation problems. Based on the research findings, the paper was submitted to and published in the archival journal. Professor Kim said, “Clarifying the intervention of thermodynamic effects during the cavitation bubble pulsation process is the first case of its kind.” Professor Kim mentioned the difficulty of physical modeling for cavitation problems occurring in ships or submarines, referencing the research of Shima, a Japanese experimentalist, who measured the repeated contraction and expansion of a cavitation bubble experimentally; the results of Professor Kim’s PCM model matched well with these experimental data. From these results, Professor Kim’s team presented a new interpretation surpassing existing theories, and the paper was evaluated as a significant achievement. The first author of this paper is doctoral candidate Kyungjun Choi, who will receive his Ph.D. in February 2024. What was discovered? Being able to predict cavitation bubble dynamics means that various engineering problems can be addressed, because it is now possible to accurately predict the timing of cavitation-induced corrosion and the size of the cavitation region. Professor Kim said, “The published paper is not perfect; approximately 60% has been figured out,” adding, “We are preparing another paper, which completely covers the effects of cavitation bubbles. The research is almost finished.” Professor Kim stated, “The important thing is to identify what the key problem or issue is. It is difficult to identify the problem, but relatively less difficult to find its solution.” He also stressed that posing good questions leads to good results. Rayleigh-Plesset equation Professor Kim briefly explained the history of cavitation bubble research. The Rayleigh–Plesset equation physically models the history of bubble growth. Referred to as the RP equation, it is a nonlinear ordinary differential equation that is practically impossible to solve analytically, so people have sought approximations. The growth of a bubble owing to phase change is divided into three regimes: the inertial regime, the intermediate regime, and the thermal regime, with growth starting inertially and later being influenced more by thermal energy. Existing models do not consider thermal effects and thus fail to respond to temperature changes. Professor Kim developed a new cavitation model, the PCM, that encompasses all the regimes from the inertial to the thermal, thereby effectively modeling the entire growth history.
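In its standard textbook form (a general statement of the equation, not the specific variant used in the paper), the RP equation for the bubble radius \(R(t)\) in a liquid of density \(\rho_l\), viscosity \(\mu_l\), and surface tension \(\sigma\) reads

$$
R\ddot R + \frac{3}{2}\dot R^2 = \frac{1}{\rho_l}\left[p_B(t) - p_\infty(t) - \frac{2\sigma}{R} - \frac{4\mu_l \dot R}{R}\right],
$$

where \(p_B\) is the pressure inside the bubble and \(p_\infty\) the far-field pressure. Since no analytical solution is available, it is typically integrated numerically. A minimal Python sketch follows; all parameter values are illustrative placeholders, and the crudest closure \(p_B = p_v\) is assumed, in contrast to the full non-isothermal modeling of the paper.

```python
# Minimal numerical integration of the Rayleigh-Plesset equation.
# Parameters are illustrative (water at room conditions), not the paper's
# cryogenic setup; the closure p_B = p_v ignores thermal effects entirely.
import numpy as np
from scipy.integrate import solve_ivp

rho = 998.0       # liquid density [kg/m^3]
sigma = 0.0728    # surface tension [N/m]
mu = 1.0e-3       # liquid dynamic viscosity [Pa s]
p_inf = 101325.0  # far-field pressure [Pa]
p_v = 2339.0      # vapor pressure [Pa]

def rp_rhs(t, y):
    """RP equation rewritten as a first-order system y = (R, dR/dt)."""
    R, Rdot = y
    Rddot = (p_v - p_inf - 2.0 * sigma / R - 4.0 * mu * Rdot / R) / (rho * R) \
            - 1.5 * Rdot**2 / R
    return [Rdot, Rddot]

def collapsed(t, y):
    """Stop the integration when the radius falls to 10% of its initial value."""
    return y[0] - 1e-4
collapsed.terminal = True

# A 1 mm bubble at rest in a higher-pressure liquid collapses inertially.
sol = solve_ivp(rp_rhs, (0.0, 2e-4), [1e-3, 0.0], events=collapsed, rtol=1e-8)
print(f"collapse to 0.1 mm after about {sol.t[-1]:.2e} s")
```

The classical Rayleigh collapse-time estimate \(\tau \approx 0.915\,R_0\sqrt{\rho_l/(p_\infty - p_v)} \approx 9\times10^{-5}\ \mathrm{s}\) for these numbers gives a quick consistency check on the integration.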
Was the computation extensive? According to Professor Kim, this research did not involve especially extensive computations; he manages a relatively large Linux cluster in his own lab. Professor Kim stated, “For large computations, KISTI’s supercomputer is essential,” pointing out that Korea’s supercomputing infrastructure is inferior to that of the US and Japan. He said, “There is a certain class of engineering and scientific problems that cannot be attempted without a supercomputer, but in Korea, only KISTI has a facility to meet such needs. This is why Korea needs to significantly increase its supercomputing infrastructure and resources.” [Figure 1. Diagram summarizing the results of cavitation bubble pulsations calculated using the PCM: (a) initial moment, (b) first maximum expansion, (c) first maximum contraction, (d) second maximum expansion, (e) second maximum contraction, and (f) third maximum expansion. The left half of each figure shows the volume fraction of the cavitation bubble, and the right half shows the pressure distribution of the flow field. Asymmetric bubble behavior owing to the lower wall structure located near the initial bubble is shown, and shock waves are emitted as the bubble expands and contracts.] [Figure 2. (Left) Computed cavitation bubble radius over time. (Right) Computed pressure inside the cavitation bubble over time. Experimental measurements (black dots) show the bubble pulsating up to the third cycle, but a model that ignores phase change (green line, No Phase Change) or one that neglects thermodynamic effects (blue line, Baseline) yields results that differ from the experimental values. In the baseline case, even adjusting the coefficients led to bubble extinction owing to excessive condensation at the second contraction, whereas applying the PCM accurately predicted up to the third pulsation cycle in comparison with the experimental values.]
Exploring Non-toxic New Materials for Next-generation Functional Semiconductors Using the NURION Supercomputer
To advance next-generation semiconductor materials, Professor Han Seul Kim utilizes atomistic-level simulations on supercomputing platforms for material discovery and property analysis. One of her research achievements, titled “Multi-ion controllable metal halide ionic structure for selective short- and long-term memorable synaptic devices,” was published in April 2024 in the international journal Nano Today. This study was conducted as part of the KISTI Supercomputing Exploration for R&D Innovation (SERI) Creative Research Support (KSC-2023-CRE-0252). Let us explore how the KISTI supercomputer was used to discover new materials for next-generation semiconductors. Development of New Semiconductor Materials  Silicon semiconductors are approaching their limits in terms of miniaturization through structural modification. In an interview conducted on June 13, Professor Han Seul Kim of Chungbuk National University (Department of Advanced Materials Engineering) stated, “Researchers from multiple disciplines are striving to create smaller, faster, and more versatile semiconductors. Innovation in next-generation semiconductors depends on the discovery and development of new materials that are complementary to or can replace silicon.” As a materials engineer researching semiconductor materials based on computer simulations and theory, Professor Kim focuses on new materials using first-principles simulations and theoretical methods. In her recent research, Professor Kim proposed candidate materials for next-generation semiconductors. Neuromorphic Computing  Modern computers have inherent limitations in data processing owing to the dichotomous architecture in which processing units, such as CPUs and GPUs, are separate from memory storage devices. In such architectures, even if data are processed quickly, storage is relatively slow. Professor Kim attributed this issue to the “von Neumann bottleneck.” She explained that “Such a bottleneck degrades power efficiency. Scientists are working on novel strategies to improve energy efficiency.” One of the newly proposed approaches is the neuromorphic computer, which mimics the human brain. The human brain performs tasks similar to those of computers while consuming significantly less power. Consider the 2016 Go match between AlphaGo, an AI model, and professional Go player Mr. Sedol Lee. Mr. Lee consumed about 20 W of power, assuming he had three meals a day. In contrast, AlphaGo required 200,000 kW—roughly 10 million times more. This is a stark contrast in energy consumption even though similar tasks were performed. The disparity results from fundamental architectural differences between computers and the human brain. Professor Han Seul Kim explained that “While computers have separate processing and storage units, the human brain integrates both functions.” She continued, “Neurons in the human brain transmit information to the next neuron through synapses. When specific stimuli are repeatedly applied, memory retention is enhanced. When a specific word is frequently used, the associated memory becomes more active, making the word easier to recall. In contrast, words that are rarely used are less active in memory, making recall slower. A synapse, located between the terminal of one neuron and the next, strengthens this connection when repeatedly stimulated at short intervals. This is how learning occurs. When repeatedly stimulated, short-term memory transitions to long-term memory, a phenomenon known as synaptic plasticity.
The brain is a network of neurons responsible for processing data. In particular, the brain is both a computational device and data storage. Strengthening or weakening of synaptic connections is an ongoing process within the brain. Neuromorphic computing aims to replicate this brain architecture in computer systems.” Creating Artificial Synaptic Devices Surpassing the Human Brain Professor Han Seul Kim stated that “The starting point for neuromorphic computing is the development of a unit memory device that exhibits synaptic plasticity. Recent research has focused particularly on developing devices based on the discovery of materials capable of synaptic plasticity.” She further noted that “To expand the potential of computing and Artificial Intelligence and Internet of Things (AIoT) devices, it is crucial not only to mimic the synaptic functions of the brain but also to enhance controllability and multifunctionality.” Professor Kim also remarked, “In our recent paper published in Nano Today, we developed a multimodal artificial synaptic material that, unlike human synapses, can operate in two distinct memory modes depending on the applied voltage.” Memristor The memory resistor (memristor) is a device concept also known as a resistive switching memory element. In conventional materials, when a voltage is applied across two terminals and current flows, the resistance (defined as the voltage-to-current ratio) remains constant. However, certain materials exhibit variable resistance depending on the voltage. Leveraging this property, such materials can function as memory elements that temporarily store data. The device developed by Professor Kim falls under the category of memristors, specifically a type of Resistive RAM (RRAM). It operates by applying a sufficiently high write voltage to an insulator, which shifts (lowers) its resistance and allows current to flow even under a small read voltage. RRAM devices operate through two primary mechanisms. According to Professor Han Seul Kim, when voltage is applied, atoms can be emitted from the electrode, and as they accumulate, they form a nanoscale filament. This filament serves as a pathway for electron movement, facilitating current flow. That is, the resistance decreases. Conversely, when a reverse voltage is applied, the filament can be ruptured. Professor Kim stated that “When stimulation induces the formation or rupture of atomic-scale filaments, the states of increased or decreased resistance are stored as information.” The second mechanism utilizes vacancies in a material. Sometimes, atomic sites in the crystal lattice are unoccupied. Professor Kim noted, “Filament formation can also occur due to these vacancies within the crystal lattice.” This implies that synaptic plasticity can also be implemented through vacancies. She further elaborated, “When atoms are emitted from the electrode, the key is how easily they can be released or transported. Conversely, when forming filaments through atomic vacancies, the key is how readily vacancies can be created and mobilized within the material.” Discovering Materials via KISTI Supercomputer  Professor Han Seul Kim utilized the Nurion supercomputer at the Korea Institute of Science and Technology Information (KISTI) in Daejeon to design new ionic crystal materials. The subsequent phase of the research involved evaluating whether atomic or ionic movement within these materials could be precisely controlled—in particular, whether filament formation would be feasible. 
The materials theoretically predicted were synthesized into real compounds, which were then used to build memristors, and their characteristics were verified through experimentation. The experiments were conducted by the research team of Professor Jin Woo Choi at Kongju National University, who was affiliated with the Korea Institute of Materials Science at the beginning of the collaboration. Professor Kim explained, “Before the advent of supercomputers, computer simulations were used primarily for theoretical research. Over time, they have become useful for collaborative research involving both theory and experiment. Nowadays, simulations are sometimes run in advance to inform and design experiments.” Professor Kim employs density functional theory (DFT) for simulation, a theory used to calculate the electron configurations and energies within a material. About three years ago, under the Grand Challenge Program of KISTI’s Innovation Support Program, Professor Kim leveraged roughly one-third of the Nurion supercomputer’s massive resources for approximately 15 days to investigate the properties of over 3,000 metal halide perovskite compounds. Based on this database, she later proposed humidity and alcohol detection sensors in 2022. In this study, the ionic mobility of a compound known as Cs₂AgI₃ was examined. Verifying Ionic Mobility Properties Professor Han Seul Kim said that “Our key question was whether atoms would move under a voltage applied across both terminals and form filaments.” To determine whether atoms ejected from the electrode or internal atomic vacancies could align and create a filament, their migration behavior under voltage must be analyzed. First, the probability of vacancy formation or of surplus atoms existing within the material was calculated. Next, the additional energy required to mobilize these vacancies or surplus atoms was calculated. Professor Kim explained that “The lower the energy barrier (for ion migration), the faster the movement” and added, “This material comprises three elements—iodine, silver, and cesium. First, we confirmed that iodine exhibits the highest mobility due to its low energy barrier. It can migrate even under a small applied voltage. Conversely, if a voltage high enough to overcome the barrier for atoms to escape from the silver electrode is applied, the silver concentration within Cs₂AgI₃ increases. In this case, the silver ions migrate more significantly than the others, resulting in silver filament formation.”
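The quantities computed here follow the standard conventions of DFT defect energetics (a general sketch of the definitions, not necessarily the paper’s exact workflow): the formation energy of a vacancy of species \(X\) is obtained from total-energy differences,

$$
E_f[V_X] = E_{\mathrm{tot}}[\text{defective}] - E_{\mathrm{tot}}[\text{pristine}] + \mu_X,
$$

where \(\mu_X\) is the chemical potential of the removed atom, and the migration barrier \(E_a\) is the energy difference between the saddle point along the migration path and the initial site, typically located with methods such as the nudged elastic band. Because hopping rates scale as \(\exp(-E_a/k_BT)\), a lower barrier means exponentially faster migration, which is why iodine, with the lowest barrier, responds first at small voltages.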
[Figure 1. Concept and operating principle of the dual-mode memristor: the artificial synapse based on Cs₂AgI₃ exhibits short-lived long-term synaptic plasticity (SL-LTP) in low-voltage mode (top right), enabled by iodine vacancy migration, and long-lasting long-term synaptic plasticity (LL-LTP) in high-voltage mode (bottom left), enabled by silver ion migration.] [Figure 2. (a) Conductance variation under repeated pulses in SL-LTP mode. (b) Conductance variation under repeated pulses in LL-LTP mode. (c) Nonlinearity statistics in SL-LTP mode. (d) Nonlinearity statistics in LL-LTP mode. (e) Schematic of an artificial neural network trained to recognize 28 × 28 pixel MNIST handwritten digits. (f) Pixel projection of trained weights from random initial weights. (g) Recognition accuracy over time for the SL-LTP (red) and LL-LTP (blue) modes, compared with an ideal artificial synapse (black).] Future Research Directions Professor Han Seul Kim is interested in developing novel functional materials for next-generation electronic devices. What research would she like to pursue moving forward? Professor Kim shared, “There are two primary directions. First, I would like to perform simulations that better reflect real-world phenomena; this calls for developing new simulation methodologies and frameworks.” She expressed particular interest in creating a multiscale simulation methodology that enables seamless integration from material discovery to circuit-level design. The second direction involves designing eco-friendly next-generation semiconductor materials and devices with unprecedented functionalities. She aims to explore ways to control material properties, and to add new functions, by modulating them in response to external stimuli. Professor Kim remarked, “The development of multifunctional nanomaterials will be central to realizing novel next-generation semiconductor devices that enhance lives in the AIoT era, and supercomputer-based material discovery and property prediction allow this to be achieved efficiently.”
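One technical aside on the nonlinearity statistics in Figure 2(c, d): studies that map synaptic devices onto neural networks commonly describe the conductance after a train of identical pulses with a saturating update curve, and how far that curve departs from a straight line is a standard figure of merit, since more linear updates generally bring recognition accuracy closer to the ideal-synapse curve in Figure 2(g). The sketch below uses one common phenomenological form with made-up parameter values; it illustrates the metric, not the specific model used in the paper.

```python
import math

# A common phenomenological model of synaptic-device potentiation:
# after n of N identical pulses, the conductance follows a saturating
# curve whose curvature is set by a nonlinearity parameter A (a very
# large A recovers the ideal linear update). All values are made up.
G_MIN, G_MAX = 1e-6, 1e-5     # conductance range in siemens (illustrative)
N_PULSES = 50                 # pulses per potentiation sweep

def conductance(n: int, a: float) -> float:
    """Conductance after n potentiation pulses with nonlinearity A = a."""
    span = G_MAX - G_MIN
    return G_MIN + span * (1 - math.exp(-n / a)) / (1 - math.exp(-N_PULSES / a))

for a in (5.0, 20.0, 1e6):    # strongly nonlinear ... nearly linear
    curve = [conductance(n, a) for n in range(N_PULSES + 1)]
    # Worst-case deviation from the ideal straight-line update,
    # expressed as a fraction of the full conductance range:
    dev = max(abs(g - (G_MIN + (G_MAX - G_MIN) * n / N_PULSES))
              for n, g in enumerate(curve)) / (G_MAX - G_MIN)
    print(f"A = {a:>8.0f}: worst-case deviation from linear = {dev:.1%}")
```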
Real-time Urban Wind Environment Simulations Using the NEURON Supercomputer
In November 2023, Professor Jung-Il Choi of the Graduate School of Mathematics and Computing (Computational Science and Engineering) at Yonsei University and his team published a paper titled “Multi-GPU-based real-time large-eddy simulations for urban microclimate” in a special issue of ≪Building and Environment≫, Advances in Research on Urban Microclimate and Impacts on the Built Environment. ≪Building and Environment≫ is a prestigious international journal in civil engineering and construction & building technology, with a JCR impact factor of 7.4 (top 4.0%). Professor Jung-Il Choi and Dr. Mingyu Yang also won the Best Presentation Award at the 2023 Spring Conference of the Korea Society of Computational Fluids Engineering. Supported by the Korea Institute of Science and Technology Information (KISTI) Supercomputing Exploration for R&D Innovation (SERI) Creative Research program (KSC-2022-CRE-0195), the team conducted real-time urban wind environment simulations using the NEURON supercomputer and developed a GPU-based solver. The paper states, “We analyzed the wind environment over an area of 10.49 km² in Yongsan, Seoul, with a resolution of 4 m. We could perform the simulations 2.4 times faster than real-time, ultimately showing that urban wind environment forecasts are possible under these conditions.” For more details, we visited Professor Jung-Il Choi on the 6th floor of the Advanced Science and Technology Center at Yonsei University. Research area The name of Professor Jung-Il Choi’s lab is the “Multi-Physics Modeling and Computation Lab (MPMC Lab).” After browsing its website, I said to Professor Choi, “I understand that there are four research areas: computational fluid dynamics, batteries, physics-based modeling, and data-based modeling.” Professor Choi smiled and corrected me: “The main research areas are computational fluid dynamics (urban wind environments) and battery modeling; the other two are methodologies that support research in these areas.” The lab’s name includes the term “multi-physics,” but what does it mean? Professor Choi explained: “Often, different physical phenomena come together to form a system. Consider, for example, a battery, an electrochemical system. To describe a battery system, it is necessary to simultaneously consider various equations representing electrode reactions, charge conservation, ion transport, and energy conservation. Most systems are thus described not by a single equation but by a combination of equations.” He added, “Most natural phenomena and engineering problems involve complex physical systems rather than simple physics with a single equation. My research focuses on how to model these complex systems and make the modeling accurate yet simple enough for fast computation.” Background of the study on the urban microclimate environment in Yongsan, Seoul Professor Choi showed a slide titled “Development of an LES (Large Eddy Simulation) Solver using Multi-GPU for Urban Wind Environments.” He chose Yongsan, Seoul, as the study area. Yongsan has a river (the Han River), complex terrain (Yongsan Station, Samgakji, and the Gongdeok residential-commercial district), and large areas slated for skyscrapers (the Yongsan Maintenance Depot). The Seoul Metropolitan Government has also announced plans to operate urban air mobility (UAM) taxis between Yongsan and Gimpo Airport.
In 2022, Seoul City unveiled a future vision named the “2040 Seoul Urban Master Plan,” which includes operating passenger drones. Professor Choi mentioned, “Currently, Gimpo Airport and Yongsan are designated as candidate UAM Vertiport sites, but as UAM becomes commercialized, Vertiports will be constructed in various locations.” The siting of Vertiports, the landing sites for UAM, is crucial because UAM vehicles carrying passengers must take off and land safely, and it is difficult for them to fly in strong or gusty winds; Vertiport sites must avoid such conditions. Professor Choi said, “An environmental assessment is necessary before establishing a Vertiport. Although no Vertiport has been built yet, I wanted to perform a preliminary study of the urban wind environment in Yongsan, one of the candidate sites.” Urban microclimate environment research In urban wind environment research, the larger the target area, the larger the domain the computer must analyze. The number of unknowns to be solved grows in proportion to the target area, which makes simulations expensive. The number of unknowns also depends on how finely the same area is resolved: when an area is divided into a grid, the smaller the grid cells, the more precisely the wind environment can be captured, that is, the higher the resolution. How large should a single grid cell be? To observe the wind environment around a building 100 m tall, for instance, the grid cell size must not exceed 100 m; with the proper resolution, the wind environment around a specific building can be understood. Similarly, to analyze at human scale, a resolution of approximately 2 m is required. Professor Choi and Dr. Yang examined a target space of 2 km by 2 km by 256 m in height, using four grid sizes: 8 m, 4 m, 2 m, and 1.33 m. With a grid edge of 1.33 m, the number of cells is approximately 1 billion. Three wind velocity components, pressure, and temperature are considered, so five variables are calculated at each grid point, and the total number of unknowns is 1 billion × 5 = 5 billion. Professor Choi said, “Roughly 5 billion unknowns must be solved by the computer. A problem of this size is a big challenge for a single computer.” Development of a GPU-based solver Such calculations are performed on computers equipped with GPUs. A GPU is the graphics processing unit found in familiar computer graphics cards. NVIDIA introduced a new card, the H100, in 2023; owing to the artificial intelligence boom, GPUs have become scarce, and this product costs about 50 million won each. Professor Choi’s lab has four A100 GPUs. He said, “To buy equipment worth 200 to 300 million won, the total research budget needs to be over 1 billion won.” Professor Choi therefore used the supercomputer at KISTI as an alternative. Calculations were performed with 8 A100 GPUs: by running them for 10 hours, the team simulated a full 24 hours of wind over the 10.49 km² Yongsan area. KISTI has approximately 260 GPUs as of 2023, and the new supercomputer planned for 2024 is expected to be overwhelmingly GPU-based.
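The cell and unknown counts quoted above can be checked with back-of-the-envelope arithmetic. The sketch below is my own estimate, assuming a uniform grid with five unknowns per cell over the full 10.49 km² analyzed area and a 256 m vertical extent; under these assumptions the numbers land close to the figures quoted in the article.

```python
# Back-of-the-envelope grid sizing for the urban LES domain.
# Assumptions (mine, chosen to match the article's quoted figures):
# a uniform grid over the 10.49 km^2 Yongsan area, a 256 m vertical
# extent, and 5 unknowns per cell (u, v, w, pressure, temperature).
AREA_M2 = 10.49e6      # analyzed area in m^2
HEIGHT_M = 256.0       # vertical extent in m
VARS_PER_CELL = 5      # three velocity components + pressure + temperature

for dx in (8.0, 4.0, 2.0, 1.33):
    cells = (AREA_M2 / dx**2) * (HEIGHT_M / dx)
    unknowns = VARS_PER_CELL * cells
    print(f"grid {dx:>5} m: {cells/1e6:8.0f} M cells, "
          f"{unknowns/1e9:6.2f} B unknowns")
# At 4 m this gives roughly 40 million cells (~0.2 billion unknowns),
# and at 1.33 m roughly a billion cells (~5 billion unknowns), in line
# with the numbers Professor Choi quotes.
```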
[Figure 1. (Top-left) The domain of an idealized building structure used to validate the solver. (Bottom-left) The mean flow and vertical temperature profiles of the analyzed wind field, showing good agreement between the wind tunnel measurements and the solver’s profiles. (Right) The solver’s simulation speed as a function of the number of NVIDIA A100 GPUs, the analysis area, and the analysis resolution, for the actual domain size. Regions to the left of each line indicate calculations performed faster than real-time, whereas those to the right indicate slower-than-real-time calculations.] What is important in real-time analysis? Professor Choi said, “The most important thing for a UAM carrying people to take off is real-time analysis of the wind environment.” Wind environment information must be calculated faster than real-time and provided to the UAM, where it aids the pilot’s decision-making. If more GPUs are used, even faster calculations are possible. According to Professor Choi, the grid size at which the analysis approached real-time speed was 4 m. At this resolution, the number of cells is 40 million and the total number of unknowns is 200 million; computer simulations can then predict the wind environment for a full day in only 10 hours, that is, 2.4 times faster than real-time. Professor Choi said, “We want to obtain wind environment results from external weather data and our solver and provide them to drones and UAMs. Real-time computation had hardly been achieved when a high-fidelity model was used to analyze urban wind environments at an appropriate resolution, so I think our research was a very competitive effort.” He then added, “There is much room for improvement. This time, we only included temperature in addition to wind, but many more elements constitute the urban wind environment. More realistic factors, such as atmospheric stability, radiative heating, urban heat maps, integration with observational data, and pollutant dispersion, need to be considered. Furthermore, we need to overcome the limitations of computational speed through supercomputing technology, particularly GPU parallel computing. Soon, comprehensive urban wind environment information should be provided in real-time, as this is critical for the safe operation of UAMs.” Development of the solver The achievement of Professor Choi’s team was to calculate the urban wind environment faster than real-time. Has similar research been done internationally? The FastEddy model, a GPU-based effort, was developed by the National Center for Atmospheric Research (NCAR) in 2020, and the CityFFD model by a group at Concordia University in Canada in 2022. Professor Choi said, “The NCAR model solves the equations with a different method from ours, so it is impossible to say which is superior.” He continued, “We developed our solver independently, and it is capable of fast simulations with a high-fidelity model; in terms of computational performance, it is a competitive platform.” He emphasized, “We are probably the first to quantitatively analyze computational resources as a function of both resolution and target area, explicitly showing that real-time simulations are possible.” [Figure 2. (Top-left) Terrain map and computational domain of Yongsan, Seoul; the valid computational domain covers 4.00 km². (Bottom-left) 3D visualization of turbulent flow structures (iso-surfaces of vortical structures) in the computational area. (Right) Vertical velocity fields on (a) an XZ plane, (b) the Z = 10 m plane, and (c) the Z = 40 m plane.]
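Two of the headline numbers above are easy to sanity-check: the speedup is simply the simulated time divided by the wall-clock time, and a rough memory estimate (mine, not from the paper) shows why the larger grids call for pooling several GPUs, given that an A100 carries 40 or 80 GB of memory.

```python
# 1) Speed: 24 hours of wind simulated in 10 hours of wall-clock time.
simulated_h, wallclock_h = 24.0, 10.0
print(f"speedup vs real time: {simulated_h / wallclock_h:.1f}x")  # 2.4x

# 2) Memory (a rough estimate, not from the paper): one double-precision
# copy of all unknowns, for the 4 m grid (~200 million unknowns) and the
# 1.33 m grid (~5 billion unknowns). A time-stepping LES solver keeps
# several such arrays (stages, right-hand sides, work buffers), so the
# finest grid clearly exceeds a single 40-80 GB A100.
for label, unknowns in (("4 m grid", 200e6), ("1.33 m grid", 5e9)):
    print(f"{label}: {unknowns * 8 / 1e9:.1f} GB per field copy")
```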
Previous research of Professor Jung-Il Choi Professor Choi mentioned his previous research. “This research began in 2012, and we have been developing algorithms ever since. There was a leap in 2016, and by 2019 the algorithms had matured to a milestone. In that year, we also developed a parallel computing method for supercomputers: a methodology for parallel computing on 250,000 CPU cores using MPI. In 2021, we released the numerical library PaScaL_TDMA. This stage was the most challenging, but it was also the biggest leap in our research methodology. Then, in 2023, we created PaScaL_TDMA 2.0, a GPU version of the existing algorithms. GPUs are far more powerful than CPU-based computers. So we have had three leaps in our research methodology.” Dr. Mingyu Yang is the first author of the 2023 paper. When asked about the challenges of this research, he replied, “For someone without a background in computer hardware, the most difficult part was becoming familiar with GPUs. If the target performance was 100%, my initial code reached only 10% of it. With the help of the professor and others, we eventually reached the full target performance.” Finally, Professor Choi said, “This research was achieved through the collective and continuous efforts of Dr. Mingyu Yang and others in my lab, including Dr. Ki-Ha Kim (Samsung Advanced Institute of Technology), Dr. Geunwoo Oh (Samsung Display), Professor Xiaomin Pan (Shanghai University), and KISTI researchers Dr. Ji-Hoon Kang and Dr. Oh-Kyoung Kwon. It serves as a milestone for real-time urban wind environment research.”
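A closing note on PaScaL_TDMA for readers unfamiliar with the name: TDMA stands for the TriDiagonal Matrix Algorithm, also known as the Thomas algorithm, and the library solves large batches of tridiagonal systems, which arise in the implicit steps of flow solvers like this one, in parallel across many CPU cores or GPUs. For reference, a single-system Thomas algorithm looks like the sketch below; this is a plain textbook implementation, not the library’s code.

```python
# Minimal single-core Thomas algorithm (TDMA) for a tridiagonal system
# a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]. PaScaL_TDMA solves many
# such systems at once across distributed processors; this is only a
# plain reference implementation of the underlying kernel.
def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Tiny check: a 4x4 system constructed so the solution is x = [1, 2, 3, 4].
a = [0.0, 1.0, 1.0, 1.0]                     # sub-diagonal (a[0] unused)
b = [2.0, 2.0, 2.0, 2.0]                     # main diagonal
c = [1.0, 1.0, 1.0, 0.0]                     # super-diagonal (c[-1] unused)
d = [4.0, 8.0, 12.0, 11.0]
print(thomas(a, b, c, d))                    # ~[1.0, 2.0, 3.0, 4.0]
```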

NURION

  • Summary: A system with compute nodes and CPU-only nodes, an Omni-Path network, burst buffer storage, a Lustre parallel file system, and RDHx liquid cooling.
  • Service: In service since 2018.
  • Capacity: 8,305 Intel Xeon Phi (Knights Landing) compute nodes and 132 CPU-only nodes (Intel Xeon Skylake), with a theoretical peak performance of 25.7 PF.

NEURON

  • Summary: Complements the Knights Landing (CPU)-based Nurion system, meeting diverse user demands through GPU-based system operation.
  • Service: In service since 2019; shares its file system with the 5th system (Nurion) and continuously adopts and extends next-generation technologies (FPGA, AMD EPYC, Optane, etc.).
  • Capacity: 65 server nodes and 260 GPUs, with a theoretical peak performance of 3.53 PF.

System Usage Status

[System usage dashboard: live counts of utilized, idle, and repaired nodes for Nurion and Neuron.]