
Photo: Michael Jantz in a server room

Good Memory

The need for ever-faster computing continues to grow, with computers playing an important role in simulation, modeling, and artificial intelligence in ways that touch almost all aspects of modern life.

Keeping up with those demands has also required finding new, more efficient ways of accessing memory and storing data. Assistant Professor Michael Jantz, of the Min H. Kao Department of Electrical Engineering and Computer Science, understands those needs well.

“We are going to have to develop new approaches for how we manage data in memory. Data movement and storage are still major bottlenecks for many computing applications, and conventional memory technologies have already been scaled to their physical limits.”

—Michael Jantz

Jantz has devised a concept that could further fuel computing performance by better managing how data and memory are used.

He explained that conventional memory requires constant power to retain its contents, citing the example of someone losing files or unsaved work when a computer crashes or suddenly restarts. Such memory is called “volatile,” because its contents are lost if power is interrupted, but it offers low response times and good overall performance.

On the other hand, disk-based storage is typically non-volatile. Files and other application data can be stored permanently on hard drives, but these technologies have the disadvantage of being much slower.

Jantz’s solution proposes taking advantage of the strengths of both by better understanding how applications and software interact with and use data in memory.

“New storage-class memory systems allow you to have persistent data, but can also be used like traditional memory,” Jantz said. “Because they don’t draw as much power, you can manufacture them with much larger capacities. A modern volatile memory unit might hold up to 32 gigabytes of storage, but the newer storage class memory units can store up to 512 gigabytes or more in the same physical area.”
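To illustrate the idea of persistent data that is used like traditional memory, here is a minimal sketch, not code from Jantz’s project, of one common way such memory is exposed to applications on Linux: a file on persistent media is mapped directly into the process’s address space, so the program reads and writes it with ordinary loads and stores while the data survives a restart. The path /mnt/pmem/example.dat is a placeholder.

```c
/* Minimal sketch: map a file backed by persistent media into the
 * address space and use it like ordinary memory.
 * The path is a placeholder; this is not code from Jantz's project. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION_SIZE (1 << 20)   /* 1 MiB region for the example */

int main(void) {
    /* Open (or create) a file that lives on persistent storage. */
    int fd = open("/mnt/pmem/example.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, REGION_SIZE) != 0) { perror("ftruncate"); return 1; }

    /* Map the file so loads and stores go straight to the region. */
    char *region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ordinary memory operations -- no read()/write() calls needed. */
    strcpy(region, "this string survives a restart");

    /* Flush the stores so they are durable before exiting. */
    if (msync(region, REGION_SIZE, MS_SYNC) != 0) perror("msync");

    munmap(region, REGION_SIZE);
    close(fd);
    return 0;
}
```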

To get a better idea of what works best for individual applications, the team is studying how performance is affected when applications use different types of memory storage.

“We first have to understand what portions of an application’s data should go where, in terms of memory use, before we can design a program to better control how memory is utilized,” he said. “Someone doing data analytics might only want a small amount of very fast memory, whereas an application that uses large tables or creates a database might prefer a persistent storage medium with more capacity, even if it’s not quite as fast.”
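As a rough illustration of the kind of placement decision Jantz describes, and not his team’s actual tooling, the sketch below defines a hypothetical allocator wrapper that routes small, latency-sensitive data to ordinary DRAM via malloc and routes large, capacity-hungry data to a file-backed region that could live on a slower but much larger tier. The tier names, the size threshold, and the backing path are invented for the example.

```c
/* Hypothetical placement sketch: send "hot" data to fast DRAM and
 * large, capacity-bound data to a file-backed region on a bigger,
 * slower tier.  Names, threshold, and path are illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

enum tier { TIER_FAST, TIER_CAPACITY };

/* Pick a tier from a caller-supplied hint about the allocation. */
static enum tier choose_tier(size_t size, int latency_sensitive) {
    if (latency_sensitive || size < (64 << 10))  /* small or hot -> DRAM */
        return TIER_FAST;
    return TIER_CAPACITY;                        /* big and cold -> capacity tier */
}

static void *tier_alloc(size_t size, int latency_sensitive) {
    if (choose_tier(size, latency_sensitive) == TIER_FAST)
        return malloc(size);                     /* ordinary DRAM allocation */

    /* Capacity tier: back the allocation with a file that could live on
     * storage-class memory or another large, slower medium.  A real
     * allocator would carve distinct offsets out of the file; this
     * sketch always maps from offset 0 for clarity. */
    int fd = open("/mnt/capacity/heap.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) return NULL;
    if (ftruncate(fd, size) != 0) { close(fd); return NULL; }
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                                   /* the mapping stays valid */
    return p == MAP_FAILED ? NULL : p;
}

int main(void) {
    double *hot_counters = tier_alloc(1024 * sizeof(double), 1);  /* fast tier */
    char   *big_table    = tier_alloc((size_t)256 << 20, 0);      /* capacity tier */
    printf("hot=%p big=%p\n", (void *)hot_counters, (void *)big_table);
    return 0;
}
```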

He pointed out that the way software interacts with memory and its underlying hardware technologies has remained largely unchanged since the 1970s, even though the demands placed on machines, and expectations of what they can do, have increased greatly.

Jantz will make the project’s results, data, and code public, both during the project itself and for at least five years after it is set to conclude in 2025.