A security technique to fool would-be cyberattackers: the method protects a computer program's secret information while enabling faster computation


Researchers demonstrate a method that protects a computer program’s secret information while enabling faster computation.

Multiple programs running on the same computer may not be able to directly access each other's hidden information, but because they share the same memory hardware, a malicious program can steal their secrets through a "memory timing side-channel attack."

The malicious program notices delays when it tries to access the computer's memory, because the memory hardware is shared among all programs using the machine. It can then interpret those delays to obtain another program's secrets, such as a password or a cryptographic key.
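The attack described above can be illustrated with a toy simulation. This is a hypothetical sketch, not the researchers' actual experiment: a Python lock stands in for the shared memory controller, a "victim" thread uses it in bursts, and an "attacker" thread times its own accesses. Probes that take noticeably longer reveal when the victim was using the controller. All names here (`memory_controller`, `victim`, `attacker`) are invented for illustration.

```python
import threading
import time

# A lock stands in for the shared memory controller: only one program
# can "go through the door" at a time.
memory_controller = threading.Lock()

def victim():
    # The victim accesses memory in a burst; in a real attack, the timing
    # of this burst could depend on a secret such as a key bit.
    for _ in range(5):
        with memory_controller:
            time.sleep(0.01)   # simulated memory access while holding the door
        time.sleep(0.001)

def attacker(samples):
    # The attacker repeatedly probes the controller and records how long
    # each of its own accesses takes. Long waits indicate contention.
    for _ in range(20):
        start = time.perf_counter()
        with memory_controller:
            pass               # probe: acquire and release immediately
        samples.append(time.perf_counter() - start)

samples = []
t_victim = threading.Thread(target=victim)
t_attacker = threading.Thread(target=attacker, args=(samples,))
t_victim.start(); t_attacker.start()
t_victim.join(); t_attacker.join()

# Probes that overlapped with victim activity show much higher latency,
# leaking when (and therefore how) the victim used memory.
contended = [s for s in samples if s > 0.005]
print(f"{len(contended)} of {len(samples)} probes saw contention")
```

The exact number of contended probes depends on thread scheduling; the point is only that the attacker learns something from its own latencies without ever reading the victim's data.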

One way to prevent these attacks is to allow only one program to use the memory controller at a time, but that significantly slows down computation. Instead, a team of MIT researchers has developed a new approach that lets memory sharing continue while providing strong security against this type of side-channel attack. Their method speeds up programs by 12 percent compared with state-of-the-art security schemes.

In addition to providing better security while enabling faster computation, the technique could be applied to a range of different side-channel attacks that target shared computing resources, the researchers say.

“These days, it’s very common to share a computer with others, especially if you are doing computation in the cloud or even on your own mobile device. A lot of this resource sharing is happening. Through these shared resources, an attacker can seek out even very fine-grained information,” says senior author Mengjia Yan, the Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Co-lead authors are CSAIL graduate students Peter Deutsch and Yuheng Yang. Additional co-authors include Joel Emer, a professor of the practice in EECS, and CSAIL graduate students Thomas Bourgeat and Jules Drean. The research will be presented at the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).

Tied to memory

You can think of a computer’s memory as a library and the memory controller as the door to the library. A program needs to go to the library to retrieve some stored information, so the program opens the library door very briefly to enter.

Malware can exploit shared memory in several ways to gain access to secret information. This work focuses on a contention attack, in which the attacker needs to determine the exact instant the victim program goes through the library door. The attacker does this by trying to use the door at the same time.

“The attacker pokes at the memory controller, the library door, to say, ‘Is it busy now?’ If they get blocked because the library door is already open — because the victim program is already using the memory controller — they are going to get delayed. Noticing that delay is the information that is being leaked,” says Emer.

To prevent contention attacks, the researchers developed a scheme that “shapes” a program’s memory requests into a predefined pattern that is independent of when the program actually needs to use the memory controller. Before a program can access the memory controller, and before it could interfere with another program’s memory requests, it must pass through a “request shaper” that uses a graph structure to process the requests and send them to the memory controller on a fixed schedule. This type of graph is known as a directed acyclic graph (DAG), and the team’s security scheme is called DAGguise.

Deceive an attacker

Under this rigid schedule, DAGguise sometimes delays a program’s request until the next time it is allowed to access memory (according to the fixed schedule), and sometimes submits a fake request if the program has no real request to send at the next scheduled interval.

“Sometimes the program will have to wait an extra day to go to the library, and sometimes it will go when it didn’t really need to. But by doing this very structured pattern, you can hide from the attacker what you are actually doing. These delays and fake requests are what guarantee security,” says Deutsch.
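The shaping idea described above can be sketched in a few lines. This is an illustrative model, not DAGguise's actual hardware (the function name and parameters here are invented): requests leave the shaper only at fixed ticks, and when no real request is waiting at a tick, a fake one goes out instead, so the traffic pattern the attacker observes never depends on the program's behavior.

```python
from collections import deque

def shape_requests(arrivals, total_ticks, period=3):
    """Emit one request to the memory controller every `period` ticks.

    arrivals: dict mapping tick -> request, giving when the program
    actually issues each memory request (irregular, secret-dependent).
    Returns the (tick, request) pairs as seen at the controller.
    """
    pending = deque()   # real requests waiting for their scheduled slot
    emitted = []
    for tick in range(total_ticks):
        if tick in arrivals:
            pending.append(arrivals[tick])        # queue the real request
        if tick % period == 0:                    # fixed schedule slot
            if pending:
                emitted.append((tick, pending.popleft()))  # real, possibly delayed
            else:
                emitted.append((tick, "FAKE"))             # dummy request
    return emitted

# The program issues requests at irregular times...
trace = shape_requests({1: "load A", 2: "load B", 8: "load C"}, total_ticks=12)
print(trace)
# ...but the controller always sees exactly one request per slot,
# regardless of when (or whether) the program really needed memory.
```

Running this yields `[(0, 'FAKE'), (3, 'load A'), (6, 'load B'), (9, 'load C')]`: the observable pattern is the same fixed cadence whatever the input, which is the property that defeats a contention attacker.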

DAGguise represents a program’s memory access requests as a graph, where each request is stored in a “node” and the “edges” connecting the nodes are time dependencies between requests. (Request A must be completed before request B.) The edges between nodes — the time between each request — are fixed.
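The graph structure can be sketched as follows. This is a simplified model under assumed semantics (the class and field names are invented, not from the paper): each request is a node, each edge carries a fixed delay, and a request's emission time is determined entirely by its parents' times plus those fixed delays, so the schedule is independent of the program's secret data.

```python
from dataclasses import dataclass, field

@dataclass
class RequestNode:
    """A memory request in the DAG; deps holds (parent, fixed_delay) pairs."""
    name: str
    deps: list = field(default_factory=list)

def emission_time(node, memo=None):
    """Tick at which a request reaches the controller: the maximum over
    its parents' emission times plus the fixed edge delays (0 for a root)."""
    memo = {} if memo is None else memo
    if node.name not in memo:
        memo[node.name] = max(
            (emission_time(parent, memo) + delay for parent, delay in node.deps),
            default=0)
    return memo[node.name]

a = RequestNode("A")                        # root request, emitted at tick 0
b = RequestNode("B", deps=[(a, 4)])         # must follow A by exactly 4 ticks
c = RequestNode("C", deps=[(a, 2), (b, 3)]) # constrained by both A and B
print(emission_time(c))                     # → 7 (longest path A→B→C: 0 + 4 + 3)
```

Because every edge delay is fixed ahead of time, an attacker who observes the emission times learns only the predefined graph, never the data-dependent timing of the real requests.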

A program can submit a memory request to DAGguise whenever it needs to, and DAGguise adjusts the timing of that request so security is always guaranteed. No matter how long it takes to process a memory request, the attacker can only see when the request is actually sent to the controller, which happens on the fixed schedule.

This graph structure makes it possible to dynamically share the memory controller. DAGguise can adapt if many programs are trying to use memory at the same time and adjust the fixed schedule accordingly, allowing more efficient use of shared memory hardware while maintaining security.

A performance boost

The researchers tested DAGguise by simulating how it would perform in a real implementation. They constantly sent signals to the memory controller, which is how an attacker would try to determine another program’s memory access patterns, and formally verified that, no matter what the attacker tried, no private data was leaked.

Next, they used a simulated computer to see how their system could improve performance, compared to other security approaches.

“When you add these security features, you’re going to slow down compared to normal execution. You’re going to pay for that in performance,” says Deutsch.

While their method was slower than an insecure baseline implementation, DAGguise delivered a 12 percent performance increase over other security schemes.

On the strength of these encouraging results, the researchers want to apply their approach to other computational structures shared between programs, such as on-chip networks. They also want to use DAGguise to quantify the threat that certain types of side-channel attacks might pose, in an effort to better understand performance and security trade-offs, Deutsch says.

This work was funded, in part, by the National Science Foundation and the Air Force Office of Scientific Research.
