

Himanshu Thapliyal and student working on a 3D printer

Protecting 3D Printers and Minding Your Language

Thapliyal and Kim Improve Industry Cybersecurity

Technologies like additive manufacturing and artificial intelligence (AI) are revolutionizing multiple industries, enabling faster production and development than ever before. However, when software security for these technologies is lacking, malicious users can hijack important industrial devices—and even the code that ties them together.

Two of UT’s cybersecurity experts, Associate Professor Himanshu Thapliyal and Assistant Professor Doowon Kim, are creating new tools and educational techniques to improve the security of emerging technologies at the industry level.

Thapliyal “CAN” Protect 3D Printers

Additive manufacturing, commonly known as 3D printing, has been gaining traction in defense, medical, and other critical industries. The technology holds incredible potential to quickly create more detailed, less expensive, and lighter-weight products than are possible with traditional manufacturing methods.

It also creates new, significant attack surfaces along the supply chain.

“As 3D printing gains traction, it becomes more likely malicious entities will endeavor to exploit the technology for personal benefit,” Thapliyal said.

Thapliyal researches new methods to secure interconnected devices like 3D printers, which are often joined together via unprotected controller area network (CAN) protocols. CANs connect peripheral modules to each printer and printers to each other, but do not offer any protection from compromised modules, hardware Trojans, or other adversarial connections.
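
The weakness is easy to see in miniature. The sketch below (Python; the arbitration ID and temperature encoding are invented for illustration, not taken from any printer's firmware) models a classic CAN frame, which carries only an identifier and a few data bytes: a frame forged by a compromised module is indistinguishable from a legitimate one.

    from dataclasses import dataclass

    @dataclass
    class CanFrame:
        # Classic CAN: an 11-bit arbitration ID and up to 8 data bytes.
        # The frame carries no sender identity and no authentication tag,
        # so any node on the bus can transmit any ID it likes.
        arbitration_id: int
        data: bytes

    # A legitimate controller might report a 210 C nozzle temperature like this:
    legit = CanFrame(arbitration_id=0x101, data=(210).to_bytes(2, "big"))

    # A compromised module on the same bus can emit an identical-looking frame
    # carrying a dangerous value; receivers have no way to tell the difference.
    spoofed = CanFrame(arbitration_id=0x101, data=(450).to_bytes(2, "big"))

    print(legit)
    print(spoofed)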

Cyberattacks on 3D printer CANs provide multiple angles of sabotage. Attackers can obtain proprietary design files, procedures, and blueprints; force the creation of substandard or unsafe products by injecting imperfections into design files; hijack printer heads to operate at critically unsafe temperatures; and disrupt the supply chain by encrypting crucial files and compelling a payment in exchange for restoration of services.

Himanshu Thapliyal and a student discussing and looking at computer code

“Malicious alterations to commands and data can result in severe damage and safety risks to users, private data, and property,” said Thapliyal.

Such attacks are likely to become more significant as 3D printing farms and systems come to rely on a growing number of interconnected, variably sourced parts with unknown supply chain security to keep up with product demand.

“Modularity and compatibility have contributed significantly to the success of 3D printing in many industries,” Thapliyal said. “However, each new function and degree of connectivity increases the potential attack surface of 3D printers.”

Fortunately, Thapliyal recently proposed a novel solution for securing CANs used for additive manufacturing.

His team restructured a CAN into a hierarchical framework that isolates critical components while retaining the ability to use legacy or third-party devices. The framework also encrypts data, authenticates connected devices, and authenticates and validates messages sent through the network.
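
The paper's exact construction is not reproduced here, but the sketch below illustrates the general idea behind authenticating CAN-style messages: the sender appends a keyed tag computed over the frame contents and a message counter, so the receiver can reject frames that have been altered or replayed. The key handling, frame layout, and tag length are assumptions made for illustration, not details of Thapliyal's framework.

    import hashlib
    import hmac
    import os

    # Hypothetical per-device key, provisioned when a device is authenticated
    # to the network (an assumption for this sketch, not the team's design).
    DEVICE_KEY = os.urandom(32)

    def tag_frame(arbitration_id: int, payload: bytes, counter: int) -> bytes:
        # Truncated HMAC over the ID, a monotonically increasing counter,
        # and the payload; the counter defeats replay of old frames.
        msg = arbitration_id.to_bytes(4, "big") + counter.to_bytes(4, "big") + payload
        return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()[:8]

    def verify_frame(arbitration_id: int, payload: bytes, counter: int, tag: bytes) -> bool:
        expected = tag_frame(arbitration_id, payload, counter)
        return hmac.compare_digest(expected, tag)

    # Sender side
    tag = tag_frame(0x101, (210).to_bytes(2, "big"), counter=1)

    # Receiver side: a tampered payload fails verification.
    print(verify_frame(0x101, (210).to_bytes(2, "big"), 1, tag))  # True
    print(verify_frame(0x101, (450).to_bytes(2, "big"), 1, tag))  # False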

The new network has low overhead, reducing authentication costs by between 25% and 90% compared with current CAN security solutions, and is equipped to block emerging threats from attackers using quantum computing.

“We expect that our proposed method can be utilized by advanced manufacturing firms and users to secure their operations and reduce the likelihood of cyberattacks,” Thapliyal said.

Kim Minds Your Language

Sophisticated AI systems built on large language models (LLMs), such as ChatGPT, have become almost eerily convincing. An increasing number of professional users are relying on AI assistants to outline emails, speeches, reports, and even software.

“AI-powered coding assistant tools like ChatGPT and Copilot could revolutionize the way software is developed,” said Kim. “These tools can enhance efficiency and productivity by generating boilerplate code for developers, eliminating the need for manual implementation from scratch.”

Unfortunately, just as ChatGPT can fabricate fake medical studies and court cases, LLM-based AIs are not necessarily reliable for code creation. While an incorrect citation can be frustrating or embarrassing, AI-generated code can create significant security risks.

LLMs are trained on massive collections of source code snippets, most of which are gathered from open-source projects on platforms like GitHub. The snippets undergo no verification process, so insecure code could make its way into AI-generated suggestions. In fact, attackers can intentionally “poison” the training data with malicious code.
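
As a hypothetical example of what a poisoned or carelessly trained assistant might surface (not a snippet from Kim's study), consider a suggestion to hash passwords with unsalted MD5. It looks plausible and runs without error, but it is trivially crackable; a salted, deliberately slow key-derivation function from the standard library is the safer pattern.

    import hashlib
    import os

    # The kind of insecure suggestion a poisoned assistant might offer:
    # unsalted MD5 is fast to brute-force and long considered broken.
    def store_password_insecure(password: str) -> str:
        return hashlib.md5(password.encode()).hexdigest()

    # A safer pattern: a random salt plus a slow key-derivation function.
    def store_password_safer(password: str) -> str:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt.hex() + ":" + digest.hex()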

“If the suggested insecure code is blindly accepted, it can be inadvertently integrated into the final software products, resulting in vulnerabilities that attackers can exploit,” Kim said. “Such attacks in the software supply chain could pose a significant threat to national security.”

Still, no one has yet determined how common poisoning attacks might be—or what impact they might have on developers who rely on AI-powered coding assistant tools like ChatGPT.

Doowon Kim

To investigate the scope and severity of the threat, Kim’s team conducted a survey of software developers and computer science students. They discovered that participants reported widely using AI assistants to speed up their coding, eliminate repetitive code, and quickly generate boilerplate code, but they underestimated the risk of poisoning attacks.

Next, Kim’s team invited 30 professional software developers to complete programming tasks using one of two AI-powered coding tools (one of which was secretly poisoned) or without such assistance. Developers using the poisoned tool were the most likely to include insecure code in their final submissions.

“Our study results highlight the need for education and improved coding practices at universities across the nation to address the new security issues introduced by AI-powered coding assistant tools,” Kim said.

To fill that need, Kim is developing a cautionary assignment for high school and undergraduate computer science students. In the proposed lesson, funded by a grant from the National Science Foundation, students will be encouraged to use a secretly poisoned LLM to complete a programming assignment.

“If the students simply accept the generated insecure code, their answers to the programming tasks will have vulnerabilities,” Kim said. “This hands-on exercise will let them safely experience and understand the security risks that arise from indiscriminately adopting AI-generated code snippets.”
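
The course materials are still in development, but a classic vulnerability of the kind such an exercise could surface is SQL injection, which creeps in when generated code builds queries by splicing user input into SQL strings. The example below is a hypothetical illustration, not an excerpt from the planned assignment.

    import sqlite3

    def find_user_vulnerable(conn: sqlite3.Connection, username: str):
        # Building the query by string concatenation lets a crafted username
        # such as "x' OR '1'='1" rewrite the query itself (SQL injection).
        query = "SELECT id, name FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # A parameterized query keeps user input as data, never as SQL.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

    print(find_user_vulnerable(conn, "x' OR '1'='1"))  # returns every row
    print(find_user_safe(conn, "x' OR '1'='1"))        # returns no rows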

Contact

Izzie Gall (865-974-7203, egall4@utk.edu)