Strahinja Janjusevic is pursuing a master's degree in the Technology and Policy Program (TPP), conducting research with the Sea Grant team and the MIT Laboratory for Information and Decision Systems (LIDS) on enhancing the cybersecurity of critical infrastructure through artificial intelligence (AI). He aims to develop AI solutions that improve the defense of cyber-physical systems, and to bridge the gap between AI-based cybersecurity tools and the practical policy frameworks needed for their safe deployment.
What is the focus of your research? What sort of knowledge and disciplines does it bring together? How will it make an impact?
My research focuses on securing maritime Cyber-Physical Systems (CPS) against sophisticated spoofing of Position, Navigation, and Timing (PNT) data, such as GPS and Automatic Identification System (AIS) signals. My work brings together multiple disciplines, including AI and deep learning for anomaly detection; control theory and physics to create vessel dynamics models that can distinguish between legitimate maneuvers and physically impossible, spoofed trajectories; and cybersecurity to model advanced threat vectors.
The intended impact is a hybrid AI framework that is not only more accurate but also more trustworthy to human operators, helping to prevent attacks on vessels and to deter the misuse of GPS/AIS spoofing. Another part of this research explores using Large Language Models (LLMs) to translate and aggregate complex technical alerts into clear, human-readable explanations, ultimately providing the policy analysis needed for the safe and trusted adoption of advanced AI in safety-critical maritime operations.
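To give a flavor of the vessel-dynamics idea described above, here is a minimal sketch of a kinematic plausibility check: if two consecutive position reports imply a speed beyond what the vessel can physically achieve, the second report is flagged as a candidate spoof. The function names, the track data, and the 15 m/s speed cap are illustrative assumptions, not part of the actual research framework.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def flag_impossible_jumps(track, max_speed_mps=15.0):
    """Return indices of position reports whose implied speed exceeds the
    vessel's maximum feasible speed (a simple kinematic sanity check).

    track: time-ordered list of (timestamp_s, lat_deg, lon_deg) tuples."""
    flagged = []
    for i in range(1, len(track)):
        t0, lat0, lon0 = track[i - 1]
        t1, lat1, lon1 = track[i]
        dt = t1 - t0
        if dt <= 0:
            flagged.append(i)  # non-monotonic timestamps are also suspect
            continue
        speed = haversine_m(lat0, lon0, lat1, lon1) / dt
        if speed > max_speed_mps:
            flagged.append(i)
    return flagged

# A plausible track, then a spoofed report that "teleports" the vessel ~100 km.
track = [
    (0,   42.3601, -71.0589),
    (60,  42.3650, -71.0600),   # ~0.5 km in 60 s -> ~9 m/s, plausible
    (120, 43.2000, -70.4000),   # ~100 km in 60 s -> physically impossible
]
print(flag_impossible_jumps(track))  # -> [2]
```

A real system would of course use a richer dynamics model (heading, acceleration, sea state) rather than a single speed threshold, but the principle is the same: spoofed trajectories often violate physics that legitimate maneuvers cannot.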
This summer you interned with Vectra AI. Who did you work with and what did you do?
This summer, I interned with the AI research team at Vectra AI, a leader in AI-driven threat detection and response. My project focused on how AI agents can be used for offensive security operations (AI hacking). I significantly extended an agentic AI framework by designing and implementing an adaptive planner, a hybrid LLM-based tool installer, and Model Context Protocol (MCP) support. The most innovative part of the work was a proof-of-concept demonstrating how the MCP architecture itself can be repurposed as a covert Command and Control (C2) channel. The research goal was to show how malicious commands and data exfiltration can be hidden within what appears to be legitimate, encrypted AI model communication, using trusted cloud services as the C2 infrastructure to evade traditional network defenses.
How does the internship connect to your current research and future plans?
My internship at Vectra AI was the perfect complement to my thesis research because it provided direct, practical insight into how AI is used at scale for real-world network anomaly detection. At Vectra AI, I observed how their data science teams build sophisticated AI models to analyze massive streams of time-series network data, establish a baseline of normal behavior, and then pinpoint the subtle deviations that signal an active threat. This methodology is directly transferable to my own research, where I am applying the same principles to detect anomalies in National Marine Electronics Association (NMEA) data streams to identify GPS spoofing attacks. This experience has been invaluable for informing the technical design and validation of my own detection models, grounding my academic work in industry-leading practices.
The C2 research, in particular, highlights the dual-use nature of emerging technologies and reinforces the importance of the policy component of my thesis. It underscores the urgent need for governance frameworks to address how legitimate infrastructure can be abused for malicious purposes, a central challenge in securing our critical systems.