After graduating from MIT with a Bachelor of Science, TPP student Alicia Ouyang ’19, SM ’26 worked as a data scientist for four years before pivoting to study the intersection of social science and AI/cybersecurity.
What is the focus of your research? What sort of knowledge and disciplines does it bring together? How will it make an impact?
The focus of my research is evaluating, using synthetic U.S. Census data, how the privacy algorithms in the U.S. Census Bureau’s Disclosure Avoidance System (DAS) affect public policy and government funding outcomes. Arguments for privacy often involve two concerns: the first is the moral right citizens have to privacy, and the second is security from malicious attackers who want to be able to re-identify individuals. Since the U.S. Census is used for many public funding allocation decisions and for apportioning Congressional representation among the states, social scientists and policymakers often treat it as ground truth, when in fact there is a known tradeoff between privacy and accuracy. Because the accuracy impact of privacy algorithms is not uniform across states with different population characteristics, programs and policy decisions that assume accurate population counts could become ineffective or misinformed. The goal is to reveal some of these inaccuracy patterns to inform better future DAS design decisions.
This summer you interned with the Maryland Department of Information Technology. Who did you work with and what did you do?
I worked on the AI Enablement Team led by Nishant Shah under Solomon Abiola. There were two main deliverables I had for the summer:
- AI Governance Cards. Maryland’s Artificial Intelligence Governance Act of 2024 and Artificial Intelligence Executive Order allow Maryland executive agencies to use AI on the conditions that their use of AI is governed by certain ethical values and that they keep an inventory of the tools they use. However, the wording is purposefully vague, which often leaves government employees unsure what the best practices are and what is prohibited. Governance cards focused on specific AI use cases help lower this barrier. I was tasked with reworking and expanding existing drafts of governance cards on AI-Powered Transcription and AI Coding Assistants.
- Advising AI Work. Since the AI Enablement Team is responsible for Maryland’s AI Inventory and for ensuring other agencies comply with Maryland laws regulating artificial intelligence, they review AI tool intake tickets. To get a taste of what it’s like to be on the team, I drafted replies to a few of these tickets. The team is also creating an AI Productivity Guide, essentially a manual for AI tools with best practices outlined by Maryland regulation. I was already running experiments on AI-assisted transcription with Google Gemini and Microsoft Copilot because I needed to know what AI-powered tools were capable of before writing guidance in the AI Governance Cards, so I incorporated screenshots and results from those experiments into the productivity guide.
How did the experience connect to your current research and future plans?
I was surprised by how much the work on the AI Enablement Team related to my coursework and research. One of the topics covered in 6.8510, Multimodal Intelligent Interfaces, was transcription, so it was fun to test known problems in AI-powered transcription that we studied in class and translate the results into policy guidance. On the other hand, giving guidance on AI coding assistants was hard because many best practices for those tools are general best practices for software engineers. Most state employees do not have a software engineering background, so there are no organization-wide standard quality checks like code reviews and set permissions. It was a challenge to take the viewpoint of someone with no engineering exposure and write technology public policy for that audience. I had to remind myself that I have context that they do not, and therefore had to define terms that are specific to the software industry. Finally, I got to connect with my research when I wrote a memo recommending what the team’s digital governance card on synthetic data for Maryland AI uses should contain. My recommendations covered Maryland’s historical use of synthetic data, ethical concerns around synthetic data and ways to address them, and the risk of degraded model performance when training on synthetic data.
This experience has affirmed for me that I would love to work in public interest technology, on a team or in an organization that cares about societal impact and ethics. Shortly after graduation, I will be going to Kenya for eleven weeks, supported by MISTI Africa, where I will work for Brian Omwenga SM ’09, a TPP alum, at his organization THiNK, which focuses on community-driven technology.