Note: the projects listed below are from summer 2020. We will update this website soon with new projects for summer 2021 if grant funding is renewed.
When you apply, we will ask you to rank your top three interests from the research projects listed below. We encourage applicants to explore each mentor’s website to learn more about the individual research activities of each lab.
Privacy-Preserving Generative Deep Neural Networks
Mentor: Steven Wu
We will leverage deep learning methods, including generative adversarial networks and variational autoencoders, to generate privacy-preserving synthetic data. The goal is to generate synthetic data that retains the important statistical properties of the data distribution while preserving the privacy of the individual records in sensitive data sets, such as medical records.
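As a much-simplified illustration of the adversarial training idea, the sketch below fits a two-parameter generator against a logistic-regression discriminator on one-dimensional Gaussian "records" using plain NumPy. Everything here (the data, the tiny architecture, the learning rates) is an invented stand-in; the actual project would use deep generative models combined with formal privacy guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

# Stand-in "sensitive" records: a 1-D Gaussian in place of real data.
real = rng.normal(4.0, 1.25, size=1000)

mu, sigma = 0.0, 1.0   # generator G(z) = mu + sigma * z
w, b = 0.1, 0.0        # discriminator D(x) = sigmoid(w * x + b)
lr, batch = 0.05, 64

for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    xr = rng.choice(real, batch)
    z = rng.normal(size=batch)
    xf = mu + sigma * z
    dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w -= lr * (np.mean((dr - 1) * xr) + np.mean(df * xf))
    b -= lr * (np.mean(dr - 1) + np.mean(df))
    # Generator step (non-saturating loss): make D label fakes as real.
    z = rng.normal(size=batch)
    xf = mu + sigma * z
    gx = -(1 - sigmoid(w * xf + b)) * w   # dLoss/dx_fake
    mu -= lr * np.mean(gx)
    sigma -= lr * np.mean(gx * z)

# Release synthetic samples instead of the real records.
synthetic = mu + sigma * rng.normal(size=1000)
print(round(float(synthetic.mean()), 1))
```

After training, the generator's samples approximate the statistics of the real data, so the synthetic set, rather than the sensitive records themselves, could be shared with analysts.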
Making Intelligent Systems Fair for All
Mentor: Loren Terveen
Machine learning systems are being used to automate and/or support decision-making in many domains. However, there is growing evidence of these systems being trained on biased data, which leads them to make predictions that are unfair to some of the affected people. My collaborators and I have been conducting studies and developing interfaces to help non-technical people explore intelligent algorithms, identify situations when algorithms are wrong, and provide input to help retrain them. I would like to host students who are interested in this topic and would like to help create techniques to make intelligent systems effective and fair for everyone.
Testing of Learning Enabled Systems
Mentor: Sanjai Rayadurgam
Increasingly, machine learning approaches such as deep neural networks are being used in a variety of autonomy applications such as self-driving cars and pilotless air vehicles. Traditional development methods for such systems place a strong emphasis on verification and validation techniques to ensure a high degree of assurance of safety and reliability. However, existing approaches do not easily carry over to systems with machine-learning components such as deep neural networks. In this project we will devise effective ways to test such learning-based systems, with the goal of achieving high assurance that the system will behave reliably and safely. The student will develop code to execute large experiments, collect data, and perform statistical analysis with neural network components that could be used in avionics systems for prototypical flight and ground actions of autonomous air vehicles. Along the way, we will jointly look at how to formulate hypotheses, collect and analyze data, and answer pertinent research questions.
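As one small ingredient of such statistical testing, the hypothetical sketch below estimates a component's failure probability from repeated randomized trials and attaches a distribution-free Hoeffding confidence bound. Here `component_ok` and its 2% failure rate are invented stand-ins for actually executing a neural-network component in a flight or ground simulation.

```python
import math
import random

def component_ok():
    # Stand-in for running a learning-enabled component on one randomly
    # generated scenario; the 2% failure rate is purely hypothetical.
    return random.random() >= 0.02

def estimate_failure_rate(n_trials, delta=0.05, seed=1):
    random.seed(seed)
    failures = sum(0 if component_ok() else 1 for _ in range(n_trials))
    p_hat = failures / n_trials
    # Hoeffding bound: the true failure rate lies within +/- eps of the
    # estimate with probability at least 1 - delta, for any distribution.
    eps = math.sqrt(math.log(2 / delta) / (2 * n_trials))
    return p_hat, eps

p_hat, eps = estimate_failure_rate(10_000)
print(f"failure rate ~ {p_hat:.3f} +/- {eps:.3f} (95% confidence)")
```

The bound shrinks as the square root of the number of trials, which is one reason large automated experiment campaigns are needed to certify very small failure probabilities.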
Virtualizing Trusted Execution Environments
Mentor: Antonia Zhai
A trusted execution environment (TEE) is an isolated execution environment that aims to preserve the security of the information within it. A TEE is intended to guarantee the integrity of the application executing in its environment along with the confidentiality of its assets. Hence, TEEs can offer strong protection for security-sensitive data, as unauthorized accesses to such data are rejected automatically. In general, TEEs are supported through hardware extensions in modern CPUs. For instance, ARM processors equipped with TrustZone can provide system-wide hardware isolation for trusted software. Similarly, Intel processors use secure enclaves enabled by Software Guard Extensions (SGX) to provide TEEs. As more and more applications and services are built on hardware TEEs, it is crucial to virtualize TEEs across different ISAs in order to facilitate the migration or computation offloading of such applications through a secure and transparent virtualization platform, particularly for data centers and cloud/edge computing environments.
Towards Robust Underwater Human-in-the-Loop Robot Autonomy
Mentor: Junaed Sattar
This project investigates methods and algorithms for sensor-driven robust autonomy for underwater robots in a human-in-the-loop setting. This essentially means robots will have autonomous capabilities to engage in a variety of missions, but will also accept human instructions and clarifications alongside sensory data to ensure informative, robust, and safe operation. The suite of problems being addressed includes accurate tracking and following of humans underwater, model-based detection and tracking of arbitrary targets, free-form gesture-based human-robot dialog, and self-diagnostic capabilities to detect and recover from failures. These entail investigations in machine learning (particularly deep neural networks), machine perception, human-robot interaction, and robotic systems development.
Perception and Control for Robotics
Mentor: Volkan Isler
The Robotics Sensor Network Lab at the University of Minnesota conducts research on algorithmic and systems aspects of robotics, combining sensing, communication, and actuation capabilities. To model robotic behavior and decision making, we research how machine learning can teach robots to move in and interact with their environment in a safe and reliable way. We are seeking REU interns to work on robotic learning or design tasks including, but not limited to, developing perception and control algorithms for safe robotic manipulation and navigation. The internship can also include the design and manufacture of a new robotic platform for real-world experiments. REU interns interested in these topics should be comfortable handling data and able to program in Python, C, C++, or a related language. Prior experience with machine learning tools (e.g., Keras, TensorFlow, PyTorch) is not required but is a plus.
Virtual Reality Data Visualization for Art and Science
Mentor: Dan Keefe
Join the Interactive Visualization Lab this summer to help us create immersive Virtual and Augmented Reality (VR and AR) data visualizations. Our interdisciplinary research lab collaborates with artists who bring unique visual insights to the problem of data visualization and with scientists who need help making sense of complex data. Our research brings all these interests together to create stunning interactive data visualization environments in virtual and augmented spaces where scientists can literally "walk inside" their data. We are especially interested in working with students with interdisciplinary backgrounds and who are interested in research that combines computer science skills with skills in the arts and/or other sciences.
Mitigating Cybersickness in Virtual Reality
Mentor: Victoria Interrante
Half or more of all people who have ever used VR technology have at some point experienced cybersickness – feelings of nausea, disorientation and eye strain – and cybersickness is becoming a major obstacle to the wider deployment of VR for socially beneficial purposes in areas such as education, psychotherapy, job training, implicit/unconscious bias reduction training, cultural heritage, manufacturing, design, and more. As cybersickness disproportionately affects women, developing strategies to predict and prevent cybersickness onset or mitigate cybersickness severity is especially important to ensure equal opportunity of access to this important technology. This summer I am seeking 1-2 research assistants to help me address this important problem. Students will develop and test software using Unity/C# and/or Unreal Engine and design and run human subjects experiments using an array of VR technologies.
Investigating Virtual Reality for Implicit Bias Training
Mentor: Victoria Interrante
Institutions are becoming increasingly aware of the extent to which implicit bias – unconscious stereotypes that affect our interpretation of factual information and influence the decisions we make in ways that we are not overtly aware of – can lead to negative outcomes in a very wide range of important areas including hiring, promotion and salary decisions, public safety, and the provision of timely and appropriate medical treatment. Virtual reality technology has tremendous potential as a medium for implicit bias reduction training. Yet there are many open questions about how best to deploy VR for maximum benefit. This summer I am seeking a student to help me define the most important unaddressed research questions in this topic area and outline strategies to tackle those questions. Efforts will include developing and pilot testing potential VR interventions.
Redirected Walking in Virtual Reality
Mentor: Evan Suma Rosenberg
The Illusioneering Lab studies and applies techniques that imperceptibly manipulate the laws of physics to overcome the physical obstacles that normally restrict movement in virtual reality. This approach, known as redirected walking, has stunning potential to fool the senses. Experiments have convinced users they were walking along a straight path while actually traveling in a circle or that they were exploring impossibly large virtual environments within the footprint of a single real-world room. Possible summer projects include prototyping novel infinite walking experiences in virtual reality, conducting a scientific experiment to study human perception of spatial illusions, or contributing to the development of an open-source redirected walking toolkit.
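As a rough illustration of the geometry behind one redirection technique, curvature gain, the sketch below bends each physical step by a small rotation while the virtual path stays straight, so the user physically walks in a circle. The 7.5 m radius is chosen only to keep the example compact; published detection thresholds suggest much gentler curvature (radii over roughly 20 m) to stay imperceptible.

```python
import math

RADIUS = 7.5  # illustrative physical circle radius in meters (see note above)

def physical_step(x, y, heading, step_len=0.1):
    # While the user walks "straight ahead" in VR, each physical step is
    # rotated by a small angle (curvature gain), bending the real path.
    heading += step_len / RADIUS
    x += step_len * math.cos(heading)
    y += step_len * math.sin(heading)
    return x, y, heading

# Walk one full circumference of virtually straight travel:
x = y = heading = 0.0
for _ in range(471):                 # 471 * 0.1 m ~ 2 * pi * 7.5 m
    x, y, heading = physical_step(x, y, heading)
print(round(math.hypot(x, y), 2))    # distance from the physical start point
```

The user experiences tens of meters of straight-line walking while ending up physically near where they started, which is the core trick that makes "infinite" walking possible in a finite room.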
Immersive User Interfaces for Human-Robot Interaction
Mentor: Evan Suma Rosenberg
With recent advances in hardware technology and artificial intelligence, robots are becoming increasingly integrated with human activities, performing a multitude of tasks such as image capture and scientific sample collection. However, current robotic planning and control interfaces are often unintuitive and place a large cognitive burden on those who are not highly trained in their use. This research project investigates the use of immersive technologies, including virtual and augmented reality, to support human interaction with both ground and aerial robots during the completion of task-oriented objectives. Possible summer projects include prototyping novel virtual reality experiences that leverage ground-based robots for haptic feedback and integrating augmented reality technology and aerial quadrotors within a unified motion capture system.
An Epidemiological Model for Analysis of False Information Spread in Social Networks
Mentor: Jaideep Srivastava
We are spending an increasing amount of time on social media, where we interact with others and, in the process, consume and share information. As of 2018, over two-thirds of American adults consumed news through social media, and this trend is on the rise. With major social media platforms like Twitter becoming the ‘go-to places’ for news, where practically all the information is user-generated content whose veracity is not guaranteed, society faces the serious, well-nigh existential threat of false information, popularly called fake news. Fake news spreads both intentionally (i.e., disinformation) and unintentionally (i.e., misinformation). While the former is more dangerous, often being the result of a well-coordinated campaign, both have negative consequences for society. Recently there has been significant interest in developing techniques to handle this problem, with the majority of the focus on determining the veracity of the information and on content analysis. Our proposed approach is complementary to these efforts: we propose a novel fake news detection and control model that analyzes social network structure, inspired by the domain of epidemiology.
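As a toy illustration of the epidemiological analogy, the sketch below runs a discrete SIR-style model in which users move from "susceptible" (not yet exposed to a false story) through "infected" (actively sharing it) to "recovered" (no longer sharing, e.g., after seeing a correction). All parameters are invented; the actual project would study such dynamics on real social network structure rather than a well-mixed population.

```python
def simulate(days=120, n=10_000, beta=0.3, gamma=0.1):
    """Discrete-time SIR dynamics with hypothetical rates:
    beta  - expected shares reaching a susceptible user per sharer per day,
    gamma - fraction of sharers who stop sharing each day."""
    s, i, r = n - 10, 10, 0       # seed the story with 10 initial sharers
    history = []
    for _ in range(days):
        new_inf = beta * s * i / n   # new exposures via the network
        new_rec = gamma * i          # sharers who stop
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = simulate()
peak_day = max(range(len(hist)), key=lambda d: hist[d][1])
print(peak_day, round(hist[-1][2]))
```

Framing the spread this way lets questions about fake news be phrased in epidemiological terms, for example when the sharing peak occurs and how interventions that raise gamma (faster corrections) or lower beta (throttled resharing) change the final reach.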