UW CSE Capstone Course Videos
This capstone will build projects utilizing computer audio techniques for human interfacing, sound recording and playback, encoding and decoding, synchronization, sound synthesis, recognition, and analysis/resynthesis. Students will work in teams to design, implement, and release a software project utilizing some of these techniques.
We're doing something new this year: a collaborative group project involving everyone in the class. The focus is information extraction, widely believed to be the future of Web search. We will divide into small groups (e.g., two people), each working on a component of an integrated system to "read the Web", augmenting a knowledge base (like Freebase) with entity-attribute-value triples by automatically processing newswire and Web text.
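To make "entity-attribute-value triples" concrete, here is a toy sketch of pattern-based extraction. Real systems that populate a knowledge base use trained statistical extractors; the single hand-written "X's Y is Z" pattern below is an assumption purely for illustration.

```python
import re

# One hypothetical surface pattern: "<Entity>'s <attribute> is <value>".
PATTERN = re.compile(
    r"(?P<entity>[A-Z][\w ]*?)'s (?P<attribute>[\w ]+?) is (?P<value>[\w ]+)"
)

def extract_triples(text):
    """Return (entity, attribute, value) triples matched in the text."""
    return [
        (m.group("entity"), m.group("attribute"), m.group("value"))
        for m in PATTERN.finditer(text)
    ]

triples = extract_triples(
    "Seattle's population is 737015. Amazon's headquarters is Seattle."
)
# -> [('Seattle', 'population', '737015'), ('Amazon', 'headquarters', 'Seattle')]
```

Each small group in the course would own one component of a pipeline like this (entity detection, pattern learning, knowledge-base integration), rather than a single regex.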
Unlike traditional lecture-based CSE courses, this course asks students to work in groups on a single project that parallels the experience of working for a real company or customer. Students work in teams to prototype a substantial project that mixes sensing hardware and software components, designing and implementing software that makes use of RGB-D sensors (e.g., Microsoft Kinect, ASUS Xtion Pro Live).
Students work in substantial teams to design, implement, and release a software project involving multiple areas of the CSE curriculum. Emphasis is placed on the development process itself, rather than on the product. This course tries to show students what it is like to develop for the real world. Students know they've done well in the class when people around the world actually play their game. This capstone shows students that, in creating a game, one has to develop very quickly and, most importantly, study the analytics of exactly how the software is received by the general audience, then rapidly adapt the software toward its greatest level of acceptance.
As smartphones become more capable, with internet connectivity and sensors, there are many new opportunities to use them as tools for people with disabilities. In the Accessibility Capstone course, students worked in teams to create new smartphone applications for blind, low-vision, and deaf people.
This capstone will build projects utilizing computer audio techniques for sound recording and playback, encoding and decoding, synchronization, sound synthesis, recognition, and analysis/resynthesis. Students will work in teams to design, implement, and release a software project utilizing some of these techniques, such as those surveyed in CSE 490S.
Capstone design courses are the hallmark of Computer Science & Engineering. In these classes, teams of students design and implement complex hardware, software, and embedded system projects of their own invention. This allows them to further explore the areas they personally care about.
In this capstone, we'll be working on what is conceptually a single project. We'll be organized as a single team, composed of a loosely federated group of sub-teams. The theme this year focused on making your home available to you everywhere, all the time. One question the course addressed: can we do that? A second question concerned apps: could all apps be downloaded over the web?
This capstone course is the second quarter in a two-quarter-long design and implementation sequence held jointly between CSE and HCDE. In winter quarter, students formed interdisciplinary project groups to scope and design projects for resource-constrained environments. This quarter, students are implementing and evaluating many of those project concepts. The emphasis is on group work leading to the creation of testable realizations and completion of initial evaluations of the software and hardware artifacts produced. Students work in interdisciplinary groups with a faculty or graduate student manager. Groups will document their work in the form of posters, verbal presentations, videos, and written reports.
The overall goal of this capstone is to design and implement a robotic system that can learn new skills from human demonstration. This involves learning to write software for controlling a humanoid robot (the NAO) using a Kinect RGB+depth camera. Working as a team, students tackle the various sub-problems of (1) human motion capture and interpretation from video, (2) control of a humanoid robot, and (3) application of probabilistic reasoning and machine learning algorithms to the problem of learning from human demonstration. Students gain experience in applying machine learning and probabilistic reasoning algorithms to concrete problems in 3D vision and robotics.
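As one small example of the motion-interpretation sub-problem, a demonstrated joint angle can be estimated from three 3D joint positions of the kind an RGB-D skeleton tracker reports. The joint names and coordinates below are illustrative assumptions, not output from any particular SDK or from the course's actual pipeline.

```python
import math

def angle_at(a, b, c):
    """Angle in radians at joint b, formed by segments b->a and b->c
    (e.g., the elbow angle given shoulder, elbow, and wrist positions)."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(u[i] * v[i] for i in range(3))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return math.acos(dot / (norm_u * norm_v))

# A right-angle elbow bend: shoulder, elbow, wrist positions in meters.
elbow = angle_at((0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (0.3, 0.3, 0.0))
# -> pi/2 (90 degrees)
```

Angles like this, tracked over a demonstration, form the trajectories that the learning-from-demonstration algorithms would then model and replay on the robot's own joints.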