Searching for scenes in movies is a time-consuming but crucial task for film studies scholars, film professionals, and new media artists. In pilot interviews we found that such users search for a wide variety of clips—e.g., actions, props, dialogue phrases, character performances, locations—and that they return to particular scenes they have seen in the past. Today, these users find relevant clips by watching the entire movie, scrubbing the video timeline, or navigating via DVD chapter menus. Increasingly, users can also index films through transcripts—however, dialogue often lacks visual context, character names, and high-level event descriptions. We introduce SceneSkim, a tool for searching and browsing movies using synchronized captions, scripts, and plot summaries. Our interface integrates information from these sources to allow expressive search at several levels of granularity: captions provide access to accurate dialogue, scripts describe shot-by-shot actions and settings, and plot summaries contain high-level event descriptions. We propose new algorithms for finding word-level caption-to-script alignments, parsing text scripts, and aligning plot summaries to scripts. Film studies graduate students evaluating SceneSkim expressed enthusiasm about the usability of the proposed system for their research and teaching.
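To make the notion of word-level caption-to-script alignment concrete, the sketch below matches caption words against script dialogue words using generic longest-common-subsequence matching from Python's standard library. This is only an illustration of the alignment task, not the paper's algorithm; the caption and script text is invented.

```python
# Minimal sketch of word-level caption-to-script alignment.
# Assumption: generic subsequence matching (difflib) stands in for the
# paper's alignment algorithm; the example text below is invented.
from difflib import SequenceMatcher

def align_words(caption_words, script_words):
    """Return (caption_index, script_index) pairs for matched words."""
    matcher = SequenceMatcher(a=caption_words, b=script_words, autojunk=False)
    pairs = []
    for block in matcher.get_matching_blocks():
        # Each block matches a run of identical words in both sequences.
        for k in range(block.size):
            pairs.append((block.a + k, block.b + k))
    return pairs

captions = "i will be back soon".split()
script = "voice over i will be right back".split()
pairs = align_words(captions, script)
# pairs maps caption word positions to script word positions,
# skipping words that appear in only one source.
```

In practice such an alignment lets a click on a caption word jump to the corresponding line of the script, even when the script contains stage directions or wording that never appears in the captions.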