Jason Brogan

— Hello. I'm a creative living in Brooklyn.

I wear many hats: artist, educator, designer, director, engineer, producer, strategist, technologist. In short, I work with emerging technologies across multiple disciplines and modes of presentation.



I hold an MFA in Design and Technology from Parsons School of Design, an MA in Music from Wesleyan University, and a Graduate Certificate in "Media between Data and Experience" from Cornell University. I was an original member of both the Parsons Telepresence Lab and the Designed Realities Lab (led by Dunne and Raby). I've conducted UX research and development in collaboration with companies such as Bell Labs, Ford, IBM Watson, Microsoft, OpenBCI, and Vonage.

I have been Visiting Artist at Florida State University, Harvard University, and Oberlin College, among others, and my work has been presented in the United States and abroad, at the California Institute of the Arts, Eyebeam, Goethe-Institut Amsterdam, Instants Chavirés, ISSUE Project Room, MASS MoCA, and Roulette; and in publications such as Experimental Music Since 1970 (Bloomsbury, 2016) and Word Events: Perspectives on Verbal Notation (Bloomsbury, 2012).

At Parsons, I am Studio Director of Algorave. I’m the founder and director of The Big Ship, a post-planetary creative studio, and Belldog, an emerging media production company. I also compose, perform, and produce music.

CV and portfolio available upon request.





Collider


Collider makes a speculative proposal about the future of musical experience and, more broadly, about design as a generic science. In its simulation component, Base Camp Alpha, elements of Detroit techno are generated by an artificial intelligence and explored by users within an immersive VR environment.

Collaborator: Jung Seung-ho
Role: concept and ideation, design, development, production
Toolkit: Google Cardboard, HTC Vive, LibRosa, Logic Pro X, Magenta, Swift (iOS), Unity 3D




Drift


Drift is a smart guide for urban discovery. It uses AI-driven technology to reimagine the female experience of exploring the city on foot, through conversational interaction and a touch-activated wireless earpiece. Upon activation, Drift algorithmically generates personalized walking paths, cross-referencing multiple sets of data and drawing upon existing services that provide personality insights to create unique experiences.

Collaborator: Mina Rafiee
Role: concept and ideation, design, production
Toolkit: Adobe Creative Cloud, IBM Watson Conversation, IBM Watson Personality Insights, Sketch


Parsons Telepresence Lab


In collaboration with NYC Media Lab and Vonage, the Parsons Telepresence Lab explored and prototyped new concepts and user-centered designs for AI-assisted, multi-modal, video conferencing and remote collaboration.

Collaborators: David Carroll (Principal Investigator), Joanna Chin, Soomi Lee
Role: concept and ideation, design, production
Toolkit: Final Cut Pro X, JavaScript, Node.js, Sketch, Swift (iOS)




See


See is a video-sharing service for iOS (and watchOS) that offers a quick, panoramic look around a location. Site-specific, panoramic video is uploaded and shared within a community of users. Content is accessed by searching near one's current location or through notifications about the users one follows, and users publish content to a given location's stream.



Collaborator: Raha Ghassemi
Role: concept and ideation, design
Toolkit: Marvel, Sketch


Music for Solo Performer


Music for Solo Performer (1965) by composer Alvin Lucier, generally known as the “brainwave piece,” has become a major work within the experimental music canon. The piece, in which the performer’s amplified alpha waves excite percussion instruments, has achieved further recognition within the growing discourse on artistic sonification practice and the development of brain-computer interface (BCI) technologies. This project migrates the technology employed in performances of the piece. It suggests a working model for the preservation of canonical artworks that rely upon technologies guaranteed to become outdated and impractical for future use; and, importantly, it secures future performances of the work, making it available to future generations.

Role: design and development
Version 1: Max, NeuroSky MindWave Mobile headset, openFrameworks, OSC
Version 2: Make Noise 0-Coast synthesizer, Node.js, OpenBCI headset, Tone.js
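As a rough illustration of the Version 2 signal path (the function names, window sizes, and threshold below are my assumptions, not the project's actual code), a minimal sketch of estimating alpha-band power from a window of EEG samples and deciding whether to trigger a percussion event:

```javascript
// Sketch: estimate alpha-band (8–12 Hz) power from a window of EEG
// samples using the Goertzel algorithm, then threshold it to decide
// whether the performer's alpha activity should trigger percussion.
// In a real setup this decision would drive Tone.js, or a gate signal
// to the 0-Coast synthesizer. All names and thresholds are illustrative.

// Goertzel power estimate at a single target frequency.
function goertzelPower(samples, sampleRate, freq) {
  const k = Math.round((samples.length * freq) / sampleRate);
  const omega = (2 * Math.PI * k) / samples.length;
  const coeff = 2 * Math.cos(omega);
  let s1 = 0, s2 = 0;
  for (const x of samples) {
    const s0 = x + coeff * s1 - s2;
    s2 = s1;
    s1 = s0;
  }
  return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

// Average power across the alpha band (8–12 Hz).
function alphaPower(samples, sampleRate) {
  const freqs = [8, 9, 10, 11, 12];
  const total = freqs.reduce(
    (sum, f) => sum + goertzelPower(samples, sampleRate, f), 0);
  return total / freqs.length;
}

// Fire when alpha power crosses a tunable threshold.
function shouldTrigger(samples, sampleRate, threshold) {
  return alphaPower(samples, sampleRate) > threshold;
}

// Example: a synthetic 10 Hz "alpha" sine versus a quiet signal,
// at a 250 Hz sample rate (typical for OpenBCI hardware).
const rate = 250;
const alpha = Array.from({ length: 250 }, (_, i) =>
  Math.sin((2 * Math.PI * 10 * i) / rate));
const flat = new Array(250).fill(0);
console.log(shouldTrigger(alpha, rate, 100)); // → true
console.log(shouldTrigger(flat, rate, 100));  // → false
```

The Goertzel algorithm is used here only because it evaluates a handful of frequencies cheaply; a full FFT or a bandpass filter would serve the same role.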




Barrierland


A nation at the forefront of technologically driven innovation and automated governance, Barrierland is cooperatively owned and operated by its 650 human inhabitants. Its combination of cutting-edge data science and artificial intelligence (known as BrAIn) affords its citizens a unique post-work lifestyle both centered on and driven by a leisure activity: Data Fishing. Following daily tidal cycles, Barrierlanders traverse the Tidelander environment to position and retrieve data-fishing devices that employ various sensor and networking technologies. In addition, conoidal data monuments are constructed to serve as physical archives of life in The Tidelands.

N.B. Barrierland was created as part of the Designed Realities Lab. It co-exists with the micronation The Tidelands.

Keywords: AI, automation, data, leisure, micronation, speculative
Collaborators: Jasmine Oh, Magnus Pind, Mina Rafiee
Role: concept and ideation, design, art direction, music production
Toolkit: Adobe Creative Cloud, Logic Pro X, Sketch


Module for Geotraumatic Synthesis


Module for Geotraumatic Synthesis engages participants with the material conditions, environmental or ambient, that ground aesthetic experience. Sensors generate musical information, which is distributed to performers via a customized mobile app and used to guide performance.

Role: concept and ideation, design, development
Version 1: Arduino, AudioKit, OpenWeatherMap API, Sketch, SuperCollider, Swift
Version 2: Arduino, Node.js, OpenWeatherMap API, Raspberry Pi, Swift, Tone.js
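One way to picture the sensor-to-music mapping in Version 2 (the ranges, scale, and parameter choices below are illustrative assumptions, not the project's documented mapping): a sketch translating OpenWeatherMap-style readings into note, tempo, and density values for performers.

```javascript
// Sketch: map environmental readings (shaped like OpenWeatherMap's
// current-weather JSON, e.g. main.temp, main.pressure, wind.speed)
// onto musical information. Ranges and the pentatonic scale are
// illustrative assumptions.

// Linearly rescale a value from one range to another, clamped.
function scale(value, inMin, inMax, outMin, outMax) {
  const t = Math.min(1, Math.max(0, (value - inMin) / (inMax - inMin)));
  return outMin + t * (outMax - outMin);
}

// Quantize a MIDI note number to the nearest pitch in a minor
// pentatonic scale rooted at C.
function quantizeToScale(midi) {
  const degrees = [0, 3, 5, 7, 10];
  const octave = Math.floor(midi / 12);
  const pc = midi % 12;
  const nearest = degrees.reduce((best, d) =>
    Math.abs(d - pc) < Math.abs(best - pc) ? d : best);
  return octave * 12 + nearest;
}

// Derive performance guidance from one weather reading:
// temperature -> pitch, wind speed -> tempo, pressure -> event density.
function weatherToMusic(reading) {
  const midi = Math.round(scale(reading.main.temp, -10, 35, 36, 84));
  return {
    note: quantizeToScale(midi),
    tempo: Math.round(scale(reading.wind.speed, 0, 20, 50, 160)),
    density: scale(reading.main.pressure, 980, 1040, 0, 1),
  };
}

const reading = {
  main: { temp: 21, pressure: 1012 },
  wind: { speed: 4 },
};
console.log(weatherToMusic(reading));
```

Quantizing to a fixed scale keeps simultaneous performers, each receiving different sensor data, in a shared harmonic space.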




Fog.ai


Created through a collaboration among designers from IBM Watson, Microsoft, and Parsons School of Design, Fog.ai is an augmented, cognitive user interface that externalizes the internal mechanisms of mixed cognition arising from the interplay between human and machine intelligences.

Collaborators: Jack Wilkinson, Jiyeon Kang, Jungsoo Park, Bell Labs, IBM Watson, Microsoft
Role: concept and ideation, design
Toolkit: Adobe Creative Cloud, fog machine, projection mapping


Say hello.