Uğur Akagündüz

Artist / Engineer


Uğur Akagündüz is a sound artist and engineer who holds a master's degree in Sound Engineering and Design from MIAM and a bachelor's degree in Mechanical Engineering, both from Istanbul Technical University.

He is keenly interested in the interdisciplinary territory of music performance and technology. His primary research interests include human-computer interaction, the cognition of multi-disciplinary art forms and interdisciplinary performance practice.

His practice as both a performer/composer and an engineer spans a range of fields, including acoustics, electroacoustics, sound design, sound recording, music production and improvisation.

He has performed in many venues and prominent festivals in Istanbul, often in collaboration with contemporary dancers and sound artists, and is a founding member of the improvisation ensemble powTheNoise.

Uğur is one of the co-founders of the acoustical consulting and design firm AKUSTİKA, and he also collaborates with SAFE Acoustics. He has over 8 years of professional experience, working with architects, structural engineers, mechanical engineers, musicians and local authorities to discuss and agree on acoustic project objectives. He has been involved in many projects, including airport buildings, radio and TV studios, music halls, residential developments, performing arts spaces and offices.

He works with renowned percussionist Okay Temiz as a sound engineer, and he is one of the co-founders and the sound director of the MonoCrew multimedia production studio, which works predominantly on feature film projects and advertisements.

fixed media


Interdisciplinary Improvisation of Contemporary Circus, Dance and Electroacoustic Music Performances.

muglak 2
powTheNoise Improvisation
Noise Collective XVI



A generative music patch which represents the history of the Earth and humanity.

The aim is to design a generative system representing the history of the Earth and humanity, beginning with the Big Bang and ending at the present. I used a logarithmic timeline to place eco-spherical and anthropological events, so the system encapsulates almost 14 billion years.

This is a stochastic system that presents observed or learned information about generation, evolution, development and modification throughout history. The scaled logarithmic timeline is projected onto real time, and I integrated this information as transitions or themes. The attempt is not deterministic: the efficacy of every implementation inside the system matters, and feedback drives the self-regulation and self-evolution of this cybernetic environment. Every time the system starts, audiences observe another possible interaction of known events as a sonic representation.
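The logarithmic projection onto real time can be sketched roughly as follows; the event list, the 13.8-billion-year span and the 20-minute performance length are my own illustrative assumptions, not values taken from the patch.

```python
import math

TOTAL_YEARS = 13.8e9           # assumed age of the universe, in years
PERFORMANCE_SECONDS = 20 * 60  # assumed real-time duration of one run

def years_ago_to_seconds(years_ago):
    """Map 'years before present' onto performance time logarithmically.

    Recent history, which is denser in events, gets proportionally
    more performance time than the early universe.
    """
    # logarithmic fraction of the full span (clipped at 1 year ago)
    frac = math.log10(max(years_ago, 1.0)) / math.log10(TOTAL_YEARS)
    # earliest events are triggered first, most recent events last
    return PERFORMANCE_SECONDS * (1.0 - frac)

# hypothetical eco-spherical / anthropological events (years before present)
events = {
    "Big Bang": 13.8e9,
    "Earth forms": 4.5e9,
    "first life": 3.8e9,
    "dinosaur extinction": 66e6,
    "first humans": 300e3,
    "industrial revolution": 250,
}

for name, t in sorted(events.items(), key=lambda kv: -kv[1]):
    print(f"{name}: triggered at {years_ago_to_seconds(t):.1f} s")
```

The logarithmic scale is what makes the last few thousand years audible at all: on a linear mapping, all of human history would collapse into the final fraction of a second.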

To design a model of a self-creative system, I am drawing on a few principles:

- The second law of thermodynamics indicates the probability of events by ensuring that entropy never decreases in an isolated system. The disorder of the system tends to increase toward thermodynamic equilibrium, so to stay ordered, everything must dissipate energy into its environment. Through the interaction of organisms, the system evolves and life maintains life.

- According to second-order cybernetics, observing a system changes both the observer and the system itself. I therefore plan to take an input (possibly a sound recording) from the environment that interacts with the system; this will also affect the reproduction of the overall output.
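The second-order feedback idea could be sketched as a minimal loop in which an environmental measurement is folded back into the system's own generative parameters; the smoothing factor and the density parameter here are hypothetical, not taken from the patch.

```python
import random

class CyberneticVoice:
    """Toy self-regulating generator: an environmental input
    (e.g. a level measured from a microphone in the room) feeds
    back into the density of the events the system produces."""

    def __init__(self, density=0.5, smoothing=0.9):
        self.density = density      # probability of emitting an event per tick
        self.smoothing = smoothing  # how slowly the system absorbs the input

    def tick(self, env_level):
        # observation changes the system: the measured environment
        # is blended into the generative parameter each tick
        self.density = (self.smoothing * self.density
                        + (1 - self.smoothing) * env_level)
        return random.random() < self.density

voice = CyberneticVoice()
# simulate a slowly rising environmental input (0.0 .. 1.0)
for step in range(100):
    voice.tick(step / 100)
```

Because the audience's own sound becomes part of the input, each run regulates itself differently, which is the self-evolution described above.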

I see this approach as an experiment that may produce an output capable of triggering new ideas and perspectives about how I comprehend the Earth and everything related to it.



Granular Synthesizer – Max Patch

The main aim was to create a granular synthesizer that can instantly use samples from another patch. An additive-based synthesizer (Hybrid) and a granular synthesizer (Nebula) are implemented in this patch.

Hybrid has modulation (FM/AM) and filter features. It also contains a drunk arranger, a randomized trigger. You can play it yourself with the QWERTY keys; use the Z and X keys to change the octave of the keypad.

- Spacebar opens ezdac~

- Number key 3 starts and stops recording into the 4-second waveform~. While recording is on, the sample is continuously renewed.

In addition, it is possible to change Hybrid's sound character by adjusting the poly~ patcher's values from inside.

You can record a sample without starting Nebula.

I added a preset object for a quick start. If you click the first preset, you will hear granular synthesis while the sample is being recorded (do not forget to press Spacebar). You will not hear Hybrid, because its gain is zero in this preset, but you can mix the two by raising Hybrid's gain.
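The drunk arranger behaves like Max's [drunk] object: a bounded random walk in which each new value can move at most a fixed step away from the last one. A rough Python equivalent, with a hypothetical range and step size:

```python
import random

def drunk(maximum, step, start=0):
    """Bounded random walk, analogous to Max's [drunk] object:
    each call moves at most `step` away from the previous value,
    clipped to the range 0 .. maximum-1."""
    value = start
    while True:
        value += random.randint(-step, step)
        value = max(0, min(maximum - 1, value))
        yield value

# e.g. wander over 16 trigger slots, moving at most 2 slots at a time
arranger = drunk(maximum=16, step=2, start=8)
sequence = [next(arranger) for _ in range(32)]
```

The bounded step is what gives the arranger its "wandering" character: triggers drift rather than jump, unlike a uniformly random sequencer.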

github.com/ugurakagunduz/Nebulator.git



Sound Based Horror Game

An installation with a game-like design

This application requires a sound library which is not included in this repository. The audio library is around 1.3 GB, including binaural recordings and sound design files.


- A bicycle (plus a stand to raise the back wheel)
- Arduino Uno
- 2× GY-521 boards (MPU6050 chip with a 6-DOF accelerometer and gyro), one for the helmet and one for the handlebar
- A magnetic speed sensor
- A projector to screen visuals


- Max 6.1
- Processing 2.2.1 with the Video and OSC libraries
- Arduino software with the i2cdevlib library for extracting DMP data from the GY-521 board


This is a horror game project that aims more at experiencing what it provides than at achieving a goal, as is typical in simulations. It comprises a sound design that guides the user through the interaction, and a visual design that creates a scary atmosphere rather than giving on-screen instructions or routing.

Interactivity is achieved with two GY-521 breakout boards and a magnetic speed sensor, read through an Arduino Uno. Processing is used as the game logic design tool, hardware communicator, visualization processor and data provider for the Max platform. We used the DMP (Digital Motion Processor) in the MPU6050 to extract processed data on the Arduino, thanks to the i2cdevlib library; this provides more accurate data than the raw sensor values as the board moves.

Max calls sound files and processes 3D audio effects according to the player's position on a 2D map created in Processing, and forwards the actions the user needs to take (users see no map information on screen; voices and sounds orient them instead). There are safe and unsafe regions, and the tension in the game rises or falls according to the player's right or wrong decisions. Crossing a threshold, depending on how far the player strays into an unsafe region, may trigger a kill condition determined by Max's stochastic kill-condition code; when it fires, Processing also engages its own kill state.
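The safe/unsafe logic with a stochastic kill condition might look like the following sketch; the map layout, threshold and probability ramp are illustrative assumptions, not the actual Max/Processing code.

```python
import random

# hypothetical 2D map: 1 marks an unsafe cell, 0 a safe cell
UNSAFE_MAP = [
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 0, 0],
]

def kill_probability(unsafe_ticks, threshold=5, ramp=0.15):
    """Probability of the kill condition firing: zero until the
    player stays in unsafe territory past the threshold, then
    rising linearly toward certainty."""
    if unsafe_ticks <= threshold:
        return 0.0
    return min(1.0, (unsafe_ticks - threshold) * ramp)

def step(x, y, unsafe_ticks):
    """One game tick: update the unsafe timer for the player's
    cell and roll the stochastic kill condition."""
    if UNSAFE_MAP[y][x]:
        unsafe_ticks += 1
    else:
        unsafe_ticks = 0  # tension resets in a safe region
    killed = random.random() < kill_probability(unsafe_ticks)
    return unsafe_ticks, killed
```

Making the kill stochastic rather than deterministic means lingering in danger is increasingly risky but never predictably fatal, which suits a horror atmosphere better than a fixed timer.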

Further information can be found on:

github.com/ugurakagunduz/psychicCycling.git