A generative music patch representing the history of the Earth and humanity.
The aim is to design a generative system that represents the history of the Earth and humanity, beginning with the Big Bang and ending at the present. I used a logarithmic timeline to place eco-spherical and anthropological events, so the piece encapsulates almost 14 billion years.
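As a rough illustration of how such a logarithmic timeline could work (the function, constants, and event list below are my own assumptions, not the patch's actual mapping), "years before present" can be compressed onto a 0..1 axis so that the first billions of years shrink and recent history expands:

```python
import math

# Hypothetical sketch: map "years before present" onto a 0..1 logarithmic
# timeline, so deep time is compressed and recent history is expanded.
BIG_BANG_YEARS_AGO = 13.8e9  # approximate age of the universe

def log_position(years_ago, floor=1.0):
    """Return a 0..1 position: 0.0 = Big Bang, 1.0 = present.

    `floor` clamps very recent events so log() stays finite.
    """
    years_ago = max(years_ago, floor)
    span = math.log(BIG_BANG_YEARS_AGO) - math.log(floor)
    return (math.log(BIG_BANG_YEARS_AGO) - math.log(years_ago)) / span

# Example events (approximate dates, for illustration only)
events = {
    "Big Bang": 13.8e9,
    "Earth forms": 4.5e9,
    "First life": 3.7e9,
    "Dinosaur extinction": 66e6,
    "Homo sapiens": 3e5,
    "Industrial revolution": 250,
}
for name, years_ago in events.items():
    print(f"{name:>22}: {log_position(years_ago):.3f}")
```

On such an axis the Big Bang sits at 0.0 and an event 250 years ago already lands near 0.76, which is what lets a 14-billion-year span fit into a playable timeline.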
This is a stochastic system that encodes observed or learned information about generation, evolution, development, and modification throughout history. A scaled logarithmic timeline is projected in real time, and that information is integrated as transitions or themes. This is not a deterministic attempt; the efficacy of every implementation inside the system matters. Feedback affects the self-regulation and self-evolution of this cybernetic environment. Each time it is started, audiences observe another possible interaction of known events as a sonic representation.
To design a model of a self-creative system, I am drawing on some principles:
- The second law of thermodynamics points to the probability of events: entropy never decreases in an isolated system. The disorder of a system tends to increase as it approaches thermodynamic equilibrium, so to stay ordered, everything must dissipate energy into its environment. Through the interaction of organisms, the system evolves and life maintains life.
- According to second-order cybernetics, observing a system changes both the observer and the system itself. I am therefore planning to take input (possibly a sound recording) from the environment that interacts with the system; this will also affect the reproduction of the overall output.
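The second-order-cybernetics principle could be sketched like this (the theme names, weights, and update rule are entirely my own assumptions for illustration; the patch's actual logic is not specified here): an environmental input level is folded back into the weights of a stochastic theme chooser, so the act of observing/interacting changes the system's future output.

```python
import random

# Hypothetical feedback sketch: an "environment level" (e.g. a microphone
# amplitude) reinforces the weight of whichever theme was just chosen,
# so observation alters the system that is being observed.
random.seed(1)

themes = ["cosmic", "geological", "biological", "anthropological"]
weights = [1.0, 1.0, 1.0, 1.0]

def choose_theme(env_level):
    """Pick a theme; louder environments reinforce the chosen theme's weight."""
    theme = random.choices(themes, weights=weights)[0]
    i = themes.index(theme)
    weights[i] += env_level          # feedback: the observation alters the system
    total = sum(weights)
    for j in range(len(weights)):    # renormalize so weights stay bounded
        weights[j] = 4.0 * weights[j] / total
    return theme

history = [choose_theme(env_level=0.5) for _ in range(10)]
print(history)
```

The renormalization step keeps the feedback loop from running away, which is one simple way to get self-regulation out of self-reinforcement.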
I see this approach as an experiment whose output may trigger new ideas and perspectives about how I comprehend the Earth and everything related to it.
Granular Synthesizer – Max Patch
The main aim was to create a granular synthesizer that instantly uses samples from another patch. Two synthesizers are implemented in this patch: an additive-based one (Hybrid) and a granular one (Nebula).
Hybrid has modulation (FM/AM) and filter features. It also contains a drunk arranger, a randomized trigger. It is also possible to play it yourself with the QWERTY keys; use the Z and X keys to change the octave of the keypad.
- Spacebar opens ezdac~.
- Number key 3 starts and stops recording into a 4-second waveform~ buffer. The sample is continuously renewed while recording is on.
In addition, it is possible to change Hybrid's sound character by adjusting the poly~ patcher's values from inside.
You can record a sample without starting Nebula.
I added a preset object for a quick start. If you click the 1st preset, you will hear granular synthesis while the sample is being recorded (do not forget to press the Spacebar). You will not hear Hybrid because its gain is zero in this preset, but you can mix the two by raising Hybrid's gain.
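Nebula's granular engine lives inside the Max patch, but the underlying technique can be illustrated outside it. The sketch below is not the patch's actual DSP; it is a minimal, assumption-laden demonstration of granular synthesis: short windowed grains are read from random positions in a recorded buffer and overlap-added into an output "cloud".

```python
import math
import random

random.seed(0)
SR = 44100  # sample rate in Hz

# Fake "recorded sample": 1 second of a 220 Hz sine, standing in for the
# waveform~ buffer the patch records while key 3 is held.
sample = [math.sin(2 * math.pi * 220 * n / SR) for n in range(SR)]

def granulate(buf, out_len, grain_len=2205, density=50):
    """Overlap-add random grains from `buf` into an output buffer.

    grain_len: grain size in samples (~50 ms at 44.1 kHz)
    density:   grains per second of output
    """
    out = [0.0] * out_len
    n_grains = int(density * out_len / SR)
    for _ in range(n_grains):
        src = random.randrange(0, len(buf) - grain_len)  # random read position
        dst = random.randrange(0, out_len - grain_len)   # random write position
        for i in range(grain_len):
            # Hann window keeps each grain click-free at its edges
            env = 0.5 - 0.5 * math.cos(2 * math.pi * i / grain_len)
            out[dst + i] += env * buf[src + i]
    return out

cloud = granulate(sample, out_len=SR)
print(len(cloud))
```

Because the source buffer is re-recorded while grains are being read, the texture shifts with the live input, which is the effect the patch exploits by granulating the sample as it is captured.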
Sound-Based Horror Game
An installation with a game-like design
This application requires a sound library that is not included in this repository. The audio library is around 1.3 GB, including binaural recordings and sound design files.
- A bicycle (plus a stand to raise the back wheel)
- Arduino Uno
- 2x GY-521 boards (MPU6050 chip with accelerometer and gyroscope, 6 DOF): one for the helmet, one for the handlebar
- A magnetic speed sensor
- A projector to screen visuals
- Max 6.1
- Processing 2.2.1 with the Video and OSC libraries
- Arduino software with the i2cdevlib library for extracting DMP data from the GY-521 board
This is a horror-type game project that aims more at experiencing what it provides than at achieving a goal, as is typical in such works. It comprises a sound design that leads the user while interacting with it, and a visual design that creates a scary atmosphere rather than giving users instructions or routing on the screen.

Interactivity is achieved with two GY-521 breakout boards and a magnetic speed sensor connected through an Arduino Uno. Processing is used as the game-logic design tool, hardware communicator, visualization processor, and data provider for the Max platform. We used the DMP (Digital Motion Processor) in the MPU6050 to extract processed data from the Arduino, thanks to the i2cdevlib Arduino library; this gives us more accurate data about the board's motion than the raw readings.

Max calls sound files and applies 3D audio effects according to the player's position on a 2D map created in Processing, and forwards the actions the user needs to take (users do not see any map information on the screen; voices and sounds orient them). There are safe and unsafe regions. Depending on the player's right or wrong decisions, the tension in the game increases or decreases. Crossing a threshold within range of an unsafe place may trigger a kill condition, determined by Max's stochastic kill-condition code. If that happens, Processing also runs its own kill condition.
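The actual stochastic kill condition lives in Max and the map in Processing; as a hedged illustration of the idea (the region coordinates, radius, and probability formula below are my own assumptions), tension can grow with proximity to the nearest unsafe region, and the kill roll can be a random draw weighted by that tension:

```python
import math
import random

random.seed(7)

# Hypothetical 2D map: centers of unsafe regions (the real map lives in Processing).
unsafe_regions = [(3.0, 4.0), (10.0, 2.0)]
DANGER_RADIUS = 2.0  # inside this range a kill roll becomes possible

def tension(player, regions):
    """0..1 tension from the distance to the nearest unsafe region."""
    d = min(math.dist(player, r) for r in regions)
    return max(0.0, 1.0 - d / DANGER_RADIUS)

def kill_roll(player, regions):
    """Stochastic kill condition: higher tension -> higher kill probability."""
    t = tension(player, regions)
    return t > 0 and random.random() < t * 0.5  # capped at 50% per roll

print(tension((3.0, 4.0), unsafe_regions))    # on top of a region -> 1.0
print(tension((30.0, 30.0), unsafe_regions))  # far away -> 0.0
```

Making the kill a probability rather than a hard boundary means the player is never certain exactly where death begins, which suits the horror framing of orienting the player by sound instead of a visible map.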
Further information can be found at: