but by design. This is the place where all my work, past and present, is documented and exhibited.
a Max/MSP (.mxf file) stereo channel extender (stereoizer)
You can adjust the input source gain and the width offset.
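The patch itself is a Max/MSP collective, but the core idea of a stereoizer can be sketched in a few lines of Python using classic mid/side processing. This is a minimal illustration, not the patch's actual implementation; the `gain` and `width` parameters are my assumed mapping of the "gain" and "width offset" controls mentioned above.

```python
import numpy as np

def stereo_widen(left, right, gain=1.0, width=1.5):
    """Mid/side stereo widening (illustrative sketch, not the Max patch).

    width > 1 widens the image, width < 1 narrows it, width = 0 collapses
    to mono; gain scales the overall output level."""
    mid = (left + right) / 2.0   # shared (center) content
    side = (left - right) / 2.0  # channel difference (stereo) content
    out_l = gain * (mid + width * side)
    out_r = gain * (mid - width * side)
    return out_l, out_r
```

With `width=1.0` the signal passes through unchanged, and with `width=0.0` both output channels carry only the mid signal, which makes the behavior easy to sanity-check.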
audio demo @ https://soundcloud.com/datacode23/stereo-extender
You can adjust: the input source (drop samples or use live input from your machine), the shatter frequency, the blur amount, the margin portion, the offset, and the feedback amount, up to infinite feedback.
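I don't show the patch internals here, so the sketch below is only one plausible reading of a "shatter" effect: chop the input into segments at a given rate, shuffle them, and feed part of each output segment into the next. The function name and parameters (`shatter_freq`, `feedback`) are my assumptions; the blur, margin, and offset controls are omitted.

```python
import numpy as np

def shatter(signal, sr=44100, shatter_freq=8.0, feedback=0.5, seed=0):
    """Illustrative 'shatter' sketch (assumed behavior, not the Max patch).

    Chops `signal` into segments at `shatter_freq` Hz, plays them back in
    random order, and mixes `feedback` times the previous output segment
    into each new one. feedback >= 1.0 accumulates without decaying."""
    rng = np.random.default_rng(seed)
    seg_len = max(1, int(sr / shatter_freq))
    n_segs = len(signal) // seg_len
    segs = [signal[i * seg_len:(i + 1) * seg_len] for i in range(n_segs)]
    order = rng.permutation(n_segs)  # shuffled playback order
    out = np.zeros(n_segs * seg_len)
    prev = np.zeros(seg_len)
    for i, j in enumerate(order):
        cur = segs[j] + feedback * prev  # mix in the previous output segment
        out[i * seg_len:(i + 1) * seg_len] = cur
        prev = cur
    return out
```

With `feedback=0` this is a pure segment shuffle; raising it toward 1 and beyond piles segments on top of each other, which is how a feedback control can run "up to infinite quantity".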
audio demo @ https://soundcloud.com/datacode23/shattered-demo
For my Max/MSP course I started working around the concept of visualising audio, in order to take on the challenge of learning to work with Jitter inside Max. My first step was finding ways to extract different features from audio, which led me to discover a great Max package called “Zsa.descriptors”, offering many different approaches to audio analysis, as well as the CNMAT package, which has some great tools too. I constructed a patch with each of the objects I found interesting, commenting on how many features they extract and in what ways. The first part of the project is done!
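To give a flavor of what these descriptor objects compute, here is a minimal Python sketch of one classic feature, the spectral centroid (a brightness measure, comparable in spirit to the centroid object in Zsa.descriptors). This is my own illustrative version, not the package's implementation.

```python
import numpy as np

def spectral_centroid(frame, sr=44100):
    """Spectral centroid of one audio frame (illustrative sketch).

    The centroid is the magnitude-weighted mean frequency of the
    spectrum: a common 'brightness' descriptor in audio analysis."""
    window = np.hanning(len(frame))          # reduce spectral leakage
    mags = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    if mags.sum() == 0:
        return 0.0                           # silence: no defined centroid
    return float((freqs * mags).sum() / mags.sum())
```

Feeding it a pure sine wave returns a centroid near that sine's frequency, which is a handy sanity check when wiring analysis into a patch.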
After the first part I started going over the different Jitter tutorials. The last Max class brought me up to speed quickly, letting me skip the first ten or so tutorials; most of the early ones deal with working with either a live camera or pre-recorded videos, and I was really interested in generating my own visuals, which meant working in GL. While quickly exploring the tutorials (I'll do them all in time), I found tutorial 37, “Geometry Under the Hood”, and noticed that its example is a great starting point for me to explore. I started playing around with it and got to this point:
Later I extended this simple patch a bit further, and it seems like a great starting point towards what I was looking for.
After playing around with this patch for a while, I came to several conclusions about my project:
1 - Less is more - don't go too deep into areas where you don't feel comfortable.
2 - I want to control it using a controller, changing values in real time while the output is being processed and altered by the different audio features - again, fewer features may be more.
3 - Everything I can control in real time will also have a probabilistic alternative, so this patch can work either as an installation or as a live performance.
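The third conclusion can be sketched as a tiny pattern: a parameter follows the controller when input arrives, and otherwise drifts on its own via a bounded random walk. This is a minimal Python illustration of the idea, not anything from the actual patch; the class name, ranges, and step size are all my assumptions.

```python
import random

class ProbabilisticControl:
    """A parameter with a probabilistic fallback (illustrative sketch).

    update(controller) follows the controller value when one is given
    (clamped to [lo, hi]); with no controller it takes a small random
    step, so the patch keeps evolving unattended as an installation."""

    def __init__(self, value=0.5, step=0.05, lo=0.0, hi=1.0, seed=None):
        self.value, self.step, self.lo, self.hi = value, step, lo, hi
        self.rng = random.Random(seed)

    def update(self, controller=None):
        if controller is not None:
            self.value = min(self.hi, max(self.lo, controller))
        else:
            self.value += self.rng.uniform(-self.step, self.step)
            self.value = min(self.hi, max(self.lo, self.value))
        return self.value
```

In a live performance the controller drives `update()` every frame; in installation mode it is called with no argument and the value wanders within its bounds.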
So far I've made great progress - and had lots of fun getting there.
For the Musashino Art University Art Festival projection mapping, I was in charge of sound effects using Max/MSP, including reading out Twitter usernames in real time with a synthesized voice.
2015.9 - 2015.10