Video Music Machine
The Video Music Machine takes video clips or live video input and turns them into a musical step sequencer that generates randomized music and visuals.
Produced by: Amit Segall
Check YouTube for a short video:
https://www.youtube.com/watch?v=ndsxA_quX_k
Concept and background research
The concept of my project was to reverse the relationship between sound and moving image: to use video clips to generate music and create a video-music instead of a standard music-video.
As Marshall McLuhan put it, "the 'content' of any medium is always another medium." Fundamental to the understanding of the concept of remediation is that it is not a one-way process; it is not only a matter of new media incorporating the features of old media but also a matter of old media changing according to the challenges posed by new media [1].
By exploring the visual medium, I was trying to extract data that would drive and generate a different medium. I wanted to create music that is not bound to my musical understanding, aesthetics and limitations. I experimented with different types of video clips and explored ways to develop a system that works as an installation but also incorporates live performance features, allowing me to present my work in a different light.
Technical
In order to create my machine, I had to understand how to process the video. I decided to scale the video down, changing the resolution to a 16 by 8 matrix. I converted the low-resolution matrix to grayscale and received values I could process easily. I divided the matrix into 8 different lists of 16 values, representing 8 instruments with 16 steps each. Each list was processed in two ways: the first represented the pitch of the notes by reading the grayscale value, and the second went through a filter that allowed only bright pixels to pass, representing step activity. If a pixel is bright enough, its step plays a note.
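The patch does all of this in Max/MSP and Jitter, but the pre-processing idea can be sketched outside Max as well. Below is a minimal Python/OpenCV illustration of the same steps, assuming a 16x8 grid and an arbitrary brightness threshold; the function name and values are placeholders of mine, not part of the patch.

```python
import cv2

STEPS, INSTRUMENTS = 16, 8          # 16 steps per row, 8 instrument rows
BRIGHTNESS_THRESHOLD = 180          # only pixels brighter than this trigger a step

def frame_to_sequencer(frame):
    """Scale a video frame down to a 16x8 grayscale grid and derive pitches + gates."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    grid = cv2.resize(gray, (STEPS, INSTRUMENTS), interpolation=cv2.INTER_AREA)

    # Each of the 8 rows becomes one instrument lane of 16 steps.
    pitches = [[int(v) for v in row] for row in grid]            # grayscale value -> pitch
    gates = [[int(v > BRIGHTNESS_THRESHOLD) for v in row]        # bright pixel -> step on
             for row in grid]
    return pitches, gates

cap = cv2.VideoCapture("clip.mp4")                               # or 0 for live camera input
ok, frame = cap.read()
if ok:
    pitches, gates = frame_to_sequencer(frame)
    print(pitches[0], gates[0])                                  # first instrument lane
cap.release()
```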
Both processes fed into a live.step object in order to sequence the lists and play the steps accordingly. The next process was playing the notes in an internal audio system I developed. The audio was generated by triggering samples and doing some simple synthesis. I also made it possible to send the information to an external MIDI processor, and created an accompanying Ableton Live set to demonstrate the MIDI features. Each instrument in my sequencer can either play the same note purely rhythmically or play a melody. The melodies in my current version are also processed by a MIDI filter patch (from the EAMIR SDK pack) that confines the notes to a specific scale and key (a rough sketch of this confinement step follows below). For live performance I implemented features like holding or changing the pattern at specific timings, BPM change, and threshold control for the bright-pixel filter.

I also developed a graphical visualizer that converts the data from one type of visual to another. Originally, when I approached the project, I wanted to explore the different features in Jitter and learn how to generate visuals in Max. The addition of another layer on top of my video music machine enhanced the live performance aspect. While I explored the option of re-editing the original content in an algorithmic fashion [2], I decided not to go down that path. I ended up generating random geometrical shapes to enhance the rhythmical complexity that my machine was creating, something that the original video or a re-edit was unable to do.
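Purely as an illustration of the scale confinement idea, here is a small Python sketch of snapping grayscale-derived pitches to a key and scale. The mapping and scale table are my own simplification, not the EAMIR implementation.

```python
# Illustrative only: confine arbitrary pitch values to a key/scale,
# roughly the job the EAMIR MIDI filter does inside the patch.
MAJOR = [0, 2, 4, 5, 7, 9, 11]            # scale degrees in semitones

def quantize(grayscale_value, root=60, scale=MAJOR, note_range=24):
    """Map a 0-255 grayscale value onto a MIDI note confined to the given scale."""
    raw = root + int(grayscale_value / 255 * note_range)   # spread over two octaves
    octave, degree = divmod(raw - root, 12)
    nearest = min(scale, key=lambda d: abs(d - degree))    # snap to nearest scale degree
    return root + octave * 12 + nearest

print([quantize(v) for v in (0, 64, 128, 200, 255)])
```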
Future development
In future developments I would probably enhance and improve the internal audio system, which at the moment is very simple and minimalistic. By improving the audio system I would be able to generate different kinds of soundscapes and maybe explore different aesthetics as well. I would also refine the visual system, making it less responsive, strengthening the audio-visual connection and finding a better visual language to enhance a live performance. I would also love to work on a live performance in which all patterns are generated from a live video feed that I manipulate in real time by interacting with the camera and different lighting settings.
Self evaluation
Audiovisual practice is a diverse subject, and yet I could not find resources that explore this kind of reverse connection between video and music. While my original goal was to explore Jitter and visuals, I shifted away from it looking for a more creative and artistic idea. I did explore the use of generative visuals in the end, but in a very simplistic way and only as a reinforcing tool. Overall, I am very pleased with the outcome, and having the chance to perform with my machine during a student event was a great pleasure.
References
1. Korsgaard, Mathias Bonde. "Music Video after MTV : Audiovisual Studies, New Media, and Popular Music". Routledge Research in Music. 2017.
2. Stefan, Julia, and Andrew R. Brown. "Generative Music Video Composition: Using Automation to Extend Creative Practice." Digital Creativity, 2014, 1-11.
Code I borrowed (also credited in the code):
Sampler player abstraction idea from: https://www.youtube.com/watch?v=jVVbaJBdbuA
MIDI filter patch from the EAMIR SDK pack by V.J. Manzo
Artistic inspirations:
https://www.youtube.com/watch?v=4G4fo5CVSek
https://www.youtube.com/watch?v=PXh8Z1kiJnw
https://www.youtube.com/watch?v=x8c6uQG35rs
https://www.youtube.com/watch?v=Klh9Hw-rJ1M
Free-Form: Create A Musical Hit Structure
Project concept and motivation
The goal of my project was to help musicians with musical arrangements/forms (in this article, forms and musical arrangements are used interchangeably). As a musician, I felt that arranging my musical ideas into a full-length structure is where I tend to get stuck the most, which leaves me with a lot of unfinished songs. I think that having an arrangement as a reference could help me produce music more efficiently and communicate musical ideas better. I envision that every musician could benefit from using this kind of tool. It could easily integrate with any music writing process, allowing easy and simple integration with common music making software. This kind of application would still require me to come up with my own musical ideas, but putting them in the right context would make completing a full song form a simple process.
I explored different academic articles about common songwriting forms [1,2] from a research perspective and, from a practical perspective, online magazines with a more hands-on approach [3,4]. I found an online app that randomized different musical forms [12], but it lacked context. In addition, I found an academic article that examined style imitation in music using statistical modeling [11], but it was irrelevant for my research since it discussed style as opposed to form. I also looked for existing software that can help with song forms, however I could not find one that resembled what I had in mind. Therefore, I understood that the application I wanted to create is relevant and might also help others.
Implementation and Practice
I created an application using Max/MSP where users can enter information about the song they are currently writing (BPM, genre, key, scale) and their dream goals for it (plays, place in the chart, length). Max processes the data and sends it to Wekinator [7]. Using machine learning, Wekinator returns a recommended musical form based on the input and the trained data. The selected musical form is also sent to Ableton Live, which plays an example song according to the selected form and the input information (BPM, genre, key, scale), allowing anyone to explore the suggested form, not necessarily while writing a new song. Check YouTube for a short video: https://www.youtube.com/watch?v=b9c3OBMtCcw&feature=youtu.be
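In my project these messages travel from Max to Wekinator over OSC, but the same inputs could be sent from any OSC-capable environment. A minimal sketch, assuming Wekinator's default listening port (6448) and /wek/inputs address and using the python-osc library; the feature vector and its ordering here are only an example of my seven inputs:

```python
# Sketch: send one set of already-normalized inputs to a running Wekinator project.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6448)      # Wekinator's default input port

# Example vector: BPM, genre, key, scale, desired plays, desired chart place, length,
# each scaled into 0.0-1.0 before sending.
features = [0.52, 0.25, 0.33, 0.0, 0.8, 0.1, 0.45]
client.send_message("/wek/inputs", [float(f) for f in features])
```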
The first part of my project was to understand what kind of data set I would be using to train Wekinator. I decided to use Spotify's Top 200 (April 2018) chart and explore different elements in each song.
The elements I explored were: rank, plays, BPM, genre, length, root note and scale. These musical elements are the core part of each musical piece. By inspecting the form and the commercial success in relation to these core elements, I would be able to gain insight into how to reverse-engineer the process of making a hit song. I wanted to examine the relationship of all elements, core and commercial, with their song form, hoping to formulate what makes a song successful. Some of the data was available from Spotify's website [5] and the rest I extracted using the analysis tools of a DJ software (Traktor DJ Pro [6]).

The next part was assigning a class to each song. After exploring different musical forms, I defined five distinct musical forms that my application would suggest as output classes (see Appendix A). These forms distinguish between different structures of parts in a song, similar to AABA and others [10]. I analyzed the songs by listening, looking for patterns (intro, verse, chorus, etc.) and labeling them according to their similarity to one of the five forms I had defined. In order to simplify the data, I scaled down the numbers and converted each value to a float between 0.0 and 1.0. This process allowed me to generalize the data and helped me make assumptions while testing my application with new data against the trained system.

I tried several different classification algorithms for machine learning, each with different results, and decided to focus mainly on two: AdaBoost.M1 and Naïve Bayes. Approaching the project, I assumed that Naïve Bayes would be the best algorithm for the task; I thought the diversity of the data would make simpler sense when put through a statistical lens. The advantage of using it is the additional statistical distribution. The distribution chart makes exploring the different values very convenient and showed both similarities to other forms and the approximation to the actual class. In general, I felt that having this as a tool in my application would improve the user experience and would also encourage users not to stick to an absolute form but to see what else might fit closely. The problem with this algorithm was its accuracy. When I performed model evaluation, Naïve Bayes had the lowest scores on both training accuracy and cross-validation tests compared to all the other classification algorithms. A training accuracy of less than 40% meant that either I did not have enough data, or this algorithm was not fit for the job. As I predicted, the distribution chart added a lot of value. However, after endless attempts to predict the machine's behaviour and verify that Wekinator had learned what I wanted, I decided to change approach: Naïve Bayes was too unpredictable and underfit for the task. As an alternative, I experimented with SVM as well. Although it made sense to use this algorithm for examining the different probabilities, it felt too indistinct, and I was looking for a more pronounced result.
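The scaling itself is simple per-element min-max normalization into 0.0-1.0; a short sketch of the idea (the sample values are illustrative, not my actual chart data):

```python
import numpy as np

def minmax_scale(column):
    """Scale a list of raw values (BPM, length, plays, ...) into the 0.0-1.0 range."""
    col = np.asarray(column, dtype=float)
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if hi > lo else np.zeros_like(col)

bpms = [78, 96, 102, 128, 140, 174]
print(minmax_scale(bpms))          # 0.0 ... 1.0 across the chart's BPM range
```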
The second algorithm I experimented with, and ended up using, is AdaBoost.M1 with a decision tree as the base classifier. A simple decision tree (J48) on its own could not capture the complexity of the data; the additional iterations of the training rounds in AdaBoost.M1 produced better results. This algorithm breaks the data down into several small and simple decision-tree problems and provided great results from the start. When I performed model evaluation, the training accuracy was between 95% and 98% (depending on the number of training rounds). While this was almost perfect, experimenting with the application and checking the cross-validation data again yielded less than 40%. Fewer training rounds simplified the results, generalizing the data and making it very consistent; more training rounds made the application more dynamic and responsive. Eventually I used a hundred training rounds, and this really highlighted the relationship of musical keys, length and BPM with the associated class. Having more training data would probably blur these relationships and could produce more coherent results.
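The training itself happened inside Wekinator; the comparison below is only an offline sketch of the same evaluation idea using scikit-learn (boosted decision trees versus Naïve Bayes), with random placeholder data standing in for my chart features. Scikit-learn's AdaBoostClassifier is a close relative of AdaBoost.M1 rather than the exact Wekinator implementation.

```python
# Offline sketch: compare Naive Bayes with boosted decision trees, as discussed above.
# X stands in for the normalized song features, y for the form class (0-4); the data
# here is random, so the printed scores are meaningless beyond demonstrating the flow.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 7))                   # 200 songs x 7 scaled features
y = rng.integers(0, 5, 200)                # five form classes

models = {
    "Naive Bayes": GaussianNB(),
    "AdaBoost, 100 rounds": AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=3), n_estimators=100),
}
for name, model in models.items():
    train_acc = model.fit(X, y).score(X, y)
    cv_acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: training accuracy {train_acc:.2f}, cross-validation {cv_acc:.2f}")
```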
Conclusions
Overall, I think my project was successful, but I could easily improve it by providing a bigger amount of data. When I started the project I explored the Top 50 chart, but after a few experiments I understood that the data was too diverse and there was not enough of it. After adding an additional 150 songs to my database the resulting forms started to make sense, however they are still complicated to predict. If I had more time I would try to implement an automated system to detect structures [8,9], which would have been a major time saver; an automated system might detect patterns I was unable to see or unaware of. I am also interested in simplifying the accompanying Ableton Live set to make a simpler template to interact with the Max patch.
Credits and Clarifications
For this project I used data gathered from Spotify and YouTube. All music I listened to and analyzed (on my own or through the Traktor DJ Pro software) is copyrighted to the original owners and is accompanied by a full list, "Data for ML - full chart.CSV", in the project files. For the record, I have not verified the data (BPM/key/genre). The usage of the data is for the purpose of this project only. In my Max patch I implemented two features that I did not write the code for (credited in the code as well):
Reading txt/csv files - https://cycling74.com/forums/importing-from-excel-csv-questions/
Truncating decimal points - https://cycling74.com/forums/progressively-remove-decimal-digits

Instructions for running the project
In order to run my project, just open the Max patch, the Wekinator project and the Ableton Live project. From the Max patch interface you can start Wekinator (since it is already trained) and start experimenting with the sliders and dials to produce a recommended musical form. You can also play the form in Ableton Live. You can change values (BPM, genre, key, scale) while Live is playing, however it is recommended to stop and start again, otherwise the form might be broken. Notice that not all musical keys/genres work well in this musical context, though it is part of the fun to experiment.
References 1. Tough, David T. "An Analysis of Common Songwriting and Production Practices in 2014-2015 Billboard Hot 100 Songs." MEIEA Journal 17, no. 1 (2017): 79-120.
2. Tough, David. "Teaching Modern Production and Songwriting Techniques: What Makes a Hit Song?." MEIEA Journal 13, no. 1 (2013): 97.
3. MusicRadar. "How to write a hit: structure" Accessed May 01, 2012. https://www.musicradar.com/tuition/tech/how-to-write-a-hit-structure-542126
4. MusicTech. "Songwriting Tutorial: part one - introduction to song forms" Accessed Oct 27, 2014 http://www.musictech.net/2014/10/sw-1/
5. https://spotifycharts.com/regional
6. https://www.native-instruments.com/en/products/traktor/dj-software/traktor-pro-2/
7. http://www.wekinator.org/kadenze/#Install_the_Wekinator
8. Maddage, Namunu C. "Automatic structure detection for popular music." Ieee Multimedia 13, no. 1 (2006): 65-77.
9. Serra, Joan, Meinard Müller, Peter Grosche, and Josep Ll Arcos. "Unsupervised music structure annotation by time series structure features and segment similarity." IEEE Transactions on Multimedia 16, no. 5 (2014): 1229-1240.
10. Von Appen, Ralf, and Markus Frei-Hauenschild. "AABA, Refrain, Chorus, Bridge, Prechorus-Song Forms and their Historical Development." Samples. Online-Publikationen der Gesellschaft für Popularmusikforschung (2015).
11. Conklin, Darrell. "Music generation from statistical models." In Proceedings of the AISB 2003 Symposium on Artificial Intelligence and Creativity in the Arts and Sciences, pp. 30-35. London: AISB Society, 2003.
12. Random Song Form Structure Song Writing app http://learnhowtowritesongs.com/random-song-form-structure-songwriting-app/
Appendix A
After examining different musical forms, here are the five distinct forms I used for classification in this project.
Form I: This form is characterized by having a second verse before going to the first chorus - it is not a pre-chorus but a variation on the verse after the first one.
Examples of full arrangements are:
Avicii - Wake Me Up (13 on the chart): Intro - Verse - Verse II - Pre-Chorus - Chorus - Verse - Verse II - Pre-Chorus - Chorus
Camila Cabello - Havana (40 on the chart): Intro - Verse - Verse II - Pre-Chorus - Chorus - C part - Chorus - D part - Chorus
Lil Dicky - Freaky Friday (11 on the chart): Intro - Verse - Verse II - Chorus - Verse - Chorus - Verse - Chorus - Outro
Form II: This form is characterized by having a buildup Pre-chorus before the first Chorus.
Examples of full arrangements are:
Dua Lipa - IDGAF (18 on the chart): Intro - Verse - Pre-Chorus - Chorus - Verse - Pre-Chorus - Chorus - C part - Verse - Pre-Chorus - Chorus - Outro
Dennis Lloyd - Nevermind (22 on the chart): Intro - Verse - Pre-Chorus - Chorus - Verse - Pre-Chorus - Chorus - Chorus
ZAYN - Let Me (42 on the chart): Intro - Verse - Pre-Chorus - Chorus - Verse - Pre-Chorus - Chorus - C part - Pre-Chorus - Chorus
Form III: This form is characterized by shifting from verse to chorus directly.
Examples of full arrangements are:
Ariana Grande - No Tears Left to Cry (2 on the chart): Intro - Verse - Chorus - Intro - Verse - Chorus - Verse - Chorus - C part - Chorus - Outro
Khalid - Love Lies (14 on the chart): Intro - Verse - Chorus - Intro - Verse - Chorus - C part
Drake - Nice for What (1 on the chart): Intro - Verse - Chorus - Verse - Verse II - C part - Chorus - Verse II - Outro
Form IV: This form is characterized by starting from the Chorus.
Examples of full arrangements are:
Daddy Yankee - Dura (34 on the chart): Intro - Chorus - Verse - Chorus - Verse - Verse II - Chorus - C part
Kendrick Lamar, SZA - All the Stars (25 on the chart): Intro - Chorus - Verse - Chorus - Intro - Chorus - Verse II - C part - Outro
Nicky Jam - X (10 on the chart): Intro - Chorus - Verse - Chorus - Verse - Chorus - Verse - Chorus - Outro
Form V: This form represents all the other forms that I could not fit into forms I-IV because they are too complex or unexpected, and I could not detect patterns and similarities between the songs. They often lack an intro, have a double verse or a D part, and have a unique arrangement.
Examples of full arrangements are:
David Guetta, Martin Garrix - Like I Do (30 on the chart): Verse - Pre-Chorus - Chorus - C part - Verse - D part - Pre-Chorus - Chorus - C part
Shawn Mendes - In My Blood (29 on the chart): Verse - Verse - Chorus - Verse - Chorus - Chorus - Verse - Chorus - Chorus
Cardi B - Be Careful (46 on the chart): Intro - Verse - Verse - Pre-Chorus - Chorus - Verse - Verse - Pre-Chorus - Chorus
what we see is what we get
Looking at various examples of "machine seeing" reveals a disturbing image: not only are machines "blind", the neoliberal approach only "covers its eyes". Nowadays, projects like Fei-Fei Li's develop technical tools for what looks on the surface like artificial intelligence, but when these projects are examined with careful eyes, examples like the "three black teenagers" search results highlight the inequalities and problems that exist in these projects, and in media technology in general. While projects like Google's reCAPTCHA engage users to gather and analyse data for machines, they seem to fail on topics of race and gender, and many others.
Machines are not as smart as we would like them to be, and while they reach amazing successes, the problems are being pushed to the side. Exactly these topics were also highlighted when we attended Transmediale's keynote talk by Lisa Nakamura, "Call out, Protest, Speak back", where she used another Google project, "Racial Identity", a VR experience on YouTube that, instead of actually provoking change, only highlights the racial issues of tools such as VR, where you can allegedly be whoever you want but are actually only allowed to be what they want you to be or see. I am glad that I have recently been exposed to these issues, reading this paper by Safiya Umoja Noble and the notes about Afrofuturism 2.0, which paint a somewhat hopeful future in which change is coming and we are closer than ever to invoking the desired change. As for machine seeing, there is still a gap before we will actually see the things that machines should see, and yet, it is never too late.
The eye of the beholder
Reading "Digital Narratives" I could relate to the subject from the start, from several personal perspectives: although I am a complete atheist, I am also a Jew and an Israeli, with everything that stands for. Reading the paper, I could not stop thinking about and comparing the crisis in Pakistan to the suffering of Palestinians, and how they are reflected across the different digital media. The way we engage digital media with new technologies, the emotions it invokes, and how it affects politics is just outstanding. Eyal Weizman's project and Forensic Architecture are such a great example of digital witnessing; it is beautifully simple and yet so meaningful and helpful. Since we keep losing our trust in governments' policies, it seems that a lot of the things we have been told are lies and are not trustworthy anymore. Technology has shifted the power back to the people, allowing them to choose whom to trust and where to source information from. Major news companies rely on "the small person" on the street to actually report on different situations, and this is somewhat good news, because when the people have "the power" there is a real chance for change. Liquid Traces - The Left-to-Die Boat Case is another great example of "outsmarting" policies by using accessible tech and redefining it. All in all, it seems like in the last few years we like to witness, we like to peek at others' lives for an endless number of reasons, and as technologies evolve we are just getting better at it. This week's reading was extremely interesting for me; I feel it was an outstanding example of our relationship with technology, with the right amount of criticism as to how this reflects back on us. The questions keep coming the more I think about these things, and I actually could not help sharing Nishat's paper with friends, since I think it is important to be aware of these things.
The artist as a professional
Beatriz da Costa's paper, "When Art Becomes Science", talks about a topic I can easily relate to. These days, an expert could be anyone, and it is all about how you portray things. The writer uses Wikipedia as a great example of what is now referred to as a reliable source of information, and of how easy and simple this shift of power was in the digital cultural age. I feel this is a great example of how trends emerge online, and how people who recognised the potential of the digital medium became the new experts. This is not to undermine people's experience; these things can always go hand in hand, but personally I feel I became somewhat of a local "musical technical expert" just because I recognised what was missing digitally in my Israeli crowd's domain. During the first week of the term I was introduced to the "impostor syndrome", and I related to that as well - sometimes feeling that my achievements actually have nothing to do with my professionalism - and this article talks about the same issues in a modest and honest way. Sometimes we artists do not feel like we are actually experts, and producing a body of work in a certain domain does not mean you know everything about everything. Maybe being an expert is another path of the posthuman, and it is all just another phase towards the future.
The Executioner
The subject of execution and its relation to computer science, or machinery, again raises questions about our relationship with technology in general. I feel that even people who deal with computers constantly never fully understand the entire process, and it seems like the more people explore these subjects, the more questions we raise. As technologies evolve and processes only get better - execution times are reduced, or "multi-core" execution allows us to do more in the same time frame - it seems we cannot fully explore the entire process; we only examine small parts of it, or speculate on bigger processes in relation to how they affect our lives. With the rise of the "Internet of Things" and ubiquitous computing, execution is being pushed to the side; as a society we seem to care more about the "what", regardless of the "how" or "why" of things. We are immersed in technology and we cannot break down a simple process. Almost every electrical device we use nowadays - cell phones, cars, entertainment devices and what not - is constantly executing things, and we just shut our eyes and say thank you. It seems somewhat ironic when you think about the origin of the word execution and its reference to the medieval ages, when people were being executed.
Digital future and Human machine interaction
From this week's reading, two main thoughts come into play. The first relates to Lucy Suchman's research about our relations with machines. I always find it extremely fascinating to read texts that are 30 years old and still relevant. Suchman questions our attitude towards technologies (machinery especially) and explores how we interact with machines, what types of interaction we can have and what makes machines intelligent. Even today, after intensive artificial intelligence development, I feel we are still unsure as to "where things go" - what does it all mean, and why exactly are we trying to develop machines that will eventually be able to surpass our own understanding? I take things for granted most of the time, and these readings invoke many great questions about our daily usage. I could not stop thinking about the fact that I am learning to code nowadays, which, in relation to my relationship with my computer, basically means I am learning how to understand it better, since it is already smarter than me in many ways. As for the "Digital Futures" project, I think transparency and digital data preservation (of all kinds) are still emerging topics without enough awareness. The fact that when I was a young kid I went to the library and read the encyclopaedia, and now everyone just "googles" for answers or reaches for digitised sources without questioning their ownership, is alarming. Projects like "Digital Futures" make a really educated, elegant use of criticism to highlight these issues, and I really liked it. Note to self: try to work with these topics to create a new, interesting project!
The Status Project
In the 10 years that have passed since "The Status Project" emerged, these issues of privacy and government surveillance policies have only progressed to the "next stage". It seems that we are constantly either being tracked or providing information as a new form of currency. I feel that only in the last year have I become completely aware of these things and of how they affect and intersect with our lives. There seems to be a delicate balance between our attempt to use new technologies to improve our lives and what we are willing to pay for it. Companies like Google are constantly drawing new maps - sometimes literal maps - that are constantly improved by encouraging users to rate, share and outline locations, and sometimes just by sorting data. During the last 10 years Google has changed its privacy policy more than 20 times, keeping up with the changes in government policies - which sometimes merge and sometimes collide with each other. As for the connection with the Terrorism Act, this is something driven by politics and neoliberalism. Coming from Israel, I am well aware of the use of fear to change policies, and personally I do not like it - I think nowadays governments just want to "keep the throne" and are actually committing almost every crime possible to access data and to control using fear. While terrorism is real, in relation to digital technologies it is mostly a tool to control the masses.
Project update - "The experience economy":
Last week really changed my perspective on my research. I was reading about immersive theatre and found great similarities to what I had identified as an immersive experience. Luckily, I got two different books about this subject: while the first tries to emphasise and detail what it means for a creator and explores different productions, the other book explores the same topics with a critical eye. The critical approach opened my eyes to the fact that we are now in the middle of a new era - the "experience economy":
“As businesses sought to adapt to an emerging neoliberal paradigm, economic production had to contend with a new kind of consumer and a new personalized and experiential forms of consumption in an expanding ‘experience economy’ “ Adam Alston .
*photo from original article (link below) 
I never really thought about the simple fact that everything is now a commodity for us to consume - even experiences. It made a lot of sense in relation to my feeling that "immersive experience" is a buzzword, and when you look at the overall picture everything clicks and falls into place.
Thinking about my research from a different perspective, I was still looking for answers: what does it mean? How does this relate to audiovisual practices, and how is everything basically related? Luckily, again, I had the pleasure to interview and chat with Professor Atau Tanaka and Dr. Blanca Regina, who are leading the immersive pipeline project at Goldsmiths. Both were kind enough to share their insight and experience and to talk about immersive experiences. Their project is funded by the AHRC/EPSRC Research and Partnership Development call for the Next Generation of Immersive Experiences; it is incredible to understand that even the UK industrial strategy is interested in these matters. Their initial approach is that immersion through VR is somewhat ironic - since you actually "lose" your body in order to be immersed in 3D space. Their main focus is on audiovisual experiences and where our body is situated: the body is a fundamental vessel for experiencing things, and VR is socially isolating, while digital sound and image can fill an entire space. They are examining immersion from a performer's point of view and its correlation with performance art in recent years. Along the project they are looking for ways to translate and transport audiovisual experiences between different spaces, since at the moment there are no standards or right tools to do such things. I am not sure I got all the answers I was looking for, but many things really made sense, and I saw how all these worlds of performance art, theatre, audiovisual practice and virtual reality collide into one big immersive experience that is being pushed everywhere, sometimes too far towards the experience economy.
As for a clear definition of what an immersive experience is, and what its key features are, Professor Tanaka said: "As soon as you have sound and image together maybe you already starting to have the pre-condition of immersion" - but aside from that, it is up to each person's interpretation.
https://hbr.org/1998/07/welcome-to-the-experience-economy
http://sonics.goldsmithsdigital.com/immersive-pipeline/
Machon, Josephine. Immersive Theatres: Intimacy and Immediacy in Contemporary Performance. Palgrave Higher Ed M.U.A., 2013.
Alston, Adam. Beyond Immersive Theatre : Aesthetics, Politics and Productive Participation. 2016.
My AudioVisual Journey
For my Max/MSP course I started working around the concept of visualising audio, in order to take on the challenge of learning how to work with Jitter inside Max. My first step was finding different ways to extract features from audio, which led me to discover a great Max package called "Zsa.descriptors", offering many different ways of doing audio analysis, and the CNMAT pack, which had some great tools as well. I constructed a patch with each of the objects I found interesting, commenting on how many features they extract and in what ways. First part of the project done! After that I started going over the different Jitter tutorials; the last Max class really brought me up to speed, letting me skip the first 10 or more tutorials, most of which relate to working with either a live camera or pre-recorded videos, whereas I was actually really interested in generating my own visuals - which meant working in GL. Quickly exploring the different tutorials (I'll do them properly in time), I found tutorial number 37, "Geometry Under the Hood", and noticed that its example is a great starting point for me to explore. I started playing around with it and got to this point: https://www.instagram.com/p/Bf9icypHqAj/?taken-by=amitsegall Later I extended this simple patch a bit more, and it seems like a great starting point towards what I was looking for.
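Zsa.descriptors does this analysis inside Max; purely to illustrate the kind of per-frame features I was extracting to drive the visuals, here is a hedged sketch using librosa (not part of my patch, and the file name is a placeholder):

```python
# Illustration only: the same kind of audio descriptors Zsa.descriptors provides in Max.
import librosa

y, sr = librosa.load("input.wav", mono=True)

centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]   # "brightness" of the sound
rms = librosa.feature.rms(y=y)[0]                             # loudness envelope
flatness = librosa.feature.spectral_flatness(y=y)[0]          # noisiness vs. tonality

# Each frame's values could then be mapped to parameters (size, rotation, colour)
# of the generated GL geometry.
print(centroid.shape, rms.mean(), flatness.mean())
```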
After playing around with this patch for a while I came to several conclusions about my project:
1 - Less is more - don't go too deep into places you don't feel comfortable.
2 - I want to control it using a controller and change values in real time while it is being processed and altered by the different audio features - again, maybe fewer features is more.
3 - Everything that I can control in real time will have a probabilistic alternative, so this patch can work either as an installation or as a live performance.
So far I have made great progress - and had lots of fun getting there.
Project Update 1
As I began my research unravelling the "immersive experience", I looked through different books, online articles and texts, and different companies that claim to create these experiences. At first I was looking for different definitions of "immersive", and here are a few gems I found:
"People love the I-word," said Noah Nelson, who produces the noproscenium.com newsletter about immersive theater. Immersive culture, he says, could best be described as site specific, non-traditional or experiential art and entertainment that breaks the fourth wall or otherwise envelopes the viewer. "For me, it means a force, it's all around you but it also goes through you. It's not just a 360-degree set. It makes you part of it."
"The arts buzzword of 2016: 'immersive'", L.A. Times, Dec 22, 2016
“Immersion is the subjective impression that one is participating in a comprehensive, realistic experience. Interactive media now enable various degrees of digital immersion. The more a virtual immersive experience is based on design strategies that combine actional, symbolic, and sensory factors, the greater the participant’s suspension of disbelief that she or he is “inside” a digitally enhanced setting.”
Dede, Chris. "Immersive interfaces for engagement and learning." science 323, no. 5910 (2009): 66-69.
As I was reading more and more texts, I understood that this topic is:
A - incredibly fascinating to me, and has great research potential.
B - very diverse, ranging from theatre to VR: both are leading developers of the field.
C - lacking clear definitions and research when it comes to the link between experience and technology.
I have started exploring "Immersed in Technology", a book of articles from '94 summing up group experimentation with VR leading different immersive projects. I could not stop thinking about the fact that now, while VR and AR are "exploding" with funding and projects, no real research is being conducted - nothing similar to what was conducted almost 25 years ago. I feel that new technologies will be able to shed new light on these subjects once again. I hope that my research will be a small milestone in the direction of how to explore these new technologies and how they intersect with other immersive fields as well. I feel I am off to a great start with great findings so far, and I am very excited to keep working on this.
I will finish with another great quote from '94 that is as relevant as ever, by Douglas MacLeod: "Most worrisome of all, a new approach to cultural initiatives suggest that all explorations must have a commercial or revenue-generating potential. While the effects of these pressures remain to be seen, it is very possible that this book documents a project that could never happen again".
Sensory Interactions
Sensory interaction is a topic I have been exploring for a long time. Sensors and their relation to computational art practices are inevitable, since sensors are somewhat of an inlet from our world into the computer. In the first term of this MA, during my group research, we encountered Richard Lowenberg's work, since we were looking for significant examples that use different sensing techniques (although in the end we changed our project, we explored many things along the way). After this week's reading about the different approaches to sensory interaction, I was mainly fascinated by the 'Amphibious Architecture' project and its attempt to explore our interaction with underwater environments. Even today we have not changed our attitude much: we keep sensing what is around us, but we do not explore enough underwater. It feels like we attempt to gather and sense information from space more often than from things that are close to us, such as the deep blue sea. A while back I explored our interaction with motion sensors and what we do with them in relation to musical performance. I think that, as an artist, sensing might most of the time be the "easy" part of the job, while criticising and making people react to art is the challenging part. During my exploration I was looking into the different methods of controlling music and audio using motion and gestures, and I think it is mainly about how to map the data in interesting and engaging ways.
Sola and the walkthrough method
In search of "artistic" apps on which I could put "the walkthrough method" to the test, I was looking for apps that might criticise our daily life and provoke change or social awareness. I encountered an app called "Sola" (previously known as "Plag"). The initial idea of the app was to create a different way to share information: unlike a standard social network, information spreads to the physically closest users, and each user determines whether to share your information further, like a virus (again, to their closest users), or stop it from spreading. All information has an equal chance to be heard, and it depends on other users' behaviour; assuming you have interesting information to share, you will be able to spread it out to the entire world. There are no "friends" or "followers", and there is no need for them. I felt the initial concept sounded interesting, and I thought it could be a great app on which to test "the walkthrough method" as a tool to better understand how the app works and how it affects us.
https://sola.foundation/
conceptual framework:
From its launch in 2014, Plag has turned into "Sola" (which stands for "Social Layer"), and it seems that the original concept has shifted slightly over the years. Nowadays "Sola" is a foundation, and as described in their white paper: "Sola is the next generation decentralized social platform that incentivizes and benefits all involved parties — users, third-party developers and the core team". The entire paper outlines their principles, structure and economic system. I was easily able to find plenty of information, provided both by the "Sola" foundation and by other online resources, all detailing the goals and concepts that the company and the app strive for.
the environment of expected use:
"Sola" is available on three different platforms: iOS, Android and a web-app version. It seems that when shifting from the original idea of "Plag" to the current version, the focus shifted as well. The current agenda seems to be to encourage users to engage in the app and collect a new "virtual currency": you gain more "coins" the more you engage and post, and the better your posts do (the bigger the crowd they spread to), the more "coins" you gain. There is an obvious focus on creating quality content, and it is stated that you can later convert your "coins" to other cryptocurrencies and gain "real" money out of it. Engaging with the app is expected all the time, just as with a standard, older type of social network. The benefit of using an app like "Sola" is its direct relation to trending matters: although you can customise your news to some extent, you are constantly exposed either to the most trending news or to the news closest to you.
technical walkthrough:
Using "Sola" required a two-part verification process: I had to provide an email address and verify it, and also my personal phone number, to which a code was sent via message. The user interface is more complex than described, and although basic features like sharing "cards" or marking them as not interesting are easily done using swipe gestures, there are a lot of in-depth functions that require menu diving and extended usage of the app to feel comfortable with. There is a dashboard and more than one type of "point" to accumulate, and once in the dashboard there are several more options. You are also able to share your feed with other social networks such as Facebook and Instagram, and you can copy, paste, write new posts, share and engage with the app using standard and familiar icons. During the first steps in the app there is an overlaid help menu that suggests you explore the different features in the app and experiment as a user making your first steps.
In general, I think that approaching an app using the walkthrough method revealed plenty of insight about it. "Sola" as an app has a lot on its mind and a clear agenda, and they seem open about it; providing a 45-page white paper is not something I expect to encounter with every app I look into. Having said that, I feel the technical part of the method is the least insightful one, and understanding the first two steps is crucial to the third. All in all, I had a really fascinating experience learning about this app and the process of taking it apart.
The Dolphin Attack
In recent weeks, as part of our upcoming group project, I have been looking into the technology behind voice-controlled personal assistants. Siri, Alexa and Google's equivalent have entered our lives in the past few years, and it seems like they are here to stay. Led by technology giants such as Apple and Amazon, artificial intelligence modelled as a personal assistant has easily found its way into many homes. The technology behind such devices is a mixture of nano-technology brought over from the mobile phone industry, speech recognition algorithms and artificial intelligence. During my research I discovered that the hardware is more or less equivalent to a standard mobile phone; while each company has its own preferences as to which chipsets and boards to use, they all serve the same purpose: "talking to the cloud". By using a device you basically open a port between your local device and a server hosted by each of the leading companies; there is no actual data on the devices, and no processing is done aside from voice recognition. Like many other things, voice recognition algorithms are in an arms race too, constantly being improved and modified, their abilities enhanced and their requirements adapted to the scenery. While I was searching for the ultimate algorithm, or at least a fair comparison between the different companies, I decided to take a detour and look for people who had successfully hacked a home assistant device using the different algorithms, and that is how I discovered the "Dolphin Attack".
A university in China recently discovered* that regardless of how "smart" the AI that assists you is, they all fail a very simple security check. By using amplitude modulation and simple signal processing, they managed to "hack" devices with ultrasonic waves and control them without anyone being able to hear anything. The researchers were able to issue payments, access personal data, and also control and open cars. This raises a lot of problems, and actually allows hackers to penetrate devices using embedded online videos or any broadcast, for that matter. Reading the above, I was amazed by the ease and simplicity of the idea, and the fact that most major companies did not think about it is just astounding. That said, both Google and Apple offer in-between solutions, like training the device to respond to only one person's voice (Google's acoustic modelling algorithm allows you to train your device to differentiate one person from another). While Apple's home device has not officially launched yet, Siri, the mobile personal assistant, does suffer from these issues.
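The core trick in the paper is plain amplitude modulation of a voice command onto an ultrasonic carrier; the microphone's own non-linearity then demodulates it back into the audible band for the recognizer. The numpy sketch below only illustrates that modulation step with arbitrary parameters; it is not a working attack.

```python
# Illustration of the DolphinAttack idea: AM-modulate a "command" onto an
# ultrasonic carrier (>20 kHz) so humans cannot hear it. Parameters are arbitrary.
import numpy as np

fs = 192_000                                   # high sample rate to represent ultrasound
t = np.arange(0, 1.0, 1 / fs)

command = np.sin(2 * np.pi * 400 * t)          # stand-in for a recorded voice command
carrier = np.sin(2 * np.pi * 25_000 * t)       # 25 kHz carrier, above human hearing

depth = 0.8
transmitted = (1 + depth * command) * carrier  # classic amplitude modulation

# A microphone's non-linearity effectively squares the input, which recreates a
# baseband copy of the command that the speech recognizer can then pick up.
demodulated = transmitted ** 2
print(transmitted.shape, demodulated.mean())
```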
Although we cannot really tell what the future holds, I feel a bit pessimistic at the moment - it seems like we tend to trust and rely on these types of devices too much and too quickly, and this might be just another wake-up call.
*https://endchan.xyz/.media/50cf379143925a3926298f881d3c19ab-applicationpdf.pdf
In search of an artefact
In the past few weeks, I and a group of fellow students from school were asked to read all kinds of computational art-based research. We were asked to find theories we relate to and to design and conceptualise an artefact based on those theories. While reading, and out of a mutual interest in audio, we wanted to create an art piece that would be represented sonically. We also discovered that we all find the subject of gentrification (especially in south-east London) fascinating, and we were curious to find a way to combine all of the above into one whole theory that we would be able to put to the test and design our artefact around.
During the passing weeks we found plenty of historic/demographic/financial references that we felt were very relevant to our topic of choice, but it seemed that we were getting further away from computation. We asked our professor's advice on the matter, since we felt we were falling short, and she directed us to Rosi Braidotti's book "The Posthuman". Reading a few of Braidotti's short pieces and viewing a few talks online was absolutely fascinating, and it seemed like we were on the right track.
As the weeks went by, and after several group meetings, we decided to pivot from our original topic. I felt that Braidotti's direction, while really eye-opening, could not quite relate to what we were originally trying to do, and we thought it made more sense to concentrate on computation first and maybe find a better path to the things we found interesting. Our new path was set, and for our next meeting we decided that each of us would explore what we found interesting around the general topic of "data" and how we look at data as a new form of currency.
Currently I am exploring our new topic in two main areas. The first is location data: where do we get it from? What do we do with it? What challenges and problems arise, and so on. I read Davide Dardari's article*1 about indoor tracking and its challenges, and I think that indoor tracking might be a good subject to explore further because of its problems. I have also been exploring activism and technoscience, since I want to find a theory that relates to my subjects. As for our sonified artefact, once I find the right theory and research, I will be able to gather the data and hopefully represent it in a convincing way that reflects my findings.
1 - Dardari, Davide, Pau Closas, and Petar M. Djurić. "Indoor tracking: Theory, methods, and technologies." IEEE Transactions on Vehicular Technology 64, no. 4 (2015): 1263-1278.
Today I did some soldering... After a few mistakes and some debugging, everything finally worked!
Second part of the first project ever!