[Re]Cuts was developed specifically in January 2010 for an exhibition at IMT Gallery in London. The video is inspired by Burroughs’s experimentation with tape recordings. The exhibition runs from May 28 through July 18, 2010. I thank Mark Jackson for the invitation and the opportunity to exhibit my work.
Excerpt from the project webpage:
[Re]Cuts is a remix of image, sound, and text inspired by William Burroughs’s aesthetics of tape recording. The video is also influenced by his cut-up method as defined for writing in “The Cut-Up Method of Brion Gysin.” The video does not follow the strict cut-up rules professed by Burroughs, but rather considers his aesthetics as a point of reference to develop a nonsensical narrative.
The following is a presentation separated into two parts; it was produced for the conference Re*-Recycling_Sampling_Jamming, which took place in Berlin during February 2009.
Part One (above) introduces the three chronological stages of Remix, while part two (below) defines how the three chronological stages are linked to the concept of Authorship, as defined by Roland Barthes and Michel Foucault. Also see my previous entry “The Author Function in Remix” which is a written excerpt of the theory proposed in part two.
Below is the abstract that summarizes the content of the two videos. Total running time is around fifteen minutes.
———–
Text originally published on Re*- in February 2009:
SATURDAY_28.02.2009_SECTION IV_15:00-20:00
12_15:00 Remix[ing]. The Three Chronological Stages of Sampling
Eduardo Navas, artist and media researcher, University of California, San Diego (USA)
Sampling is the key element that makes the act of remixing possible. In order for Remix to take effect, an originating source must be sampled in part or as a whole. Sampling is often associated with music; however, this text will show that sampling has roots in mechanical reproduction, initially explored in visual culture with photography. A theory of sampling will be presented which consists of three stages: The first took place in the nineteenth century with the development of photography and film, along with sound recording. In this first stage, the world sampled itself. The second stage took place at the beginning of the twentieth century, once mechanical recording became conventionalized, and early forms of cutting and pasting were explored. This is the time of collage and photo-montage. And the third stage is found in new media, in which the two previous stages are combined at a meta-level, giving users the option to cut or copy (currently the most popular form of sampling) based on aesthetics, rather than limitations of media. This is not to say that new media does not have limitations, but exactly what these limitations may be is what will be entertained at greater length.

The analysis of the three stages of sampling that inform Remix as discourse is framed by critical theory. A particular focus is placed on how the role of the author in contemporary media practice is being redefined in content production due to the tendency to share and collaborate. The theories on authorship by Roland Barthes and Michel Foucault are entertained in direct relation to the complexities that sampling has brought forth since it became ubiquitous in popular activities of global media, such as social networking and blogging.
Note: Press release about an upcoming exhibition in which I participate taking place in London at IMT Gallery during May through June of 2010.
———-
Dead Fingers Talk is an ambitious forthcoming exhibition presenting two unreleased tape experiments by William Burroughs from the mid-1960s alongside responses by 23 artists, musicians, writers, composers and curators.
Few writers have exerted as great an influence over such a diverse range of art forms as William Burroughs. Burroughs, author of Naked Lunch, The Soft Machine and Junky, continues to be regularly referenced in music, visual art, sound art, film, web-based practice and literature. One typically overlooked, yet critically important, manifestation of his radical ideas about manipulation, technology and society is found in his extensive experiments with tape recorders in the 1960s and ’70s. Dead Fingers Talk: The Tape Experiments of William S. Burroughs is the first exhibition to truly demonstrate how widely Burroughs’ theories of sound have resonated across the arts.
listen to your present time tapes and you will begin to see who you are and what you are doing here mix yesterday in with today and hear tomorrow your future rising out of old recordings
everybody splice himself in with everybody else
The exhibition includes work by Joe Ambrose, Steve Aylett, Alex Baker & Kit Poulson, Lawrence English, The Human Separation, Riccardo Iacono, Anthony Joseph, Cathy Lane, Eduardo Navas, Negativland, o.blaat, Aki Onda, Jörg Piringer, Plastique Fantastique, Simon Ruben White, Giorgio Sadotti, Scanner, Terre Thaemlitz, Thomson & Craighead, Laureana Toledo and Ultra-red, with performances by Ascsoms and Solina Hi-Fi.
Inspired by the expelled Surrealist painter Brion Gysin, and yet never meant as art but as a pseudo-scientific investigation of sounds and our relationship to technology and material, the experiments provide early examples of interactions which are essential listening for artists working in the digital age.
In the case of the work in the exhibition the contributors were asked to provide a “recording” in response to Burroughs’ tape experiments. The works, which vary significantly in media and focus, demonstrate the diversity of attitudes to such a groundbreaking period of investigation.
Dead Fingers Talk: The Tape Experiments of William S. Burroughs is curated by Mark Jackson. The project is supported by the London College of Communication, CRiSAP and ADi Audiovisual and has been made possible by the kind assistance of the William Burroughs Trust, Riflemaker and the British Library.
Jeremy Douglass (left) and Lev Manovich (far right) demonstrate how to analyze data on the Hyper Wall at Calit2.
The Cultural Analytics seminar took place at Calit2 on December 16 and 17 of 2009. The event brought together researchers and students from Bergen University and University of California San Diego. The two day event consisted of research presentations and demos of software tools.
Part One of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Introduction to principles of Cultural Analytics.
I will not spend much time in this entry defining Cultural Analytics. This subject has been well covered by excellent blogs such as Open Reflections. For this reason, at the end of this entry I include a number of links to resources that focus on Cultural Analytics. Instead, I will briefly share what I believe Cultural Analytics offers to researchers in the humanities.
Part Two of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Analysis of Vertov’s motion in scenes from Man with a Movie Camera.
This emerging field can be defined as a hybrid practice that utilizes tools of quantitative analysis, often found in the hard sciences, to enhance qualitative analysis in the humanities. The official definition of the term follows:
Cultural analytics refers to a range of quantitative and analytical methodologies drawn from the natural and social sciences for the study of aesthetics, cultural artifacts and cultural change. The methods include data visualization techniques, the statistical analysis of large data sets, the use of image processing software to extract data from still and moving video, and so forth. Despite its use of empirical methodologies, the goals of cultural analytics generally align with those of the humanities.
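To make the quantitative side of this definition concrete, the following is a minimal sketch (in plain Python, using only the standard library) of the kind of image-data extraction the definition alludes to: reducing each frame of a film to a handful of numbers that can then be plotted or compared across an entire work. The frames below are tiny synthetic grayscale arrays standing in for real decoded video, and the function names are my own illustrative choices, not part of any Cultural Analytics tool.

```python
from statistics import mean, stdev

def frame_features(frame):
    """Summarize a 2-D grayscale frame as two numbers:
    average brightness and contrast (standard deviation of pixel values)."""
    pixels = [p for row in frame for p in row]
    return {"mean_brightness": mean(pixels), "contrast": stdev(pixels)}

# Two toy 2x2 "frames": one dark and flat, one bright and high-contrast.
dark_frame = [[10, 12], [11, 13]]
bright_frame = [[200, 50], [240, 90]]

for name, frame in [("dark", dark_frame), ("bright", bright_frame)]:
    f = frame_features(frame)
    print(name, round(f["mean_brightness"], 1), round(f["contrast"], 1))
```

Plotting such per-frame values over a film’s running time yields exactly the kind of temporal map shown in the Vertov demonstration, where patterns of motion and brightness become visible at a glance.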
One thing that separates the humanities from the hard sciences is the emphasis on qualitative over quantitative analysis. In very general terms, qualitative analysis is often used to evaluate the how and why of particular case studies, while quantitative analysis focuses on patterns and trends that may not always be concerned with social or political implications.
Part Three of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Jeremy Douglass analyzes comic books.
What Cultural Analytics is doing, in my view, is bringing together qualitative and quantitative analysis in the interests of the humanities. In a way, Cultural Analytics could be seen as a bridge between specialized fields that in the past have not always communicated well.
Consequently, when new ground is being explored, questions of purpose are bound to emerge, which is exactly what happened during seminar conversations. As the videos that accompany this brief entry will demonstrate, the real challenge is for researchers in the humanities to engage not only with Cultural Analytics tools and envision how such tools can enhance their practice, but to actually embrace new philosophical approaches that blur the lines between the hard sciences and the humanities.
Part Four of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Cicero Da Silva explains his collaborative project, Macro.
To be specific about the possibilities that Cultural Analytics offers the humanities, I will cite two demonstrations by Lev Manovich and Jeremy Douglass.
Lev Manovich at one point presented Hamlet by William Shakespeare in its entirety on Calit2’s Hyper Wall, which consists of several screens that enable users to navigate data at a very high resolution.
Seeing the entire text at once, one is likely to realize that this methodology is more like mapping. To this effect, soon after, we were shown a version of the text in which Manovich had isolated the repetition of certain words throughout the literary work.
This approach could be used by a literature scholar to study certain linguistic strategies, such as sentence structure, employed by an author. Let’s take this a step further and say that it has been agreed that a contemporary author is influenced by a canonical writer. How this supposed influence takes effect can be evaluated by isolating parts of both authors’ literary texts for direct comparison and studying patterns in their sentences. One could then evaluate whether the supposed influence is formal, conceptual or both: perhaps the contemporary author makes ideological references that are clearly linked to the canonical author but are not necessarily influenced at a formal level; or it could be the other way around, or both. In this case, quantitative and qualitative analysis are combined to evaluate a case study. In other words, pattern comparison is used to understand the similarities and differences between two or more works of literature.
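As a hedged illustration of the pattern comparison described above, the quantitative half of such a study can be as simple as building word-frequency profiles for two texts and measuring how much their vocabularies overlap. The two snippets below are invented placeholders, not actual literary corpora, and the Jaccard measure is one choice among many:

```python
import re
from collections import Counter

def word_counts(text):
    """Lowercase word-frequency profile of a text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def vocabulary_overlap(a, b):
    """Jaccard similarity of two vocabularies: 0 = disjoint, 1 = identical."""
    va, vb = set(word_counts(a)), set(word_counts(b))
    return len(va & vb) / len(va | vb)

# Invented placeholder passages standing in for a canonical author
# and a contemporary author.
canonical = "to be or not to be that is the question"
contemporary = "the question is not whether to be but how to be"

print(round(vocabulary_overlap(canonical, contemporary), 2))
```

A high overlap would merely flag passages worth reading closely; the qualitative judgment of whether the influence is formal, conceptual or both remains with the scholar.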
To this effect, Jeremy Douglass’s presentation of a comic book is important. He explained how by seeing an entire publication of a comic book story one can study how certain patterns in the narrative come to define the aesthetics for the reader.
While the reader may experience the story in time, by actually reading it, the visualization of the comic book in grid-like fashion—as a structural map—allows the researcher to apply the analysis of patterns and trends, more common in the study of network flows, to an actual narrative. Again, in this case we find quantitative and qualitative analysis complementing each other.
As noted at the end of the article “Culture is Data” (also provided below) from Open Reflections, it appears that Manovich is at times understood to argue that one should privilege quantitative over qualitative analysis. This proposition implies an either/or mentality among certain researchers that needs to be reevaluated. Janneke Adema explains Manovich’s answer better than I ever could:
But, on the other hand, won’t we loose a sense of meaning if we analyze culture like a thing? Manovich argues that this is of course a complementary method, we should not throw away our other ways of establishing meaning. It is a way of expanding them. And it is also an important expansion, for how is one going to ask about the meaning of large datasets? We need to combine the traditionally [sic] humanities approach of interpretation with digital techniques to find out more. And again, meaning is not the only thing to look at. It is also about creating an experience. Patterns are the new real of our society.
The most important thing to understand when evaluating the videos available with this entry is that one need not have a Hyper Wall to do research with Cultural Analytics methodologies; many of the tools can run on a personal computer. It is more a matter of adopting an attitude and a willingness to do research by combining quantitative and qualitative analysis. At the moment I am evaluating the implementation of Cultural Analytics in my research on Remix.
The following text was written for the Interactiva 09 Biennale, which takes place during May and June of 2009. Other texts written for the biennale can be found at the Interactiva site.
NOTE: I have written a text in which I discuss Twitter in social activism, something which is not included in this text. Please see “After Iran’s Twitter Revolution: Egypt.”
In March of 2005 I wrote “The Blogger as Producer.”[1] The essay proposed blogging as a potentially critical platform for the online writer. It was written specifically with a focus on the well-known text “The Author as Producer” by Walter Benjamin, who viewed the critical writer active during the 1920s and ’30s as holding a promising, constructive position in culture.[2]
In 2005 blogging was increasing in popularity, and in my view, some of the elements entertained by Benjamin appeared to resonate in online culture. During the first half of the twentieth century, Benjamin considered the newspaper an important cultural development that affected literature and writing because newspaper readers attained certain agency as consumers of an increasingly popular medium. During this time period, the evaluation of letters to editors was important for newspapers to develop a consistent audience. In 2005, it was the blogosphere that had the media’s attention. In this time period, people who wrote their opinions on blogs could be evaluated with unprecedented efficiency. [3]
Just got notice from Zemos 98 of their new book, Codigo Fuente: La Remezcla, which brings together a range of articles on Remix in culture and media. The book is in Spanish. I look forward to reading it and to highlighting some of the essays. Kudos to Zemos 98.
Adriene Jenik combines literature, cinema and performance to create works under the umbrella of Distributed Social Cinema. For Jenik, this term means that the language of cinema has been moving outside of conventional movie screens onto different media devices, which today include the portable computer, GPS locators, and cellphones. Earlier in her career, Jenik worked with video and performance, and eventually she produced CD-ROMs, such as “Mauve Desert: A CD-ROM Translation” (1992-1997). Jenik’s practice took a particular shift towards network culture when the Internet became a space in which she could bring together her interests in film, literature, and performance. “Desktop Theater: Internet Street Theater” (1997-2002) was a virtual performance which took place in an online space; it was based on Samuel Beckett’s play Waiting for Godot. In line with these works, SPECFLIC 2.6 is the result of Jenik’s interest in the relation of networked culture to film, literature and performance. The installation, then, is also another shift in Jenik’s engagement with the expanded field of storytelling. In the following interview, Jenik shares the influences and aesthetic concerns that inform SPECFLIC 2.6.
[Eduardo Navas]: You describe your ongoing SPECFLIC project, currently in version 2.6, as “Distributed Social Cinema.” Given that your installation takes on so many aspects of contemporary media, could you elaborate on how you arrived at the parameters at play around this concept?
[Adriene Jenik]: SPECFLIC was initially inspired by the recognition that cinema was moving beyond a single fixed image at an expected scale to one of multiple co-existent screens with extreme shifts in scale. I was seeing video on miniature screens, as well as gigantic mega-screens, and seeing these screens move about in space and wondering what types of stories could take advantage of these formal and technological shifts. I’ve long been involved in thinking through layered story structures and at the beginning of SPECFLIC, I could “see” a diagram of the project imprinted on the inside of my eyelids. That original retinal image burn has since been honed and shaped in relation to the needs of the story and the responses of the audience and performers.
The SPECFLIC 2.6 installation takes excerpts from material that was created for SPECFLIC 2.0, and follows on the heels of SPECFLIC 2.5, which was commissioned by Betti-Sue Hertz and presented at the San Diego Museum of Art in Spring of 2008. For SPECFLIC 2.5, I stripped away all of the live, interactive aspects of the piece, and instead, emphasized aspects of the story that might have been more in the background of the live event. This type of “versioning” is something that is in evidence in software creation, but has also become a method for developing an art practice that can expand and embrace new research and technologies. Distributed Social Cinema is a form that takes into account the importance (for me) of the public audience for a film. As cinema-going practice becomes “home entertainment,” I’m interested in what is at stake in cinema as a public meeting space. At the same time, I’m playing with the intimacy of the very small screen, the ways in which having part of a story delivered into someone’s pocket adds a layer of meaning in its form of delivery. The SPECFLIC 2.5 installation was an attempt to consolidate some of these aspects of distributed attention and “voice.”
Granted the opportunity for networked interaction within the gallery@calit2, for SPECFLIC 2.6 I have rethought the installation to develop in concert with audience contributions. So the project is very much evolving in response to what I learn from each previous iteration as well as the opportunities afforded by the space, encounter with the audience, and technological framework.
The installations “SPECFLIC 2.6” by Adriene Jenik, and “Particles of Interest” by *particle group*, on view at the gallery@calit2 from August 6 to October 2, 2008, ask the viewer to consider a not-so-distant future in which we will be intimately connected in networks not only through our computers, but also via nanoparticles in and on our very own bodies. Both projects respond to the pervasive mediation of information that is redefining human understanding of the self, as well as the concept of history, knowledge, and the politics of culture.
Information access to networked archives of books and other forms of publication previously only available in print is becoming the main form of research as well as entertainment. Access to music and video via one’s computer and phone as well as other hybrid devices has come to redefine human experience of media. From the iPhone to the Kindle, visual interfaces are making information access not only efficient in terms of time and money, but also in terms of spectacle. Accessibility usually consists of a combination of animation, video, image and text, informed in large part by the language of film and the literary novel.
After a long hiatus, online bookseller Amazon is back trying to encourage us to read in a new way. Its Web site now features this description of its Kindle reading device: “Availability: In Stock. Ships from and sold by Amazon.com. Gift-wrap available.” This good news for consumers comes after the first batch of the devices sold out in just six hours late last year.
This seems like a fitting time to ask: If the Internet is the most powerful communications advance ever – and it is – then how do this medium and its new devices affect how and what we read?
Aristotle lived during the era when the written word displaced the oral tradition, becoming the first to explain that how we communicate alters what we communicate. That’s for sure. It’s still early in the process of a digital rhetoric replacing the more traditionally written word. It’s already an open question whether constant email and multitasking leaves us overloaded humans with the capability to handle longer-form writing.
This text is a theoretical excerpt from one of my chapters on the role of Remix in Art. It outlines how the theories of Roland Barthes and Michel Foucault on authorship are relevant to New Media, particularly their link to the interrelation of the user and the maker/developer in terms of sampling (a vital element of Remix as discourse). While the text does place a certain emphasis on art, the propositions extend to various areas of culture. It was previously presented as a lecture for ICAM at UCSD on April 6, 2005: http://navasse.net/icam/icam110_spring05_schedule.html
Remix is Meta
The act of remixing (which I refer to in terms of discourse as Remix) developed as a meta-action. Its specificity in the second half of the twentieth century can best be understood when realizing that the strategies of artists throughout the first half of the century had to be assimilated in order to be recycled as part of the postmodern condition in the second half—a time when remix proper developed in music. The acts of collage, photomontage and the eventual development of mixed media had to be assimilated, not only by the visual arts but also by mainstream media, for the concept of remixing to become viable in culture. Remix’s dependency on sampling questioned the role of the individual as genius and sole creator, who would “express himself.” Sampling, then, allows for the death of the author; therefore, it is no coincidence that around the time when remixes began to be produced, during the sixties and seventies, authorship—as discourse—was entertained by Roland Barthes and Michel Foucault, respectively. For them, “writing,” in the sense in which Rousseau would promote the expressive power of the individual, was no longer possible. Sampling allows the postmodern condition (which some consider to be part of modernism) to come through. To this aspect of sampling we will turn in the next section. What follows is an outline of Barthes’s and Foucault’s respective theories, which were conversant with the contemporary art practice of their period; as will become evident throughout my argument, their ideas are quite relevant to media culture.
The Role of Author and the Viewer
In his essay “The Death of the Author,” Roland Barthes questions the concept of authorship. For him it is the text that speaks to the reader. He writes, “A text is made of multiple writings, drawn from many cultures and entering into relations of dialogue, parody and contestation, but there is one place where this multiplicity is focused and that place is the reader, not, as was hitherto said, the author.”[1] With this statement he summarizes his argument that we should treat the text not as something coming from a specific person, but as something that takes life according to how the reader interprets the writing as a collage of diverse sources. For Barthes, it is the reader who holds the real potential to make discourse productive. He looks at specific authors, like Proust, Mallarmé and Valéry, as authors who “restore the place of the reader.”[2] The author ceases to matter for Barthes because only in this way can the text be set free, for to have “an Author is to impose a limit on that text, to furnish it with a final signified, to close the writing.”[3] Barthes wants the reader to overthrow the myth of the author as “genius” as it has been promoted since the Renaissance. For Barthes, the text’s unity lies not in its origin but in its destination. And only the reader can define that. It is the reader who completes the text.
Michel Foucault also questions the role of the author in contemporary culture, but unlike Barthes, who only pointed out the necessity of shifting our cultural attention from the author to the reader, Foucault concludes that even though the death of the author as a great individual has been claimed, the notions supporting such a claim have actually only renegotiated the privilege of authorship.[4] To prove this, Foucault examines two notions supporting contemporary discourse. The first is the concept of the work, which includes everything an author has written, and the second is the notion of writing, which during Foucault’s time, and even in ours, purports to function autonomously. Foucault claims that this is not so and sets out to prove his point by defining his own term, the “author function.” Foucault considers the author function to provide a way of controlling discourse; this is actually not too different from how Barthes considers the idea of authorship a way of limiting the possibilities of the text. The author function is a classificatory function.[5] It is not universal, although such discourse could be presented as such. The author function is not created by a single individual; rather, it is a complex web of power shifts that leads up to the construct of the author.[6] The author function becomes clear when Foucault explains it in relation to Marx and Freud, two “authors” who created discourses that follow their names: Marxism and Freudianism (or psychoanalysis). Foucault reasons that these two authors developed concepts that were reevaluated by later generations.
Such discourses can be changed whenever one refers back to the origin of the argument in order to question it, which is not necessarily true in the natural sciences. As he explains, “A study of Galileo’s works could alter our knowledge of the history, but not the science, of mechanics; whereas a re-examination of the books of Freud and Marx can transform our understanding of psychoanalysis or Marxism.”[7]
In other words, discourse as developed by an author can be changed. While Foucault went further than Barthes and explained the power dynamics supporting the author, he also agrees with Barthes that one day the author, or for him the “author function,” will disappear: “We can easily imagine a culture where discourse would circulate without any need for an author. Discourses, whatever their status, form, or value, and regardless of our manner of handling them, would unfold in a pervasive anonymity. No longer the tiresome repetitions.”[8] One can sense in Foucault’s final statement hope for a time when a more democratic model would be at play; this has been a pronounced interest of artists and media researchers, and has provided fuel for the historical and neo-avant-garde to stay active since the beginnings of modernism. Barthes’s and Foucault’s reflections on authorship were already being put into action in their own time by Conceptual and Minimal art practices, which relied largely on appropriation and allegory to derive critical commentary. The notion of authorship which they examined can now be assessed, especially in relation to new media practice, which is largely dependent on the “reader,” or user, as participants are commonly called. This particular dynamic is actually an extension of sampling, which started during the early days of modernism with photography and music.
Sampling allows the death of the author and the author function to take effect once we enter late capitalism, because “writing” is no longer seen as something truly original, but as a complex act of resampling and reinterpreting previously introduced material, which in new media is not innovative but expected. Acts of appropriation are also acts of sampling: acts of citing pre-existing text or cultural products. (Let us extend the term “text” here to the visual arts and media at large.) This is the reason why citations are so necessary in academic writing, and certainly something that is closely monitored in other areas of culture, like the music industry, where sampling is carefully controlled by way of copyright law. So, writing in the sense it had before the Enlightenment no longer takes place. Instead, the careful choices of preexisting material made by authors in all fields are revered. Our most obvious example is the work of Duchamp (which I’ve cited in my definition of Remix[9]), who understood this so well that he decided simply to choose readymades as opposed to trying to create art from scratch; he understood the new level of writing, or creating, that was at hand in modernism, which entered a stage of meta—of constant reference, relying on the cultural cachet of pre-existing material.
So writing’s and art’s true power is selectivity, which comes forth today in sampling, a privileged symptom of the postmodern. The selectivity implicit in the death of the author and the author function, as defined above, is what allows the notion of interactivity to be easily assimilated through sampling. For example, once cut/copy and paste is assimilated not only as a feature for users to write their own texts but also to reblog pre-existing material, the user becomes more of an editor (a remixer) of material, reblogging under a new context, as a new composition that allegorizes its sources. This possibility of selecting and editing to develop a specific theme according to personal interests plays a key role in how the art viewer, or new media user, will relate to the person who produced the object of interaction. This shift, while redefining the concepts of creativity and originality, also poses new challenges for the media producer.
[1] Roland Barthes, “The Death of the Author,” Image Music Text (New York: Hill and Wang, 1977), 148.
[2] Ibid.
[3] Ibid.
[4] Michel Foucault, “What Is an Author?,” The History of Art History: A Critical Anthology, ed. Donald Preziosi (New York and Oxford: Oxford University Press, 1998), 299-314.
[5] Ibid, 305-307.
[6] Ibid, 308-09.
[7] Ibid, 312.
[8] Ibid, 314.
[9] See: “Remix Defined,” Remix Theory < https://remixtheory.net/?page_id=3>.