
Archive of the category 'Literature'

Notes on Cultural Analytics Seminar, December 16-17, 2009, Calit2, San Diego, by Eduardo Navas

Jeremy Douglass (left) and Lev Manovich (far right) demonstrate how to analyze data on the Hyper Wall at Calit2.

The Cultural Analytics seminar took place at Calit2 on December 16 and 17, 2009.  The event brought together researchers and students from the University of Bergen and the University of California, San Diego.  The two-day event consisted of research presentations and demonstrations of software tools.

Part One of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009.  Introduction to principles of Cultural Analytics.

I will not spend much time in this entry defining Cultural Analytics.  This subject has been well covered by excellent blogs such as Open Reflections.  For this reason, at the end of this entry I include a number of links to resources that focus on Cultural Analytics.  Instead, I will briefly share what I believe Cultural Analytics offers to researchers in the humanities.

Part Two of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009.  Analysis of Vertov’s motion in scenes from Man with a Movie Camera.

This emerging field can be defined as a hybrid practice that uses tools of quantitative analysis, often found in the hard sciences, to enhance qualitative analysis in the humanities.  The official definition of the term follows:

Cultural analytics refers to a range of quantitative and analytical methodologies drawn from the natural and social sciences for the study of aesthetics, cultural artifacts and cultural change. The methods include data visualization techniques, the statistical analysis of large data sets, the use of image processing software to extract data from still and moving video, and so forth. Despite its use of empirical methodologies, the goals of cultural analytics generally align with those of the humanities.

One thing that separates the humanities from the hard sciences is the emphasis on qualitative over quantitative analysis.  In very general terms, qualitative analysis is often used to evaluate the how and why of particular case studies, while quantitative analysis focuses on patterns and trends that may not always be concerned with social or political implications.

Part Three of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Jeremy Douglass analyzes comic books.

What Cultural Analytics is doing, in my view, is bringing together qualitative and quantitative analysis in the interest of the humanities.  In a way, Cultural Analytics could be seen as a bridge between specialized fields that in the past have not always communicated well.

Consequently, when new ground is being explored, questions of purpose are bound to emerge, which is exactly what happened during seminar conversations.  As the videos that accompany this brief entry demonstrate, the real challenge is for researchers in the humanities not only to engage with Cultural Analytics tools and envision how such tools can enhance their practice, but to actually embrace new philosophical approaches that blur the lines between the hard sciences and the humanities.

Part Four of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Cicero Da Silva explains his collaborative project, Macro.

To be specific on the possibilities that Cultural Analytics offers to the humanities, I will cite two demonstrations by Lev Manovich and Jeremy Douglass.

Lev Manovich at one point presented Hamlet by William Shakespeare in its entirety on Calit2’s Hyper Wall, which consists of several screens that enable users to navigate data at a very high resolution.

Seeing the entire text at once, one is likely to realize that this methodology is more like mapping.  To this effect, soon after, we were shown a version of the text in which Manovich had isolated the repetition of certain words throughout the literary work.
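To give a sense of the kind of word-level mapping described above (this is my own minimal sketch, not the actual tooling demonstrated on the Hyper Wall), a few lines of Python can count a text's most frequent words and record where a chosen word repeats, which is the raw material for such a visualization:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase a text and split it into words, dropping punctuation."""
    return re.findall(r"[a-z']+", text.lower())

def word_counts(text, top_n=5):
    """Most frequent words in a text: the basis for visualizing repetition."""
    return Counter(tokenize(text)).most_common(top_n)

def word_positions(text, target):
    """Word indices where `target` occurs: a simple 'map' of its distribution."""
    return [i for i, w in enumerate(tokenize(text)) if w == target.lower()]

sample = "To be, or not to be, that is the question."
print(word_counts(sample, 2))        # [('to', 2), ('be', 2)]
print(word_positions(sample, "be"))  # [1, 5]
```

Plotted along the length of a full text such as Hamlet, the position lists alone already show the distribution patterns Manovich isolated on screen.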

This approach could be used by a literature scholar to study an author’s linguistic strategies, such as sentence structure.  Let’s take this a step further and say that it has been agreed that a contemporary author is influenced by a canonical writer.  How this supposed influence takes effect can be evaluated by isolating parts of both authors’ texts and comparing their sentence patterns directly. One could then evaluate whether the supposed influence is formal, conceptual, or both: perhaps the contemporary author makes ideological references that are clearly linked to the canonical author, but which are not necessarily influenced at a formal level; or it could be the other way around, or both.  In this case, quantitative and qualitative analysis are combined to evaluate a case study.  In other words, pattern comparison is used to understand the similarities and differences between two or more works of literature.
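One crude way to quantify such a comparison (again, a sketch of my own under simplifying assumptions, not a method presented at the seminar) is to reduce each author's text to a word-frequency profile and measure the overlap between profiles, for instance with cosine similarity:

```python
import math
import re
from collections import Counter

def profile(text):
    """Word-frequency profile of a text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity of two frequency profiles:
    0.0 means no shared vocabulary, 1.0 an identical distribution."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical fragments standing in for a canonical and a contemporary author.
canonical = "The sea was calm and the night was dark."
contemporary = "The night was long and the sea was loud."
print(round(cosine_similarity(profile(canonical), profile(contemporary)), 2))
```

The score is only the quantitative half of the work; deciding whether a high overlap reflects formal influence, shared subject matter, or coincidence remains a qualitative judgment.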

To this effect, Jeremy Douglass’s presentation of a comic book is important.  He explained how seeing an entire comic book publication at once allows one to study how certain patterns in the narrative come to define its aesthetics for the reader.

While the reader experiences the story in time, by actually reading it, the visualization of the comic book in grid-like fashion, as a structural map, allows the researcher to apply the analysis of patterns and trends more common to network flows to an actual narrative.  Again, in this case we find quantitative and qualitative analysis complementing each other.

As noted at the end of the article “Culture is Data” (also linked below) from Open Reflections, Manovich is at times understood to argue that one should privilege quantitative over qualitative analysis.  This proposition implies an either/or mentality among certain researchers that needs to be reevaluated.  Janneke Adema explains his answer better than I ever could:

But, on the other hand, won’t we loose a sense of meaning if we analyze culture like a thing? Manovich argues that this is of course a complementary method, we should not throw away our other ways of establishing meaning. It is a way of expanding them. And it is also an important expansion, for how is one going to ask about the meaning of large datasets? We need to combine the traditionally [sic] humanities approach of interpretation with digital techniques to find out more. And again, meaning is not the only thing to look at. It is also about creating an experience. Patterns are the new real of our society.

The most important thing to understand when evaluating the videos available with this entry is that one need not have a Hyper Wall to do research with Cultural Analytics methodologies; many of the tools can run on a personal computer.  It’s more about adopting an attitude and a willingness to do research by combining quantitative and qualitative analysis.  At the moment I am evaluating the implementation of Cultural Analytics in my research on Remix.

References worth perusing:

Cultural Analytics Seminar Schedule: http://lab.softwarestudies.com/2009/11/cultural-analytics-seminar-software.html

Software Studies, Website: http://lab.softwarestudies.com/2008/09/cultural-analytics.html

“Culture is Data,” article: http://openreflections.wordpress.com/2009/05/23/culture-is-data/

Culture Vis, Website: http://culturevis.com/cultural_analytics.html

“Cultural Analytics,” Wikipedia entry: http://en.wikipedia.org/wiki/Cultural_analytics

“The Next Big Thing in Humanities, Arts and Social Science Computing: Cultural Analytics,” article: http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html

“Lev Manovich: Studying Culture With Search Algorithms,” article: http://networkcultures.org/wpmu/query/tag/cultural-analytics/

“Cultural Analytics: a new field that combines arts, media and IT,” article: http://knowledge.smu.edu.sg/article.cfm?articleid=1201

“After the Blogger as Producer” by Eduardo Navas

Image source: The New Mexico Independent

Written for Interactiva Biennale 2009

The following text was written for the Interactiva 09 Biennale, which takes place during May and June of 2009.  Other texts written for the biennale can be found at the Interactiva site.

NOTE: I have written a text in which I discuss the role of Twitter in social activism, something not included in this text. Please see “After Iran’s Twitter Revolution: Egypt.”

In March of 2005 I wrote “The Blogger as Producer.”[1]  The essay proposed blogging as a potentially critical platform for the online writer.  It was written with a specific focus on the well-known text “The Author as Producer” by Walter Benjamin, who saw the critical writer of the 1920s and ’30s as holding a promising, constructive position in culture. [2]

In 2005 blogging was increasing in popularity, and in my view, some of the elements entertained by Benjamin appeared to resonate in online culture.  During the first half of the twentieth century, Benjamin considered the newspaper an important cultural development that affected literature and writing, because newspaper readers attained a certain agency as consumers of an increasingly popular medium.  During this period, evaluating letters to the editor was important for newspapers in developing a consistent audience.  In 2005, it was the blogosphere that had the media’s attention.  In this period, people who wrote their opinions on blogs could be evaluated with unprecedented efficiency. [3]

Código Fuente, Edited Book on Remix and Culture

Just got notice from Zemos 98 of their new book, Código Fuente: La Remezcla, which brings together a range of articles on Remix in culture and media.  The book is in Spanish.  I look forward to reading it and to highlighting some of the essays.  Kudos to Zemos 98.

SPECFLIC 2.6: An Interview with Adriene Jenik, by Eduardo Navas

Adriene Jenik lecturing at Calit2

Images and text source: gallery@calit2

Note: The following is an interview published for the exhibition SPECFLIC 2.6 and Particles of Interest: Installations by Adriene Jenik and *particle group* on view from August 6 to October 3, 2008 at gallery@calit2. In this Interview Jenik shares the creative process behind her ongoing multi-faceted installation SPECFLIC, which points to a future where books have become rare objects.

Adriene Jenik combines literature, cinema and performance to create works under the umbrella of Distributed Social Cinema. For Jenik, this term means that the language of cinema has been moving outside of conventional movie screens onto different media devices, which today include the portable computer, GPS locators, and cellphones. Earlier in her career, Jenik worked with video and performance, and eventually she produced CD-ROMs, such as “Mauve Desert: A CD-ROM Translation” (1992-1997). Jenik’s practice took a particular shift toward network culture when the Internet became a space in which she could bring together her interests in film, literature, and performance. “Desktop Theater: Internet Street Theater” (1997-2002) was a virtual performance that took place in an online space, based on Samuel Beckett’s play Waiting for Godot. In line with these works, SPECFLIC 2.6 is the result of Jenik’s interest in the relation of networked culture to film, literature and performance. The installation, then, is also another shift in Jenik’s interest in the expanded field of storytelling. In the following interview, Jenik shares the influences and aesthetic concerns that inform SPECFLIC 2.6.

[Eduardo Navas]: You describe your ongoing SPECFLIC project, currently in version 2.6, as “Distributed Social Cinema.” Given that your installation takes on so many aspects of contemporary media, could you elaborate on how you arrived at the parameters at play around this concept?

[Adriene Jenik]: SPECFLIC was initially inspired by the recognition that cinema was moving beyond a single fixed image at an expected scale to one of multiple co-existent screens with extreme shifts in scale. I was seeing video on miniature screens, as well as gigantic mega-screens, and seeing these screens move about in space and wondering what types of stories could take advantage of these formal and technological shifts. I’ve long been involved in thinking through layered story structures and at the beginning of SPECFLIC, I could “see” a diagram of the project imprinted on the inside of my eyelids. That original retinal image burn has since been honed and shaped in relation to the needs of the story and the responses of the audience and performers.

The SPECFLIC 2.6 installation takes excerpts from material that was created for SPECFLIC 2.0, and follows on the heels of SPECFLIC 2.5, which was commissioned by Betti-Sue Hertz and presented at the San Diego Museum of Art in Spring of 2008. For SPECFLIC 2.5, I stripped away all of the live, interactive aspects of the piece, and instead, emphasized aspects of the story that might have been more in the background of the live event. This type of “versioning” is something that is in evidence in software creation, but has also become a method for developing an art practice that can expand and embrace new research and technologies. Distributed Social Cinema is a form that takes into account the importance (for me) of the public audience for a film. As cinema-going practice becomes “home entertainment,” I’m interested in what is at stake in cinema as a public meeting space. At the same time, I’m playing with the intimacy of the very small screen, the ways in which having part of a story delivered into someone’s pocket adds a layer of meaning in its form of delivery. The SPECFLIC 2.5 installation was an attempt to consolidate some of these aspects of distributed attention and “voice.”

Granted the opportunity for networked interaction within the gallery@calit2, for SPECFLIC 2.6 I have rethought the installation to develop in concert with audience contributions. So the project is very much evolving in response to what I learn from each previous iteration as well as the opportunities afforded by the space, encounter with the audience, and technological framework.


“On Distributed Social Cinema and the Nano Market”, by Eduardo Navas

Adriene Jenik installation at the SDMA

Images and text source: gallery@calit2

Note: This text was written for the exhibition SPECFLIC 2.6 and Particles of Interest: Installations by Adriene Jenik and *particle group*, at gallery@calit2, from August 6 to October 3, 2008. The text outlines how dematerialization is at play ideologically and materially in contemporary life, and how it might be at play in the not so far future.

The installations “SPECFLIC 2.6” by Adriene Jenik, and “Particles of Interest” by *particle group*, on view at the gallery@calit2 from August 6 to October 3, 2008, ask the viewer to consider a not-so-distant future in which we will be intimately connected in networks not only through our computers, but also via nanoparticles in and on our very own bodies. Both projects respond to the pervasive mediation of information that is redefining human understanding of the self, as well as the concept of history, knowledge, and the politics of culture.

Information access to networked archives of books and other forms of publication previously only available in print is becoming the main form of research as well as entertainment. Access to music and video via one’s computer and phone as well as other hybrid devices has come to redefine human experience of media. From the iPhone to the Kindle, visual interfaces are making information access not only efficient in terms of time and money, but also in terms of spectacle. Accessibility usually consists of a combination of animation, video, image and text, informed in large part by the language of film and the literary novel.


The Digital Future of Books

Image and text source: Wall Street Journal

Published on May 19, 2008; Page A13

Note: Also see the comment written against this article titled, Books Have a Bright Future, Not Just a Digital One

After a long hiatus, online bookseller Amazon is back trying to encourage us to read in a new way. Its Web site now features this description of its Kindle reading device: “Availability: In Stock. Ships from and sold by Amazon.com. Gift-wrap available.” This good news for consumers comes after the first batch of the devices sold out in just six hours late last year.

This seems like a fitting time to ask: If the Internet is the most powerful communications advance ever – and it is – then how do this medium and its new devices affect how and what we read?

Aristotle lived during the era when the written word displaced the oral tradition, becoming the first to explain that how we communicate alters what we communicate. That’s for sure. It’s still early in the process of a digital rhetoric replacing the more traditionally written word. It’s already an open question whether constant email and multitasking leaves us overloaded humans with the capability to handle longer-form writing.

Read the entire article at Wall Street Journal

The Author Function in Remix, by Eduardo Navas

Image sources
Barthes (left): Project Narrative
Foucault (right): K-punk

The Author Function in Remix

This text is a theoretical excerpt from one of my chapters on the role of Remix in Art. It outlines how the theories of Roland Barthes and Michel Foucault on authorship are relevant to New Media, particularly their link to the interrelation of the user and the maker/developer in terms of sampling (a vital element of Remix as discourse). While the text does place a certain emphasis on art, the propositions extend to various areas of culture. It was previously presented as a lecture for ICAM at UCSD on April 6, 2005:
http://navasse.net/icam/icam110_spring05_schedule.html

Remix is Meta

The act of remixing (which I refer to in terms of discourse as Remix) developed as a meta-action. Its specificity in the second half of the twentieth century can best be understood when one realizes that the strategies of artists throughout the first half of the twentieth century had to be assimilated in order to be recycled as part of the postmodern condition in the second half—a time when remix proper developed in music. The acts of collage, photomontage and the eventual development of mixed media had to be assimilated, not only by the visual arts but also by mainstream media, for the concept of remixing to become viable in culture. Remix’s dependency on sampling questioned the role of the individual as genius and sole creator, who would “express himself.” Sampling, then, allows for the death of the author; therefore, it is no coincidence that around the time when remixes began to be produced, during the sixties and seventies, authorship—as discourse—was entertained by Roland Barthes and Michel Foucault, respectively. For them, “writing” in the sense that Rousseau would promote the expressive power of the individual was no longer possible. Sampling allows the postmodern condition (which some consider to be part of modernism) to come through. To this aspect of sampling we will turn in the next section. What follows is an outline of Barthes’s and Foucault’s respective theories, which were conversant with contemporary art practice during the period when both authors developed them; as will become evident throughout my argument, their ideas are quite relevant to media culture.

The Role of Author and the Viewer

In his essay “The Death of the Author,” Roland Barthes questions the concept of authorship. For him it is the text that speaks to the reader. He writes, “A text is made of multiple writings, drawn from many cultures and entering into relations of dialogue, parody and contestation, but there is one place where this multiplicity is focused and that place is the reader, not, as was hitherto said, the author.”[1] With this statement he summarizes his argument that we should treat the text not as something coming from a specific person, but as something that takes life according to how the reader interprets the writing as a collage of diverse sources. For Barthes, it is the reader who holds the real potential to make discourse productive. He looks at specific authors, like Proust, Mallarmé and Valéry, as authors who “restore the place of the reader.”[2] The author ceases to matter for Barthes because only in this way can the text be set free, for to have “an Author is to impose a limit on that text, to furnish it with a final signified, to close the writing.”[3] Barthes wants the reader to overthrow the myth of the author as “genius” as it has been promoted since the Renaissance. For Barthes, the text’s unity is not in its origin but in its destination. And only the reader can define that. It is the reader who completes it.

Michel Foucault also questions the role of the author in contemporary culture, but unlike Barthes, who only pointed out the necessity to shift our cultural attention from the author to the reader, Foucault concludes that even though the death of the author as a great individual has been claimed, the notions supporting such a claim have actually only renegotiated the privilege of authorship.[4] To prove this Foucault examines two notions supporting contemporary discourse. The first is the concept of the work, which includes everything an author has written, and the second is the notion of writing, which during Foucault’s time and even in our times pretends to function autonomously. Foucault claims that this is not so and sets out to prove his point by defining his own term, the “author function.” Foucault considers the author function to provide a way of controlling discourse. This is actually not too different from how Barthes considers authorship a way of limiting the possibilities of the text. The author function is a classificatory function.[5] It is not universal, although such discourse could be presented as such. The author function is not created by a single individual but rather is a complex web of power shifts that leads up to the construct of the author.[6] The author function becomes clear when Foucault explains it in relation to Marx and Freud, two “authors” who created discourses that follow their names, Marxism and Freudianism (or psychoanalysis). Foucault reasons that these two authors developed concepts that were reevaluated by later generations.
Such discourses can be changed whenever one refers back to the origin of the argument to question it, which is not necessarily true for the natural sciences, as he explains: “A study of Galileo’s works could alter our knowledge of the history, but not the science, of mechanics; whereas a re-examination of the books of Freud and Marx can transform our understanding of psychoanalysis or Marxism.”[7]

In other words, discourse as developed by an author can be changed. While Foucault went further than Barthes and explained the power dynamics supporting the author, he also agrees with Barthes that one day the author, or the “author function” for him, will disappear: “We can easily imagine a culture where discourse would circulate without any need for an author. Discourses, whatever their status, form, or value, and regardless of our manner of handling them, would unfold in a pervasive anonymity. No longer the tiresome repetitions.”[8] One can notice hope in Foucault’s final statement for a time when a more democratic model would be at play; this has been a pronounced interest of artists and media researchers, and has provided fuel for the historical and neo-avant-garde to stay active since the beginnings of modernism. Barthes’s and Foucault’s reflections on authorship were already being put into action in their own time by Conceptual and Minimal art practices, which relied largely on appropriation and allegory to derive critical commentary. The notion of authorship which they examined can now be assessed, especially in relation to new media practice, which is largely dependent on the “reader” or user, as the participants are commonly called. This particular dynamic is actually an extension of sampling, which started during the early days of modernism with photography and music.

Sampling allows for the death of the author and the author function to take effect once we enter late capitalism, because “writing” is no longer seen as something truly original, but as a complex act of resampling and reinterpreting material previously introduced, which is obviously not innovative but expected in new media. Acts of appropriation are also acts of sampling: acts of citing pre-existing text or cultural products. (Let us extend the term “text” here to the visual arts and media at large.) This is the reason why citations are so necessary in academic writing, and certainly it is something that is closely monitored in other areas of culture, like the music industry, where sampling is carefully controlled by way of copyright law. So, writing in the sense it held before the Enlightenment no longer takes place. Instead, the careful choices of preexisting material made by authors in all fields are revered. Our most obvious example is the work of Duchamp (which I have cited in my definition of Remix[9]), who understood this so well that he decided to simply choose readymades as opposed to trying to create art from scratch; he understood the new level of writing, or creating, that was at hand in modernism, which entered a stage of meta—of constant reference, relying on the cultural cache of pre-existing material.

So writing’s and art’s true power is selectivity, and this comes forth today in sampling, a privileged symptom of the postmodern. The selectivity found in the death of the author and the author function, as defined above, is what makes the notion of interactivity so easily assimilated through sampling. For example, once cut/copy and paste is assimilated not only as a feature for the user to write her own texts but also to reblog pre-existing material, the user becomes more of an editor (a remixer) of material, reblogging under a new context, as a new composition that allegorizes its sources. This possibility of selecting and editing to develop a specific theme according to personal interests plays a key role in how the art viewer, or new media user, will relate to the person who produced the object of interaction. This shift, while redefining the concepts of creativity and originality, also creates new challenges for the media producer.

[1] Roland Barthes, “The Death of the Author,” Image Music Text (New York: Hill and Wang, 1977), 148.
[2] Ibid.
[3] Ibid.
[4] Michel Foucault, “What Is an Author?,” The Art of Art History: A Critical Anthology, ed. Donald Preziosi (New York and Oxford: Oxford University Press, 1998), 299-314.
[5] Ibid, 305-307.
[6] Ibid, 308-09.
[7] Ibid, 312.
[8] Ibid, 314.
[9] See: “Remix Defined,” Remix Theory < http://remixtheory.net/?page_id=3>.

Amazon Wiki and Washington Post Remix, by Richard MacManus (Reblog)

Image source: Customer Evangelists

Text source: ZDnet

Original post: November 23, 2005

Two pieces of otherwise unrelated news flew past my eyes today while I was hydroplaning through my RSS Aggregator. The first was that Amazon has apparently launched, or is just testing out, ProductWiki – a way for Amazon users to enter “customer editable product information” that will appear alongside “most, if not all, of the items the company sells”. I haven’t seen any confirmation of this on official Amazon sites or in PR, so consider this an unconfirmed rumor at this point. But it’s certainly a fascinating concept, for anyone and everyone to be able to add information to Amazon product data. I assume that it would be additional data and wouldn’t replace the official manufacturer and retailer data (can you imagine the outcry otherwise?).

In another more substantial piece of news, washingtonpost.com has released a “Post Remix site”, with the witty nickname mashingtonpost.com. They’re doing this “to foster innovation, and because we want to see your ideas about new ways of displaying news and information on the Web.” Some interesting mash-ups that people have done already: News Cloud (a tag cloud), Ripped from the Headlines! (a daily news quiz), world map interface, thumbnail quiz of Arts & Entertainment stories, and washingtonpost.com search results via RSS.

What do Amazon ProductWiki and mashingtonpost.com have in common? They both let users remix existing content and create new content. This is what Web 2.0 is all about, folks.

Busy weekend: Kindle and Facebook beatings, by Dan Farber

Image source: scifi.com

Text source:  ZDnet

Published: November 25th, 2007

Robert Scoble spent the last week giving his new Amazon Kindle ebook reader a test drive, reading a couple of books and declaring the progeny of Jeff Bezos a failure. He thinks the usability and user interface suck, and that it lacks features such as a touch screen, social networking and the capability to send electronic goods to others. He wants version 3.0 of the device.
David Pogue of the New York Times is far kinder to the Kindle.

So if the Kindle isn’t a home run, it’s at least an exciting triple. It gets the important things right: the reading experience, the ruggedness, the super-simple software setup. And that wireless instant download — wow.

Even though most people will prefer the feel, the cost and the simplicity of a paper book, the Kindle is by far the most successful stab yet at taking reading material into the digital age.


Amazon Pitches a Wireless IPod for Books, by Saul Hansell


Jeff Bezos, Amazon’s founder and chief executive, introduces its new e-book reader, called Kindle (Mark Lennihan/Associated Press)

Image and text source: NYTimes.com

November 19, 2007

Amazon.com introduced its electronic book reader today at a packed event in New York. Unlike other products in this area, Amazon’s $399 Kindle is designed to be used without ever connecting to a computer. Instead it has a wireless Internet connection that lets users browse Amazon’s online store on the device and download a book in less than a minute.

Amazon is trying to do for books what Apple has done for music. It has linked its device tightly to its own online bookstore, just as the iTunes music store is tied into the iPod. Amazon has 90,000 titles for sale at launch, including books from all major publishers.

Best sellers and new releases will cost $9.99. That represents a substantial savings off of Amazon’s already discounted prices. Amazon is currently selling hardcover bestsellers for roughly $13 to $20 and trade paperbacks for $8 to $11.

Read the entire article at NYTimes.com 
