The following is a presentation separated into two parts; it was produced for the conference Re*-Recycling_Sampling_Jamming, which took place in Berlin during February 2009.
Part One (above) introduces the three chronological stages of Remix, while Part Two (below) defines how the three chronological stages are linked to the concept of authorship, as defined by Roland Barthes and Michel Foucault. Also see my previous entry “The Author Function in Remix,” which is a written excerpt of the theory proposed in Part Two.
Below is the abstract that summarizes the content of the two videos. Total running time is around fifteen minutes.
———–
Text originally published on Re*- in February 2009:
SAMSTAG_28.02.2009_SEKTION IV_15-20 UHR
12_15:00 Remix[ing]. The Three Chronological Stages of Sampling
Eduardo Navas, Künstler und Medienwissenschaftler, University of California in San Diego (USA)
Sampling is the key element that makes the act of remixing possible. In order for Remix to take effect, an originating source must be sampled in part or as a whole. Sampling is often associated with music; however, this text will show that sampling has roots in mechanical reproduction, initially explored in visual culture with photography. A theory of sampling will be presented which consists of three stages. The first took place in the nineteenth century with the development of photography and film, along with sound recording; in this first stage, the world sampled itself. The second stage took place at the beginning of the twentieth century, once mechanical recording became conventionalized and early forms of cutting and pasting were explored; this is the time of collage and photomontage. The third stage is found in new media, in which the two previous stages are combined at a meta-level, giving users the option to cut or copy (currently the most popular form of sampling) based on aesthetics rather than the limitations of media. This is not to say that new media has no limitations, but exactly what these limitations may be is what will be entertained at greater length. The analysis of the three stages of sampling that inform Remix as discourse is framed by critical theory. A particular focus is placed on how the role of the author in contemporary media practice is being redefined in content production due to the tendency to share and collaborate. The theories on authorship by Roland Barthes and Michel Foucault are entertained in direct relation to the complexities that sampling has brought forth since it became ubiquitous in popular activities of global media, such as social networking and blogging.
On March 26, Parsons the New School for Design and MoMA, in collaboration with IFF, Seed, and Coty, will present Headspace: On Scent as Design. A one-day symposium on the conception, impact, and potential applications of scent, the event will gather leading thinkers, designers, scientists, artists, established perfumers as well as “accidental perfumers”—architects, designers, and chefs—to acknowledge scent as a new territory for design. Seed sat down with organizers Paola Antonelli, Véronique Ferval, Jamer Hunt, Jane Nisselson, and Laetitia Wolff to discuss why we tend to overlook the importance of scent, our increasingly antiseptic, smell-free lives, and how our lives could change when we begin to tap into the rich olfactory dimension of design.
What inspired Headspace?
The idea that led us to organize Headspace is that scent is not only a medium for design, but also a design form in its own right. Perfumers and scientists working on scent perform a design act every time. Sometimes it is good, sometimes mediocre. It can be very commercial, or more limited and idiosyncratic. Just like other forms of design, it is targeted to the goal at hand, whether the creation of a new clothing detergent with universal appeal or of a unique scent that will touch only a few dozen wrists. Just like other forms of design, it requires expertise and dedication, not to mention talent. We are therefore not advocating that any self-described designer should also feel free to tackle scent, but rather that designers should be aware of the spatial and perceptive potential of scent, and that perfumers should realize that they are engaged in design and take advantage of that knowledge.
Why is the smell experience of an object or an environment so often ignored or treated as less significant than the visual and, when it applies, aural, tactile or taste experience?
Scent happens both before and behind all other senses. Scents hit us directly through the limbic system; they are more pre-cognitive and emotional, and for that reason they are harder for our minds to compute. Language doesn’t really seem up to the task of expressing all that scent means to us, or triggers within us. We ignore olfactive input because we have not been educated in a language with which to express any perceived gradations. Thus, we are still at the level of the “grunt,” limited to broad terms like good, bad, ugh, and sweet.
History has helped smell’s downfall, too. With the Enlightenment era came a certain rationalization of our senses, in which knowledge, culture, class, and intelligence were associated directly with vision, whereas smell was associated with bodily fluids, dirt, and poverty. We seem to still be shaped by that dichotomy, and we therefore miss out on one of our great cognitive gifts.
An approach similar to the wine industry’s could motivate the public to acquire an education and a vocabulary with which to share their olfactive experiences. We have cultivated a sophisticated approach to flavor that makes us think we can really choose among twelve types of salt and twenty-five types of olive oil. There is no comparable relationship in the domain of smell that invites and rewards people for cultivating and pursuing odor distinctions and experiences.
Social history has encouraged a discomfort with our beautifully functional nostrils. It is time to reclaim them!
Originally published on December 15, 2009 on Seed Magazine
by Lee Billings
In April 1965, a young researcher named Gordon Moore wrote a short article for the now-defunct Electronics Magazine pointing out that each year, the number of transistors that could be economically crammed onto an integrated circuit roughly doubled. Moore predicted that this trend of cost-effective miniaturization would continue for quite some time.
Three years later, in 1968, Moore co-founded Intel Corporation with Robert Noyce. Today, Intel is the largest producer of semiconductor computer chips in the world, and Moore is a multi-billionaire. All this can be traced back to the semiconductor industry’s vigorous effort to realize Moore’s prediction, which is now known as “Moore’s Law.”
There are several variations of Moore’s Law—for instance, some formulations measure hard disk storage, while others concern power consumption or the size and density of components on a computer chip. Yet whatever their metric, nearly all versions still chart exponential growth, which translates into a doubling in computer performance every 18 to 24 months. This runaway profusion of powerful, cheap computation has transformed every sector of modern society—and has sparked utopian speculations about futures where our growing technological prowess creates intelligent machines, conquers death, and bestows near-omniscient awareness. Thus, efforts to understand the limitations of this accelerating phenomenon outline not only the boundaries of computational progress, but also the prospects for some of humanity’s timeless dreams.
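The doubling described above can be made concrete with a short calculation. The sketch below (in Python) projects a component count forward under an assumed doubling period; the 1965 baseline of 64 components and the 24-month period are illustrative assumptions, not historical data.

```python
# Illustrative projection of exponential growth under Moore's Law.
# The baseline count and the 24-month doubling period are assumptions
# chosen for illustration, not measured historical figures.

def projected_count(baseline: int, start_year: int,
                    year: int, doubling_months: int = 24) -> int:
    """Project a component count assuming one doubling per period."""
    months_elapsed = (year - start_year) * 12
    doublings = months_elapsed / doubling_months
    return round(baseline * 2 ** doublings)

# With ~64 components in 1965 and a two-year doubling period,
# ten doublings elapse by 1985: 64 * 2**10 = 65536.
print(projected_count(64, 1965, 1985))
```

Even this toy projection shows why the trend transforms industries: a constant doubling period turns a modest baseline into astronomical counts within a few decades.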
An interesting discussion on the work of Cindy Sherman takes place between Judith Butler and a gallery host. Butler discusses the representation and questioning of the vulnerability of women in Sherman’s work, and also shares the formal pleasures she finds in the works of art. The discussion is in German with French subtitles; most of the documentary is in English, also subtitled in French. The segment on Sherman begins around 3:10 and carries over to later segments. I find this documentary excerpt worth noting because it offers a rare moment when a philosopher discusses works of art casually, yet with careful analysis.
I find some of Butler’s premises on performativity to run parallel with the development of Remix, and to be potentially useful to evaluate current concepts on cultural mixing. I say this without claiming that her work could be directly linked to Remix as discourse, but rather that a paradigmatic reflection on her ideas can be helpful in understanding the cultural variables in which remix culture plays out. Not sure how long the documentary may stay on YouTube, but here are the links for future convenient access:
Figure 1: A function f and its Fourier transform F(f). Both the function and its Fourier transform are complex-valued, but in graphs like this only the magnitudes of the functions are shown.
Note: An online page I discovered, apparently last updated in the winter of 2000. It provides a good introduction to the theoretical aspects of sampling.
———-
This document is a short overview of some aspects of sampling theory which are essential for understanding the problems of Volume Rendering, which can be viewed as nothing but resampling a data set obtained from sampling some unknown function.
Prerequisite for this document is a basic understanding of Fourier analysis on an intuitive level. You have to know that a function f(x) in the spatial (or time) domain has a counterpart F(f) in the frequency domain. Any function satisfying some simple properties can be written as a weighted sum of harmonic functions (shifted and scaled sine curves), and (F(f))(s), called the Fourier transform or spectrum of f, gives the weight of the harmonic function of frequency s in f.
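The relationship between a sampled function and its spectrum can be sketched numerically. The example below (in Python with NumPy, using an arbitrary test signal of my own choosing) samples a sum of two sine waves and recovers their frequencies from the magnitude of the discrete Fourier transform — the |F(f)| that plots like Figure 1 display.

```python
import numpy as np

# Sample a test signal f(x) = sin(2*pi*3x) + 0.5*sin(2*pi*7x)
# at N evenly spaced points over one unit interval.
N = 256                      # number of samples
x = np.arange(N) / N         # sample positions in [0, 1)
f = np.sin(2 * np.pi * 3 * x) + 0.5 * np.sin(2 * np.pi * 7 * x)

# The discrete Fourier transform is complex-valued; as in the
# figure, only its magnitude (the spectrum) is usually plotted.
F = np.fft.rfft(f)
magnitude = np.abs(F) / (N / 2)   # normalize so peak height ~ amplitude

# The spectrum peaks at the two constituent frequencies, 3 and 7,
# with weights matching the amplitudes 1.0 and 0.5.
peaks = np.argsort(magnitude)[-2:]
print(sorted(peaks.tolist()))     # [3, 7]
```

This is the discrete counterpart of the weighted-sum idea above: each bin of the transform holds the weight of one harmonic in the sampled function.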
Jeremy Douglass (left) and Lev Manovich (far right) demonstrate how to analyze data on the Hyper Wall at Calit2.
The Cultural Analytics seminar took place at Calit2 on December 16 and 17 of 2009. The event brought together researchers and students from Bergen University and the University of California, San Diego. The two-day event consisted of research presentations and demos of software tools.
Part One of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Introduction to principles of Cultural Analytics.
I will not spend much time in this entry defining Cultural Analytics. This subject has been well covered by excellent blogs such as Open Reflections. For this reason, at the end of this entry I include a number of links to resources that focus on Cultural Analytics. Instead, I will briefly share what I believe Cultural Analytics offers to researchers in the humanities.
Part Two of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Analysis of Vertov’s motion in scenes from Man with a Movie Camera.
This emerging field can be defined as a hybrid practice that utilizes tools of quantitative analysis, often found in the hard sciences, to enhance qualitative analysis in the humanities. The official definition of the term follows:
Cultural analytics refers to a range of quantitative and analytical methodologies drawn from the natural and social sciences for the study of aesthetics, cultural artifacts and cultural change. The methods include data visualization techniques, the statistical analysis of large data sets, the use of image processing software to extract data from still images and moving video, and so forth. Despite its use of empirical methodologies, the goals of cultural analytics generally align with those of the humanities.
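A minimal illustration of the feature-extraction step named in the definition above: the sketch below (in Python with NumPy, operating on synthetic grayscale “frames” rather than an actual film) computes the mean brightness of each frame — the kind of simple per-image measurement that can then be plotted or statistically analyzed across a large corpus, as in the Vertov demonstration.

```python
import numpy as np

def mean_brightness(frames: np.ndarray) -> np.ndarray:
    """Average pixel value per frame for a (num_frames, h, w) array."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

# Synthetic stand-in for sampled film frames: 5 frames of 64x64
# grayscale pixels, growing uniformly brighter over time.
frames = np.stack([np.full((64, 64), 50.0 * i) for i in range(5)])

brightness = mean_brightness(frames)
print(brightness.tolist())   # [0.0, 50.0, 100.0, 150.0, 200.0]
```

The measurement itself is trivial; the analytical interest comes from running it over thousands of frames and reading the resulting curve as evidence about pacing, lighting, or editing.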
One thing that separates the humanities from the hard sciences is the emphasis on qualitative over quantitative analysis. In very general terms, qualitative analysis is often used to evaluate the how and why of particular case studies, while quantitative analysis focuses on patterns and trends that may not always be concerned with social or political implications.
Part Three of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Jeremy Douglass analyzes comic books.
What Cultural Analytics is doing, in my view, is bringing together qualitative and quantitative analysis in the interest of the humanities. In a way, Cultural Analytics could be seen as a bridge between specialized fields that in the past have not always communicated well.
Consequently, when new ground is being explored, questions of purpose are bound to emerge, which is exactly what happened during seminar conversations. As the videos that accompany this brief entry will demonstrate, the real challenge is for researchers in the humanities to engage not only with Cultural Analytics tools and envision how such tools can enhance their practice, but to actually embrace new philosophical approaches that blur the lines between the hard sciences and the humanities.
Part Four of Hyper Wall Demonstration during Cultural Analytics Seminar at Calit2, San Diego, December 16-17, 2009. Cicero Da Silva explains his collaborative project, Macro.
To be specific on the possibilities that Cultural Analytics offers to the humanities, I will cite two demonstrations by Lev Manovich and Jeremy Douglass.
Lev Manovich at one point presented Hamlet by William Shakespeare in its entirety on Calit2’s Hyper Wall, which consists of several screens that enable users to navigate data at a very high resolution.
When seeing the entire text at once, one is likely to realize that this methodology is more like mapping. To this effect, soon after, we were shown a version of the text in which Manovich had isolated the repetition of certain words throughout the literary work.
This approach could be used by a literature scholar to study certain linguistic strategies, such as sentence structure, by an author. Let’s take this a step further and say that it has been agreed that a contemporary author is influenced by a canonical writer. How this supposed influence takes effect can be evaluated by studying certain patterns of sentences from both authors by isolating parts of literary texts for direct comparison. One could then evaluate if the supposed influence is formal, conceptual or both: perhaps the contemporary author might make ideological references that are clearly linked to the canonical author, but which are not necessarily influenced at a formal level; or it could be the other way around, or both. In this case, quantitative and qualitative analysis are combined to evaluate a case study. In other words, pattern comparison is used to understand the similarities and differences between two or more works of literature.
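A minimal version of this kind of pattern comparison can be sketched in a few lines of Python (the two short passages below are placeholders of my own, not actual literary corpora): count word repetitions in each text, then find the repeated words the two share — the quantitative overlap a scholar would then read qualitatively.

```python
from collections import Counter
import re

def word_counts(text: str) -> Counter:
    """Count word occurrences, ignoring case and punctuation."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Placeholder passages standing in for two authors' texts.
text_a = "To be, or not to be, that is the question."
text_b = "The question is not whether to be, but how to be."

counts_a = word_counts(text_a)
counts_b = word_counts(text_b)

# Words repeated in text A, intersected with text B's vocabulary --
# a crude stand-in for isolating recurring patterns across works.
repeated_a = {w for w, n in counts_a.items() if n > 1}
shared = repeated_a & set(counts_b)
print(sorted(shared))   # ['be', 'to']
```

Scaled up from toy sentences to complete works like Hamlet, the same counting-and-intersecting logic is what makes a high-resolution display useful: the computation isolates the repetitions, and the researcher interprets them.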
To this effect, Jeremy Douglass’s presentation of a comic book is important. He explained how, by seeing an entire comic book publication at once, one can study how certain patterns in the narrative come to define the aesthetics for the reader.
While the reader may be able to experience the story in time, by actually reading it, the visualization of the comic book in grid-like fashion–as a structural map–allows the researcher to apply analysis of patterns and trends that may be more common in flows of networks to an actual narrative. Again, in this case we find quantitative and qualitative analysis complementing each other.
As noted at the end of the article “Culture is Data” (also provided below) from Open Reflections, it appears that Manovich is at times understood to argue that one should privilege quantitative over qualitative analysis. This proposition implies an either/or mentality on the part of certain researchers that needs to be reevaluated. Janneke Adema explains Manovich’s answer better than I ever could:
But, on the other hand, won’t we loose [sic] a sense of meaning if we analyze culture like a thing? Manovich argues that this is of course a complementary method, we should not throw away our other ways of establishing meaning. It is a way of expanding them. And it is also an important expansion, for how is one going to ask about the meaning of large datasets? We need to combine the traditionally [sic] humanities approach of interpretation with digital techniques to find out more. And again, meaning is not the only thing to look at. It is also about creating an experience. Patterns are the new real of our society.
The most important thing to understand when evaluating the videos available with this entry is that one need not have a Hyper Wall to do research with Cultural Analytics methodologies; many of the tools can run on a personal computer. It’s more about adopting an attitude and a willingness to do research by combining quantitative and qualitative analysis. At the moment I am evaluating the implementation of Cultural Analytics in my research on Remix.
“Space Junk Spotting,” by Saso Sedlacek, software mashup of Google Earth and a NASA database of space debris.
As part of my residence at the Swedish Traveling Exhibitions, on November 6 I visited Mejan Labs, an art space dedicated to supporting projects that critically reflect on diverse forms of technology. The art space is located in the heart of Stockholm. Director Peter Hagdahl and Curator Björn Norberg greeted me upon my arrival. We spent some time discussing the history of media, and how Mejan Labs is part of the ongoing development of new media art practice. In just three years, Mejan Labs has become an exhibition space worth noting outside of Sweden. I learned about it almost as soon as its first exhibition was launched. It was quite a treat to be able to visit it and meet its founders in person.
At the time of my visit, Mejan Labs featured three works that focused on astronomy, or on the Earth in some abstract form. “Earth and Above,” on view from November 5, 2009 to January 12, 2010, presents the works of three artists: “A Space Exodus” (2008) by Larissa Sansour, “No Closer to the Source” (July 20, 1969) by Lisa Oppenheim, and “Space Junk Spotting” by Saso Sedlacek.
“Wall Drawing #715,” February 1993
On a black wall, pencil scribbles to maximum density. Pencil.
Courtesy Estate of Sol LeWitt
First installation: Addison Gallery of American Art, Phillips Academy, Andover, MA
First drawn by: S. Abugov, S. Cathcart, A. Dittmer, F. Dittmer, L. Fan, C. Hejtmanek, S. Hellmuth, D. Johnson, A. Moger, A. Myers, J. Noble, G. Reynolds, A. Ross, A. Sansotta, J. Wrobel. (Varnished by John Hogan)
Image courtesy of Magasin 3
On November 5, as Correspondent in Residence for the Swedish Traveling Exhibitions, I visited Magasin 3, located in Frihamnen (freeport), Stockholm. Curator Tessa Praun took the time to discuss with me the history of the Konsthall (art space) which opened in 1987, and has since then developed an extensive collection of contemporary art.
In the tradition of appropriation, Magasin 3 takes its name from the building’s original function as a seaport storage facility. The space is hard to find, and one must make a definite commitment to visit it. I was no exception: I first took the subway, then a bus to the end of the line, then walked and (as is probably common for first-time visitors) got a bit lost, but finally found the space.
The Konsthall has a low-key facade and retains the look of an industrial space. Its name is no different from those of the other storage facilities in the area (there are Magasin 1, 2, 4, 5, and more); because of this, it is unlikely that a casual passerby will enter the premises. This exclusivity gives Magasin 3 an elegance defined by minimal aesthetics. Appropriately enough, at the time of my visit, the Konsthall featured minimal drawing installations by Sol LeWitt, curated by Elisabeth Millqvist. The Sol LeWitt exhibition opened on October 2, 2009 and will close June 6, 2010. In what follows, I discuss LeWitt’s work as well as two video installations by British-based Israeli artist Smadar Dreyfus, curated by Tessa Praun.
Description by Färgfabriken: Birthday Party, 2000. Birthday Party (1:10) is a reconstruction of the party for the artist’s mother’s 65th birthday on March 16, 2000. Ten cameras documented the party, and the films were later screened in the windows of a wooden model of the suburban villa.
During the months of October and November, I am working for four weeks as a Correspondent in Residence for the Swedish Traveling Exhibitions (STE), a non-profit organization based in Visby, a small town on the island of Gotland. The institution produces exhibitions of all types that travel throughout Sweden, and is particularly interested in exploring the possibilities of the exhibition space as a mobile unit in all possible forms.
As part of my residency, I am scheduled to visit a number of institutions, mainly in the cities of Stockholm and Goteborg. My first stop was Stockholm, where on Monday, October 20, I visited Färgfabriken, an art space located in a former factory sector. Project Manager Sofia Palmgren generously showed me around the former paint factory, which in 1996 was turned into an art space focused on art as process. The institution has a very open mission statement, but upon examining its archives, it becomes evident that its interest is to deliver conceptually engaging art installations that engage all the senses.
The following text was originally published in August 2009 as part of Drain’s Cold issue. Drain is a refereed online journal published biannually. The text is republished in full on Remix Theory with permission. Drain’s copyright agreement allows for 25% of the essay to be reblogged or reposted on other sites with proper citation and linkage to the journal at http://www.drainmag.com/. I ask that the online community respect this agreement.
In 1964 Marshall McLuhan published his essay “Media Hot and Cold” in one of his most influential books, Understanding Media.[1] The essay considers the concepts of hot and cold as metaphors to define how people before and during the sixties related to the ongoing development of media, not only in Canada and the United States but also throughout the world.[2] Since the sixties, the terms hot and cold have become constant points of reference in media studies. However, these principles, as defined by McLuhan, have changed since he first introduced them. What follows is a reflection on such changes during the development of media in 2009.
McLuhan is quick to note that media is defined according to context. His essay begins with a citation of “The Rise of the Waltz” by Curt Sachs, which he uses to explain the social construction behind hot and cold media. He argues that the waltz during the eighteenth century was considered hot, and that this fact might be overlooked by people who lived in the century of jazz (McLuhan’s own time period). Even though McLuhan does not follow up on this observation, his implicit statement is that how hot and cold are perceived in the twentieth century is different from the eighteenth. Because of this implication, his essay is best read historically. This interpretation makes the reader aware of how considering a particular medium as hot or cold is a social act, informed by the politics of culture. McLuhan’s first example demonstrates that, while media may become hot or cold, or be hot at one time and cold at another, according to context, the terms themselves are not questioned, but rather taken as monolithic points of reference. To make sense of this point, McLuhan’s concepts must be defined.