Author Archives: tomwink

About tomwink

Graduate student at Rowan University currently researching public spaces and their impact

Generative Poem Reflection 2

Two threads recur in our readings on digital composition: the belief that producers must hand over some control of a text's meaning to their audience, and the idea that we should arrange ideas and information differently, focusing on associative rather than linear connections.

So, how do these ideas fit in with poetry and electronic literature? Word choice is one way. As I mentioned in my other reflection, given the generative poem's code structures (think poetry formatting rules like iambic pentameter or haiku and you've got it), writers aren't really sure when and where any given word will appear. They can estimate where in the poem a word will appear by looking at where they add it into the code, but beyond that, it's random. What does this mean? That word choice cannot be as "random" as it appears. The thought that electronic literature is simply a lot of randomly generated text is a disparaging idea that is bandied about too much. The words in generative poems are not randomly selected; rather, like the words in traditional print poems, they are governed by form.
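The "form" here is literal code: in generators of this family, the poet fills arrays by part of speech and a function assembles lines from them at random. Here is a minimal sketch of the idea; the word lists and function names are my own illustration, not any actual poem's code.

```javascript
// Word banks organized by part of speech, as in Taroko Gorge-style poems.
const nouns = ["river", "mist", "stone"];
const verbs = ["devours", "shelters", "erodes"];
const adjectives = ["patient", "hollow", "grey"];

// Pick a random element from an array.
function choose(words) {
  return words[Math.floor(Math.random() * words.length)];
}

// Assemble one line from the banks. The poet controls the slots,
// but not which word lands in each slot on a given run.
function generateLine() {
  return `The ${choose(adjectives)} ${choose(nouns)} ${choose(verbs)} the ${choose(nouns)}.`;
}

console.log(generateLine());
```

The poet's only certainty is the slot a word can fill, which is exactly why every word in the bank has to pull its weight.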

Further dispelling the myth of random selection: as in all poetry, the words that appear in a generative poem are selected because the writer deems them effective and connected to the topic. However, this must be taken a step further. Words in generative poetry must be especially effective, since, as stated above, we remain unsure when and where they will appear. That means the poet must carefully select words that will impact and further the idea of the poem. The poet cannot afford any weak words; each must be able to be associated with the poem. Takei, George provides a fine example. Lines about warp factors, rapiers, internment camps, action figures, and homosexuality seem odd, to say the least, until you as a reader begin to think in an associative way, looking for connections. Then we can see that the lines are telling us about Mr. Takei's life: from a childhood in an internment camp during WWII, to popularity as Star Trek's rapier-wielding Mr. Sulu, to becoming a leading figure in the gay community.

Associative thinking also encourages the handing over of control in the creation of meaning. We as writers must recognize that meaning is ultimately decided by the reader: what they put into a piece, what they attach to words. Generative poetry naturally extends this. Go back to Takei, George. It does not start with the same line it did when you first went there. This is a conscious decision made by the producer of the poem's code. It is designed to remove the idea that there is only ONE spot to start reading and only ONE spot to end, which would mean that you could be reading a poem the wrong way. Random line generation, as well as the appearance and disappearance of lines, removes the idea that there is only one way to read the poem. This hopefully focuses the reader on absorbing the words rather than on the style; since the lines generate fairly rapidly, the reader must give each its full attention. The style also forces the poet into favoring short phrases and individual words, since the poet can never be quite sure how a line will end up looking. For readers, this means the poet cannot lead them to a conclusion, as is the case in print.

It is strange that generative poetry is not considered as serious a literary style as print genres. Such poems certainly fit the definition of poetry. Even if readers (wrongly) approach generative poetry with opinions based on traditional literature, it cannot be denied that generative poetry is as evocative as its print counterpart, and that the two share more stylistic heritage than might be suspected. If creators and consumers can begin to approach generative poetry and elit on their own terms, then there is no reason that the body of literature cannot make room for these genres.

Categories: elit, ergodic literature, generative poem, information architecture, technology, Uncategorized

Generative Poem Reflection 1

I really enjoyed this kind of approach to writing. The poem is based on a weird dream I had, which stood out all the more because I don't dream. After settling on my title and topic, Tlaloc, an Aztec deity, along with all sorts of apocalyptic imagery, I began my word choices. Before this, though, I looked at the HTML set-up of Tacoma Grunge. I noticed that beyond organizing words by nouns, adjectives, etc., the poet also selects words that will build an environment.

This is something I tried to copy, and it's something I think is a key difference in writing generative poetry. I can estimate, using the code, where and in what circumstances my words will appear, but I cannot really say what lines will appear. So what I tried to do, what I thought I saw Chuck Rybak, author of Tacoma Grunge, doing, was to construct a word bank whose entries all related to the topic at some level.

I did some research about Tlaloc, trying to find ideas or things associated with him that I could use in my poem. Since my dream had a very doom-y perspective, I tried to build this mood in my verb and adjective choices, deliberately choosing harsh sounding words and words that have a negative connotation. Unhappy stuff, lots of death.

In all honesty, I think I went a little too far afield. My poem could be more focused on Tlaloc and his aspects: more water-related material, more Aztec mythology in general, Tlaloc celebrations. I bring in other mythologies (Christian, Norse, Native American, Islamic) as well as various New Age mysticism and fringe cults and what have you. While I think they were interesting, I could have limited my focus. I feel the poem is less about Tlaloc than it could have been, and with a little more research I could come up with a lot more great Aztec- and Tlaloc-specific material.

Making verb selections was interesting. I would pick a word, and then that word would make me think of three others. This was helpful in fleshing out the poem and in making sure that lines, phrases, and words didn't repeat themselves too much, which made everything look new, but it also made me question whether I was writing or simply playing word association. I was trying to associate words with the topic, and I feel the paranoiac atmosphere I was going for gave me some leeway, but I am still not completely satisfied. I'm also split between being unhappy with the way the words took off in a million directions, away from Tlaloc-centric material, and being unhappy because I was projecting a different kind of expectation onto the writing.

In the first attempt, I was much more interested in getting the words up there than anything else. For the final poem, I tried to be a little more cognizant of my placement. To this end, I made an alteration to the HTML code, constructing a variable of just god-names. I also changed some code to fix the s (the letter just seemed to crop up, which led to odd spellings). These are small changes, but I think they improved how my poem looks and reads.
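The two tweaks described above, a dedicated god-names variable and a guard against stray plural s's, might look something like the following. This is a hypothetical reconstruction; the names and pluralization rules are my own, not the poem's actual code.

```javascript
// A dedicated bank of god-names, kept separate from ordinary nouns
// so lines invoking deities can be placed deliberately.
const gods = ["Tlaloc", "Quetzalcoatl", "Huitzilopochtli"];

// Pick a random element from an array.
function choose(words) {
  return words[Math.floor(Math.random() * words.length)];
}

// Naively appending "s" to every noun produces odd spellings
// ("skys", "ashs"); a small helper guards the common special cases.
function pluralize(noun) {
  if (/(s|x|z|ch|sh)$/.test(noun)) return noun + "es";          // ash -> ashes
  if (/[^aeiou]y$/.test(noun)) return noun.slice(0, -1) + "ies"; // sky -> skies
  return noun + "s";                                             // storm -> storms
}

console.log(`${choose(gods)} grins at the ${pluralize("sky")}.`);
```

Keeping god-names in their own variable means the generator can be told exactly which slots a deity may occupy, rather than letting them surface anywhere a noun can.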

I'm totally enamored with the project. I really like seeing how lines are constructed just from a database I set up, and I kept tweeting lines because I was really pleased with some of them. There were lines I would never have thought of writing, and it just keeps going. Poetry to me is all about creating lines that convey extraordinary meaning, and I think that generative poetry is an extremely effective way of doing that, since you add words more carefully when you have less control over where they appear.

Here, anyway, is Tlaloc Grins

Categories: elit, ergodic literature, generative poem, technology, Uncategorized

My infographic reflections


Reflection 1

As a huge baseball fan, I wanted to find an aspect of the sport I could make an interesting presentation on (instead of "Hey, so-and-so can still throw X number of curveballs after X number of years"). In researching baseball (not uninfluenced by the number of "42" trailers), I decided to investigate diversity in baseball. This was further narrowed to African-Americans and their presence in baseball. I've had arguments about this before, and many people simply say something along the lines of "It doesn't matter what color a person is, baseball is just about how good you are as a player." While this is true, I wanted to illustrate that there are roadblocks keeping African-Americans from developing into major-league talent.

One of the problems I encountered with my first draft was a lack of any sense of narrative. I was all over the place. In this draft, I was able to focus on one group, which really helped me show change over time and offer reasons for the change. Since my topic is essentially disparity, I wanted visual reminders. To this end, I used graphs and color options to emphasize the differences. I also made use of the icons/images offered by Piktochart when talking about the reasons offered/suggested for why the African-American numbers have dipped, as well as areas in which baseball is trying to improve. I did this because I thought visuals would make the socio-economics that act against integration more concrete.

Many of the design problems I encountered came from the website's interface. While I was able to graph the 75/856 African-American player representation, I was unable to present more information in that style of graph because Piktochart informed me I did not have access to it. This resulted in my having to scratch-graph it by hand, which explains why some of the baseballs in the second graph in that block are a bit wonky. I think Piktochart would benefit from some sort of "auto-align" tool for different columns and grouped information.

During revisions, aspects of my presentation (icons or what have you) would sometimes disappear for several sessions, which was unnerving. I would replace them in the infographic, and then the originals would reappear, sometimes days later. I also feel that Piktochart is a bit over-eager to group things together. Adding and changing text formats was annoying: it would revert to the default in whatever font, color, or size you wanted to change to, instead of simply changing the existing fonts. As a first-time Piktochart user, I felt the walk-through tutorial could be a little more robust and in-depth.

I feel that an infographic was a good approach to showing rather than telling, and to acknowledging opposing arguments and refuting them.

Reflection 2

I wanted my infographic to be beautiful evidence. So in constructing my presentation, I knew there were definite Tuftean principles I wanted to apply. First, due to spatial considerations, I, like Tufte, would need to become a proponent of visual density. The infographic is a very finite space, and given my topic, one I needed to make the most of. It was at times difficult to strike a balance, and it was in finding one that I began to experiment with font styles and sizes in order to maximize my real estate.

To avoid overpopulating my infographic, I relied partially on the layout of the theme to separate and compartmentalize ideas. Tufte recommends keeping related information at eye level, so readers understand that the information is connected (Tufte p. 91). By quasi-cartouching, I hoped to avoid presenting a confusing, difficult-to-read column of text and image. In this way, I hope to have made it easier to differentiate between ideas in my infographic without making my presentation choppy.

Font use was another way I attempted to distinguish between information types. I limited myself to 5 fonts, 4 font sizes, and 3 colors in my presentation. I wanted to group information by font, so that when readers saw a font that had been used earlier in my presentation, they would automatically make a link between the two pieces of information. I also put a lot of thought into font selection, as Ellen Lupton, author of Thinking With Type, writes that there is a whole history and a metaphoric, emotive aspect to font selection. Going off of this, I picked very clear, thin, almost severe "hard" fonts for factual information, and a wider, more spacious, "softer" font for quotes. It was my intention to use these alternating fonts to mimic the idea of "hard facts." Courier New has a sense of coldness (I think due to the thinness of the characters), so I used that font when talking about the real world and its inequality. Likewise, I chose a font for the quotes that counterbalanced this, reasoning that the quotes are from people trying to put their own spin on the situation, and thus are a bit inflated.

I think producing an infographic is a great working example of a selection from Tufte's book: "whatever evidence it takes to understand what is going on" (Tufte p. 78). Piktochart offers a lot of images and icons, which could easily be converted into the type of distracting "phluff," as Tufte calls it, that litters many PowerPoint presentations. The challenge is to use these images as a mode of information, or as repetitive information. In my infographic, I use visuals (baseballs for the timeline, a schoolhouse, etc.) to reinforce the topic. This can be used to particular effect in graphing. Instead of using a pie or bar graph, which provides abstract visuals, I chose to display my data in a more visually appealing way. Instead of impersonal lines and circles, my graphs recall humanity. Further, the use of color and countable icons in my graphs is superior to visually abstract pie and bar charts when trying to project disparity.

I tried to show forward progress (where it existed) and used comparison whenever possible. Tufte argues for comparison when presenting data, as it provides context for information. So, for everything I touched upon, I tried to provide a scale. Without such context, we can't really ascertain whether there is a problem. Scale comparison was especially important to my presentation, as it is about a group being marginalized. Instead of just saying there is a problem in baseball now, we must look at this year's numbers as they relate to the whole history of African-Americans in baseball. We must look at contributing factors, and compare the African-American baseball population to the African-American total population. Only through examining these factors can we accurately declare that there is a real problem with the number of African-American players represented.

I relied heavily on Tufte's and Lupton's theories in designing my infographic. Without their influence, I feel that my presentation would have been "pretty" (if I was lucky) without really having anything to say, which is a damnable sin in evidential presentations. Being able to go to their works for reference helped me understand a way of putting content first, then using other aspects of the presentation as enhancements.

By using Lupton’s approach to layout, I feel that my attempt at a Tuftean infographic was as successful as a first-timer could be.

Categories: #IAMondays, baseball, class activities, diagrams, evidence, infographic, information architecture, tufte

Tweetping and the Semantic Web

Recently, recurring birthday girl Devon tweeted about Tweetping, a website tracking worldwide tweet counts. Tweetping may be a first step toward realizing a Semantic Web. The term, coined by Tim Berners-Lee, describes a web network in which machine-read metadata recognizes relationships between webpages and web searches, and attempts to establish these links so that users can more accurately and conveniently access the web.

I've had Tweetping open for maybe ten minutes now, and I've watched the number of tweets climb rapidly. What the site records (tweets by characters used; by hashtag, in other words by folksonomy, an organically developed content-retrieval tag instigated by small groups or individuals; and by words, mentions, and place of origin) is the beginning of the Semantic Web. These ontologies, sets of data within a domain of discourse, hint at the possibilities of a Semantic Web.

The computer/algorithm is tracking data, but to be part of the Semantic Web, data needs to be relatable. Tweetping may track hashtags and @mentions, but offers no way of viewing the amount of times any given hashtag was used, or who/where/when the hashtag was used. In Semantic Web terms, this is provenance. Provenance is an important aspect of the Semantic Web because it can show what information is available to different areas of the globe, the density of information/technology availability (Africa has noticeably fewer tweets than other inhabited areas), and how inhabitants in the area feel about the information.  Archiving is an important function in the Semantic Web, as it is through archiving that metadata relationships can be recognized.
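At its simplest, the kind of tracking Tweetping does comes down to extracting folksonomy tags from a stream of text and counting them. Here is a minimal sketch with made-up sample tweets; this is my own illustration of the idea, not Tweetping's actual code.

```javascript
// Hypothetical sample tweets; in practice these would come from a live stream.
const tweets = [
  "Watching #StarTrek reruns tonight",
  "New generative poem up #elit",
  "Word banks are wild #elit #StarTrek",
];

// Tally how many times each hashtag appears across the stream.
function countHashtags(tweets) {
  const counts = {};
  for (const tweet of tweets) {
    // A hashtag: '#' followed by letters, digits, or underscores.
    for (const tag of tweet.match(/#\w+/g) || []) {
      const key = tag.toLowerCase(); // normalize so #Elit and #elit merge
      counts[key] = (counts[key] || 0) + 1;
    }
  }
  return counts;
}

console.log(countHashtags(tweets)); // e.g. { '#startrek': 2, '#elit': 2 }
```

Provenance would mean keeping more than the tally: who used the tag, where, and when, so that the relationships between uses can be archived and queried later.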

Moving toward a Semantic Web changes conceptions of websites and of what counts as data. Twitter is an ideal example, as the social media platform is often derided for users' tweets being thin, largely irrelevant posts (i.e., "oh snap"). But authors Tim Berners-Lee, Nigel Shadbolt, and Wendy Hall would argue that Twitter is a great example for Semantic Web development, given the website's tendency toward shared conceptualization and peer-to-peer protocols. Twitter has changed how information is recorded, communicated, and archived. Twitter users have the ability to list information by topic, or to group information in any way they want. These lists can also be shared, altered, etc.

While Tweetping is an imperfect example of the Semantic Web, it is a step in the right direction, and it can easily be adapted: it would not take much to expand Tweetping to track tweets by topic, area, etc. As it stands, it is an interesting look at how machines, and not humans, track data. It also stands as an interesting contribution to web science, a discipline that seeks to understand how information systems (both human and machine) operate on a global scale.

Tweetping offers an interesting look at global data trends, though again, only through seeing repeated hashtags in this version. As users of the Semantic Web, we must be aware that we are not just tweeting, not just blogging, not just idly surfing the web. We are contributing to the global information database.

Categories: #IAMondays, information architecture, mapping, semantic web, twitter

Baseball infographic



Here’s my baseball infographic. Feedback would be lovely.

Categories: #IAMondays, baseball, diagrams, images, infographic, tufte

Asking Questions Without Expecting Answers

This past week, we read and discussed literary historian Franco Moretti's book, Graphs, Maps, Trees: Abstract Models for Literary History. While his layout might not be the most beautiful (several classmates expressed a Tuftean dislike for his placement of visuals and his scales), Moretti's key idea is revolutionary and exciting.

His concept of distant reading (focusing not on individual words and individual texts, but on genres and literary trends) is one that helps us as consumers and presenters of information. By using his process of deliberate reduction and abstraction, we can parse away unnecessary details that might otherwise draw our attention away. A perfect example of this is Charles Minard's map of Napoleon's retreat from Russia.

(Image: Minard's map, with its tan line thinning as the army dwindles.)

Notice that superfluous details (a geographic map, descriptions of army components, even Napoleon's name) are left out of the graph. These are all things that would distract us from what we are trying to understand; to borrow Tufte's word, chartjunk. As Moretti puts it, distance is not an obstacle but a specific form of knowledge.

Graphs are important because, to Moretti, they offer quantitative research, "a type of data which is ideally independent of interpretations."

We made our own graph. For zombie movies. Because genres.  (The rise/fall of genre and literary trends is a large topic of Moretti’s book.)

The construction of the graph brought up an interesting question. As consumers of evidence, are we too schooled to ask Tufte's question of "compared to what?" and to expect visuals to provide perfect and complete context for, and understanding of, their topic? As a class, we were eager, almost over-eager, to explain the sudden rise of zombie flicks. Each rise, surely, is instigated by a worldwide disease outbreak. No, the spikes reflect a war beginning or ending.

It could be both. It may be neither. To be honest, we don't know, and it doesn't really matter. We got caught up in the why-did-this-happen at the cost of saying, OK, this happened. Moretti says that we put many people off teaching because we have the answer to everything.

Are we as consumers and producers of information too eager to apply readings and rationales to graphics? Are we, as Moretti implies, unwilling to accept that we might not have the answers?

Such questions clearly shade how we think of ourselves as presenters. Instead of presenting to prove a point, maybe we should provide, and ask others to provide, raw data.

Going back to our zombie chart, we were very excited to be able to show that other things were happening concurrently with the rising spike in zombie movies. But we need to temper this excitement of proving with the threat of false causality. In class, we could not prove that zombie movies shot up due to global illness or war. It just looked coincidental, and we made the assumption that the twain were linked. THIS IS BAD! Both may prove to have some influence on the success of the zombie genre, but without hard proof, it does us no credit to make suppositions.

This needs to inform our evidence presentations. We must be wary of presenting information in a way that makes inaccurate connections. We, as presenters, must swallow our pride, admit when we do not know, and simply present what we have as raw data.

Moretti is more than happy to conclude passages in his book by admitting he has no answers, that there might not be any answers. This is an attitude we should adopt. It may be infuriating to some, but sometimes the smartest thing we can say is, "I don't know."


Categories: diagrams, evidence, franco moretti, mapping, tufte
