
Author Archive

Twidroid Review

December 13th, 2009 1 comment

I picked up the myTouch 3G about a month ago on a whim, fueled by my recent desire for a phone that lets me access Twitter a little more effectively than Virgin Mobile’s ARC.

Twidroid is a third-party Twitter application developed by Ralph Zimmerman and Thomas Marban for the Android operating system.  It is available for free on the Android Market, which is accessible both from the phone and from a computer.  Mashable.com, “the world’s largest blog focused exclusively on Web 2.0 and Social Media news” (their own words), rated Twidroid as one of the best free Twitter applications for Android.  How could I say “no” to something so highly regarded?  And free?

Before I continue, let me say that my photos were taken with a digital camera.  Getting screenshot software to work was more complicated than I imagined, and I just don’t have the knowledge or time to deal with the process.

Once opened, Twidroid presents itself in a rather straightforward manner.  Tweets take up most of the screen, while several icons sit along the bottom:

[Screenshot; photo courtesy of michael-lipson.com]

This is the home screen.  To send a tweet, I just press the speech bubble on the bottom bar, just to the right of the house icon.  A text field appears at the top of the screen, and tapping it brings up the 3G’s keyboard at the bottom.  From there it’s just a matter of carefully entering whatever message I want with my clumsy fingers, but my problems with the 3G’s keyboard are for another post.

Pressing the @ icon opens my list of mentions, displayed in reverse chronological order.  The envelope icon shows my direct messages in a similar fashion.  The magnifying glass icon opens the search tool, with which I can search for other users and keywords.  The circular arrow on the far right refreshes whatever list I’m looking at, which is quite useful when I have the automatic refresh set to longer intervals or when I’m engaged in a conversation that requires a certain swiftness in replies.

To reply to another user’s tweet, I just press the arrow to the right of the post, and a menu appears:

[Screenshot of the tweet menu]

From here I am able to reply, look at the user’s profile, favorite the tweet, retweet it, send the user a direct message, copy the tweet to my phone’s clipboard, share it (email, Facebook, SMS), or report the user as spam.  The last two options aren’t visible in the picture, but the menu scrolls down.

The menu button on my 3G opens another menu at the bottom of the screen:

[Screenshot of the Twidroid menu]

From here I can jump to the top of the tweet list, enter Twidroid’s settings, view my lists, view my profile, or exit Twidroid.  The “More” icon opens a submenu with access to my Twitter accounts, my favorites, and an option to manage my lists, though list management is only available in Twidroid PRO, the paid version.

Viewing my own profile on Twidroid is quite similar to viewing it on Twitter’s website: Twidroid displays personal information at the top, the icon to the right, and tweets below, whereas Twitter keeps the personal information confined to the far right.

[Screenshot of my profile in Twidroid]

The strong similarities between Twitter and Twidroid let users shift comfortably from one to the other without being confused by interface differences.  In this particular case, the layout is simple enough to navigate without much prior knowledge of Twitter.com; you can learn it simply by exploring the application.  But that can be said of anything.  The best way to learn a new skill is by using it.  You’ll be clumsy and uncomfortable at first, but all new interfaces are reflective for a while, and “even the most reflective interfaces tends toward transparency as a user becomes accustomed to it,” so sayeth Collin Brooke in Lingua Fracta, page 133.


The “New Moon” Craze.

November 21st, 2009 2 comments

Although I’d rather not bring it up at all, I think it’s particularly relevant to our discussion: I saw “New Moon” last night.  I haven’t seen “Twilight” and I have no interest in the genre at all, but my girlfriend enjoys it, so I treated her to opening night.  It was my first opening-night viewing since “Team America: World Police”, but this time the theater was far more packed.  The audience was mostly teenage girls, but every so often I could spot a parent or a boyfriend who, like me, probably didn’t care about human-vampire romances.  I bought our tickets for the 10:46 show because every show between 4:15 and 10:45, about seven in all, was sold out.  I had never seen so many sellouts, and it made me curious how well the movie was doing elsewhere.

Once the movie ended and I made it back home with my sanity intact, I took a peek at some statistics.  A Huffington Post article reported the following:

According to online ticket seller MovieTickets.com, “New Moon” is the No. 1 Advance Ticket Seller of all time, surpassing “Star Wars Episode III: Revenge of the Sith,” which previously held the title.

News organizations nationwide reported their local theatres selling out, with many camped out for hours to stake out their spot for the heavily-hyped midnight premiere. Before even hitting the screen, it was reported Thursday that more than 2,000 theatres sold out.

Two thousand theaters sold out on advance ticket sales alone.  That’s simply amazing.  Also surprising: the opening-day madness broke the record previously held by the latest Harry Potter film, “Harry Potter and the Half-Blood Prince.”  This was reported by the NY Daily News in its own midnight-showing article, ‘New Moon’ Opening Night Sales: Box Office Breaks Record for Midnight Screenings.  Some more figures from them:

  • “New Moon” raked in approximately $26.27 million in 3,514 theaters
  • “Harry Potter and the Half-Blood Prince” earned $22.2 million
  • “The Dark Knight” comes in third, having drawn in $18.5 million

Variety.com claims that “New Moon” made $72.7 million on its opening day, Friday, beating out “The Dark Knight”, which had $67.2 million.  That is a remarkable figure.  “New Moon” more than doubled the opening-day revenue of its predecessor, “Twilight,” which took in about $36 million just a year ago.
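A quick check, using only the figures quoted above: $72.7 million ÷ $36 million ≈ 2.0, so “more than doubled” holds up, at least by these reported numbers.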

So why in the world did “New Moon” do so well?  It has everything to do with how “Twilight” carved out a new genre and created for itself an entire world of merchandising possibilities.  The incompleteness of the movie allows fans to construct their own interpretations and carry the movie’s ideas along new paths, and it gives the creators room to expand and build upon their work.  Chuck Tryon, in “Reinventing Cinema: Movies in the Age of Media Convergence,” explains how the incompleteness of “The Matrix” inspired the creation of a huge franchise involving “video games, comic books, and online communities and alternative reality games” (29).  The popularity of “Twilight” was propelled by fan blogs and entertainment blogs, and, unlike “The Matrix,” which appealed more to online gaming (The Matrix Online), it spawned series after series of published novels.  Visit the Barnes and Noble in Deptford, NJ, swing by the Teen section (which sits next to Writing References, oddly), and you’ll see several bookshelves paying homage to the ideas of “Twilight”.

The great Yogurt in “Spaceballs” said it best: “Merchandising! Merchandising! Where the real money from the movie is made!”  Today, we accept this truth without thinking about it too much.  Much of the merchandise falls into the standard categories, such as t-shirts, book covers, and posters, but as the fan base grows, a certain percentage tends to become more devoted, which always results in stranger merchandise.  This is not specific to the “Twilight” series, though.  We can see the same thing with any film culture, such as “Star Wars”.

The year between “Twilight” and “New Moon” allowed the fan base to grow almost exponentially.  Tryon, in his blog, credits this rapid expansion to the speed of publication, but stops short of calling it good or bad:

I’m not ready to argue that this process – in which gossip and entertainment bloggers rush to satisfy the voracious interest in Twilight films – is harmful…

But I think it does speak to one of the ways in which the “industry” of blogging – the modes of producing a profit – begin to shape how a film gets covered and even risks drawing attention from lesser known films.

He concludes that thought by saying that online social media tools are an important part of how a movie is received, promoted, and discussed.  In the case of “Twilight” and “New Moon”, their popularity depended heavily on those social media tools (fan blogs, Facebook, Myspace, film blogs, etc.).  Now that fans of the series have the latest installment, and given how it ended (I won’t spoil that for you), I foresee an even larger turnout for the next movie.

Bill Wasik Visits Google

November 14th, 2009 No comments

As part of Google’s “Authors@Google” series, Bill Wasik discusses “And Then There’s This: How Stories Live and Die in Viral Culture.”  I thought it’d be appropriate to share this video before we move on with our own discussion.


New Media Education

November 6th, 2009 No comments

The incorporation of new media in the classroom has been an ongoing process.  In the mid-1960s, bulky vacuum-tube computers were establishing a presence at well-to-do universities, and smaller mainframes and minicomputers were starting to be used.  According to Catherine Schifter’s 2008 article, “A Brief History of Computers, Computing in Education, and Computing in Philadelphia Schools,” computers in the 1960s were often used for computer-assisted instruction.  Many teachers were hesitant to use this new technology and preferred teaching with tools they already knew rather than this “alien” technology.

It wasn’t until the rise of Apple, and its donations of computers to schools and universities, that classes began to rely on computers as an educational tool.  In the 1980s, computer classes, or “labs”, became part of the curriculum.  However, the use of computers was still largely restricted to teaching computer literacy.  Computer skills weren’t needed in other classes, or if they were, they were used at a very basic level (simple math problems, science quizzes, etc.).  Their use for higher-level teaching was not popular outside of programming courses at universities.  Apple’s development of a decent word processor, the AppleWorks suite, made word processing common starting in 1984, but computer-based typing classes wouldn’t reach high schools until around 1990.

Then textbooks began supplementing their material with 3.5-inch floppy disks and CD-ROMs.  The multimedia program HyperStudio introduced high school students to multimodality in text.  Computer rooms in schools were becoming more and more common as this “Internet” thing was slowly recognized as more than just a fad.

Now, computers and education have become integrated down to the elementary level.  The importance placed on computer literacy, along with teachers who are comfortable with computers, has facilitated that integration.  A first grade teacher at Prairie South School in central Saskatchewan, Canada, uses technology daily with her six- and seven-year-old students.  They routinely use the Internet and even keep their own class blog on Blogmeister.  The following video was made by the class and is an example of just how fundamental new media has become in our schools.  (Pardon the music.)

However, even though schools across the nation have created a multitude of computer classes and worked computer use into their respective curricula, there is still a need for more systematic educational standards.  The participatory nature of new media presents many obstacles and questions that children, if left on their own, may or may not successfully navigate to become active and intelligent members of this new culture.  This is the argument Henry Jenkins et al. construct in Confronting the Challenges of Participatory Culture: Media Education for the 21st Century.  They stress that young people need to develop a certain set of skills to achieve such participatory status: play, performance, simulation, appropriation, multitasking, distributed cognition, collective intelligence, judgment, transmedia navigation, networking, and negotiation.

They ask and address three questions on page 56 to which that skill set needs to be applied:

  • How do we ensure that every child has access to the skills and experiences needed to become a full participant in the social, cultural, economic, and political future of our society?
  • How do we ensure that every child has the ability to articulate his or her understanding of the way that media shapes perceptions of the world?
  • How do we ensure that every child has been socialized into the emerging ethical standards that will shape their practices as media makers and as participants within online communities?

These questions raised by Jenkins and his colleagues are ones that educators and scholars have been asking ever since computer technologies came to be seen as an important yet neglected learning tool.  Teachers all over the world have been grappling with this problem and adjusting their courses accordingly; however, new media have advanced incredibly fast in the last decade.  Administrations have been hard pressed to adjust so quickly, and while teachers may be more capable of impromptu adaptation, the educational system is a slow giant.  We need to look at how schools are helping students become active participants in our “Web 2.0 culture” and determine what we can do to improve that transformation.

The Needs of the Many?

October 26th, 2009 3 comments

This weekend, I downloaded “District 9”, “Bubba Ho-Tep”, and “Zombieland” with the BitTorrent client uTorrent.  It’s a handy little tool, and I’ve been using BitTorrent regularly since 2003 to download large files such as movies, games, and music albums.

For smaller files, such as individual songs or the various essential component files needed to run programs properly, I use LimeWire.  The file-sharing program DC++ was helpful in the past, but only on the campus network.

Back when I played computer games with near-fanaticism, I searched high and low for CD cracks, passwords, DVD image rippers, and various what-have-you.

So, suffice it to say, I am a pirate.  If you asked me why I do this, I’d have a hard time casting myself in a positive light.  I will say that I try to find the best deals on goods: if I can get a product for free without leaving home, I’m not going to spend fuel and money just for the box it comes in.  I’m not a bad person, I swear, but if the opportunity arises for a free copy of “Plan 9 from Outer Space”, I’ll take it without question.

My theft was largely a result of the bandwagon mentality – everyone else was doing it, so I might as well hop on – but I won’t discount my own conscious decisions to violate copyright law.  I knew what I was doing.  When Napster was being hit hard with copyright infringement lawsuits in 2000, I was temporarily hesitant about downloading mp3s, but that fear was short-lived, and I kept thieving under the assumption that I was very unlikely to be “discovered” by the “authorities”.  None of this means I didn’t eventually go out and buy the albums I was downloading – there is always some value in holding the official copy.

Siva Vaidhyanathan mentions this post-download buying on page 179 of Copyrights and Copywrongs: “…it’s not so clear that people will stop buying CDs just because they can get free MP3s one song at a time.”  I’m not at all alone in my justifications.  I may very well be just another follower of the Grateful Dead business model he discusses – “…give away free music to build a following, establish a brand name, and charge handsomely for the total entertainment package.”  Indeed, I often do buy after downloading, and I often buy obsessively.  After downloading Pink Floyd’s “The Wall” in 2000, I bought the album, then the Berlin performance, then “Dark Side of the Moon”, and soon I was simultaneously buying and downloading every Pink Floyd album I could find.

I can’t say I act the same way with movies, unless the film was particularly good; I’m more likely to download, watch, and delete.  One could raise the issue of theft-for-profit, if I had any desire to make a profit, but I see that done far more often elsewhere, usually unnoticed (or even accepted).  For instance, I was in Iraq last year, and on base there was a small shop run by a couple of foreign contractors.  They sold bootleg DVDs, including movies still in theaters, for $3 each or two for $5.  Their primary customers were U.S. soldiers.  It felt almost disturbing to see copyright, which our own Constitution protects, so blatantly violated on a U.S. military post.  Why did we let this happen?  Convenience, mostly, but we were also a captive audience – I doubt that selling the DVDs at standard stateside prices would have drawn any fewer buyers.

Whether or not it was lawful, the residents of the post were grateful to have such a service, and I’ll admit that having such cheap movies made the quality of life a little better.

The same can be said of much of the copyright infringement we see today.  Although perhaps I assume humanity to be more generous than it is, much of the copied content I see on the Internet has no real aspiration beyond entertainment.  The YouTube channel of The Gregory Brothers is a good example of an amalgamation of material for exactly that purpose.  Known for “Auto-Tune the News”, The Gregory Brothers apply a pitch-tuning program, along with other musical effects, to the voices of political figures to create music where there previously was none.  Here is their latest video:

Though the group does accept donations and has t-shirts for sale, they do not sell any of the audio or video they have remixed.  There is simply no need.  The remixes they create are for laughs, adding a little enjoyment to what would ordinarily be a drab speech in Congress.

Similarly, Dan Walsh, author of the site Garfield Minus Garfield, crafted his art for the public for months before the comic strip’s creator, Jim Davis, took notice.  Davis, however, was actually intrigued by Dan’s creativity:

“I think it’s an inspired thing to do,” Davis said. “I want to thank Dan for enabling me to see another side of Garfield. Some of the strips he chose were slappers: ‘Oh, I could have left that out.’ It would have been funnier.”

[Garfield Minus Garfield strip]

Another popular site that twists copyrighted material is YTMND (You’re the Man Now Dog), which mashes together pictures, text, and sound in often humorous ways (e.g., the Lord of the Rings “potato” scene).  The site falls largely under the protection of parody, and the creators of individual “ytmnds” are required to cite their sources.  Even so, it has regularly landed in legal disputes with eBaum’s World and Sega (over the use of Sonic the Hedgehog’s image); its claim to parody has repeatedly held up, and the public continues to thrive on and contribute to the content.

On page 155, Vaidhyanathan quotes Richard Stallman as saying:

I consider that the golden rule requires that if I like a program I must share it with other people who like it.  Software sellers want to divide the users and conquer them, making each user agree not to share with others.  I refuse to break solidarity with other users in this way.  I cannot in good conscience sign a nondisclosure agreement or a software license agreement.

While it’s completely impractical to apply this philosophy to all media, it is worth considering.  Linux and its dozens of sister operating systems adhere to Stallman’s idea of copyleft, which requires that anyone who distributes an altered version of free software release those changes under the same free terms.  Since its creation, Linux has certainly become a huge force among the programming elite and the techno-savvy, and it will be interesting to watch it develop into a platform that can compete with Mac OS and Windows in graphics and user-friendliness.

I dual-boot Windows and Linux (the Ubuntu distribution).  I enjoy the idea of not paying for an operating system, though I don’t know enough about programming to use Linux to even 50% of its potential.  Why bother, then?  Perhaps it’s that bandwagon mentality again.  How can so many experts be wrong?  It’s the same utopian ideal we’ve been seeing since this whole “internet” thing took flight, the same concept we see Wikipedia developing, the same principle Napster publicized, and the same goal the early webrings sought to accomplish.

But perhaps I assume humanity to be more generous than it is.


Kress, Literacy, and Language

October 18th, 2009 3 comments

There has always been a debate over whether the language of the people controls the standards of the dictionary or the standards of the dictionary control the language of the people.  It can be argued either way, up and down, ad infinitum.  Our dictionary gives us a set of words with their agreed-upon meanings (or I should say, strongly suggested meanings), and we go about our lives happily using those words.  I can only guess that most of the time we use them appropriately and correctly, as we have been in this blog.  It’s foolish to think that the dictionary holds all we have to use.  Our language is changing every day, thanks largely to new things to name and new ways of naming things.  For instance, crunk is now in Merriam-Webster’s Collegiate Dictionary, Eleventh Edition, as is ginormous.  Both were among the 100 or so entries accepted.

Word meanings change when we want them to.

I still have trouble (and still don’t like) considering an image to be text.  I simply don’t expand my definition of text to include images.  I also still consider Pluto to be a planet, and I don’t consider indigo to be a rainbow color I have to memorize.  Just because an assumed authority defines a word a certain way doesn’t mean the public will listen.  Still, our words have definitions.

Kress appeared to be struggling to define that ugly word of today, literacy.  What felt odd about that chapter (“What is Literacy?”) was that, for a learned man, a professor at the University of London, he approached the definition of literacy from a surprisingly simple angle, broke down the terms, and played with them for a while.  Only images and text were discussed, while many other forms of literacy were left untouched.

When I was in middle school, I thought of literacy as pertaining only to images and text.  Now I’ve entered a world in which text doesn’t just mean actual text, where the screen dominates, where everyone needs a cellphone, where watches are becoming useless, where Pluto is just a dwarf planet, and where literacy is rocking on a fence between obsolescence and useless generality.  The world has changed so much in my measly twenty-five years, so why are we still clinging to this clearly abstracted term?

But can’t the same be said of the word animal?  It’s also an umbrella term, underneath which lie six other classifications (phylum, class, order, family, genus, species).  Through this classification we can identify and name every living thing we encounter.  It seems to me that this scientific approach could help us with the struggle over literacy.

Linda Dubin, a reading specialist at West Bridgewater University, has on her website the most reasonable definition of literacy that I’ve been able to find:

In broad terms, literacy is the ability to make and communicate meaning from and by the use of a variety of socially contextual symbols.

I hope we can embrace the breadth of this term as something of importance while building under it a classification system for the various literacies.

We need to gain a little control over how we name things in our world.

With regard to ecology

October 5th, 2009 2 comments

Collin Brooke takes an interesting look at the canons of rhetoric in Lingua Fracta, but what interested me most was his insistence on the ecological perspective because it was so encompassing and relative. Relative to what? All facets of text, hypertext, how and why we produce. Brooke considers it a more accurate term than context. It is indeed metaphorical, almost poetic, to call this system of canons and texts an ecosystem, but it is more accurate. He states his case on page 42:

The appeal of ecology as a conceptual metaphor is its ability to focus our attention on a temporarily finite set of practices, ideas, and interactions without fixing them in place or investing too much critical energy in their stability.

Brooke says later, on page 44, “When we have paid particular attention to one or more canons, it has often been to render it more static.”  While he presses us to treat the canons “at the level of generalized activity” (44), the very ecological model he defines requires us to temporarily reduce a canon or two to a particular practice or series of practices.  When we are finished applying the canon appropriately, we can release it back to its almost theoretical status.

He briefly discusses the ecologies of practice but admits that “distinguishing them from the ecologies of code and culture can only ever be a temporary, conceptual maneuver – one that does not translate into actual practice” (52).  Brooke says this means there is no “pure zone” in which the ecologies of practice reside.  I understood this to mean that the ecologies of practice exist, but only in theory, and any attempt to distinguish them from culture and code is futile.

Prior to this, back on page 44, he explains in some detail how the ecological approach can be applied to invention and quotes Karen LeFevre (1987) as defining the ecology of invention as “the ways ideas arise and are nurtured or hindered by social context and cultures.”  This is nearly identical to the concept of intertextuality, and the conference Brooke says he attended is a form of discourse community.  I don’t make these associations throughout Brooke’s text as a way to boil his ideas down to regurgitations of overused arguments, but rather out of a habit of learning by attribution and extension.

That’s what I felt Brooke was emphasizing – an extension of our definitions and theories as a form of adaptation.

Stuart’s Multiliteracies

September 19th, 2009 No comments

This book was written to help teachers of writing and communication develop full scale computer literacy programs that are both effective and professionally responsible.

Stuart Selber opens his book, Multiliteracies for a Digital Age, with the above introduction, letting us readers know that what he plans to address over the next 239 pages is a comprehensive plan for teachers.  The book is more or less a persuasive argument, and you should enter it with that thought in mind.

Sliding gracefully across fourteen pages, Stuart then lays out, in a clarifying voice, the problems in today’s (2004’s) teaching of computer technologies and literacies.  Initially, his focus falls on how many schools with computer competency courses fall short in one of the three crucial literacy categories he outlines throughout the book: functional literacy, critical literacy, and rhetorical literacy.  Stuart presents the example of Florida State University’s computer requirement, explaining that it “…promotes skills for working productively in practical terms” but “fails to offer the perspectives needed for making rhetorical judgements.”

From this, Stuart arrives at his central claim:

Students who are not adequately exposed to all three literacy categories will find it difficult to participate fully and meaningfully in technological activities.

Stuart opens his chapter on functional literacy by identifying computers as tools.  His list of competencies for the ideal functionally literate student holds the parameters Stuart finds important: the ability to achieve educational goals, an understanding of the social conventions that determine computer use, use of the associated discourses, effective management of one’s online world, and confident resolution of technical impasses.  These skills provide a sound foundation for functional literacy (45).  However, Stuart warns that this literacy alone hides the political leanings embedded in technologies, and while a functionally literate student can manage himself effectively, such work is shortsighted and dangerously malleable without a critical understanding of technology.  And so he addresses that in his next chapter (72).

It is the why, and then the how, that is stressed in this next chapter.  Under the flag of critical literacy, Stuart encourages teachers to instill in their students a questioning, almost skeptical frame of mind.  He asks critically literate students to be aware of the dominating politics inherent in technology, to contextualize it, and to critique the sculpting forces of culture and institutions.  To achieve this, he prescribes metadiscourse heuristics.  He quotes Michael Joyce as saying, “…technology, like any other unacknowledged representation of power, endangers learning” (133).  To counter this, students and teachers need to be able to recognize the ebb and flow of power and to act accordingly.

This action, which Stuart calls reflective production, constitutes the majority of his definition of rhetorical literacy.  Within this literacy, he visualizes computers as hypertextual media: digitized text engaged in the mass dissemination of information.  Viewing these hypertexts as a form of rhetoric, students can engage in discourse with them, much as in conventional conversation.  This happens largely at the interface, where the user and the technology meet and where the user asserts control.  Stuart idealizes rhetorically literate students as able to negotiate the persuasive techniques of producers and to become producers themselves (160).

Stuart sums up his beliefs on page 179, 58 pages from the end, by saying:

The more associations that individuals can form between old and new knowledge, the better their understanding of that new knowledge is likely to be.

While the phrase can be applied in many ways to many subjects, we can tweak it ourselves by replacing “knowledge” with “technology”.  From there, he explains his suggested pedagogical procedures in terms of layered contexts, enabling students to climb heuristically to higher and broader levels of understanding.  Or rather, he says what he thinks is a good way to help students learn about technology and learn from technology.  And although it is rather broad and idealistic, we, as students, can already see his changes in our own education.  It would be interesting to see whether they are being applied to the younger generations.