Interpretation – 3DH
http://threedh.net
Three-dimensional dynamic data visualisation and exploration for digital humanities research

3DH Workshop in Montréal
http://threedh.net/3dh-workshop-in-montreal/
Fri, 20 Oct 2017 14:42:31 +0000

Prior to this year's DH conference in Montreal, Canada (8 – 11 August), some of us flew in a little earlier to come together for a workshop in the context of the 3DH project. Apart from the core project team and our colleagues Evelyn Gius and Marco Petris, we were joined by our associated members Johanna Drucker, Geoffrey Rockwell and Marian Dörk as well as by Laura Mandell.

Over two and a half days we held an intense and productive workshop aimed at refining and reifying the three concepts we had developed over the preceding weeks. Springboards for this process were, on the one hand, our four conceptual 3DH postulates – 2-way-screen, parallax, qualitative and discursive – and, on the other hand, reflections on how digital tools can support the process of interpretation. We specifically discussed the relevance of the article "Thinking about interpretation: Pliny and scholarship in the humanities" by John Bradley.

What is intriguing about the software "Pliny" described by Bradley is that scholars are not bound to a fixed way of organizing their notes and annotations; there is no need to assign distinct categories or relations to them. Instead, they can be arranged freely on a plane, and as the interpretation progresses, emerging structures can be inscribed by encapsulating them in boxes.

This appears to be a way of modelling interpretative data that takes into account the methods scholars have long used in the analog world, yet it also goes beyond them and opens up new possibilities enabled by the digital (in terms of interaction with and visualization of data). This approach seems very much related to the goals of the 3DH project.

In our design process so far we have based our concepts on real-world scenarios drawn from the experiences of literary scholars in research projects, and we have arrived at conclusions similar to Bradley's: forcing scholars to apply structure to their annotations at the start of their process is counterintuitive. Relations between annotations, and qualitative statements about them, often can only be made once the process has progressed.

When we discussed the wireframes in the workshop, we realized that we can differentiate two environments or spaces of literary scholarly work, which Johanna called the research space and the argument space. We define typical descriptive acts of the scholarly process such as annotating, collecting and commenting as research activities, while we consider tasks like grouping, ordering and organizing as interpretative or, at later stages, argumentative activities. Scholars usually switch perpetually between activities of the two modes.

Interplay between research environment and argument environment (by Johanna Drucker)

We understood that the interface has to support this interplay much more deliberately. Thus, for the next steps in the design process we will focus on how these spaces are represented in the interface and how they interact. What would an interface look like that supports continuous switching between these activities?

In the discussion we came up with the concept of a semantic plane that might allow us to bring these two spaces together. Annotations produced in the research phase would be represented as glyphs on the plane; in the argument phase we would position and manipulate these glyphs to assign meaning to them and create arguments that we can later publish.
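
To make the idea more concrete, here is a minimal sketch of how such a semantic plane might be modelled as data. The class and field names are illustrative assumptions, not part of any 3DH design or implementation:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Annotation:
    """A descriptive act from the research phase: a text span plus a note."""
    span: Tuple[int, int]   # start/end character offsets in the source text
    note: str

@dataclass
class Glyph:
    """An annotation placed freely on the semantic plane in the argument phase."""
    annotation: Annotation
    position: Tuple[float, float] = (0.0, 0.0)  # no meaning imposed up front
    group: Optional[str] = None                 # assigned only as the argument emerges

# Research phase: produce annotations without forcing structure on them.
glyph = Glyph(Annotation((120, 148), "narrator shifts to present tense"))

# Argument phase: position and group the glyph to inscribe emerging structure.
glyph.position = (0.3, 0.7)
glyph.group = "tense shifts"
print(glyph)
```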

VIS2016 – K. Coles: Show ambiguity
http://threedh.net/vis2016-coles-show-ambiguity/
Fri, 25 Nov 2016 11:10:19 +0000

This is the first in a series of posts about what I consider most relevant for 3DH from the IEEE VIS2016 conference in Baltimore.

The poetry scholar Katherine Coles gave a presentation on Poemage at the VIS4DH workshop at VIS2016. Poemage is a tool for exploring and analysing the sound topology of a poem; it is an interdisciplinary collaboration between poetry scholars, computer scientists and linguists. Recommended reading is not only the presented paper Show ambiguity, which takes a perspective more influenced by poetry scholarship, but also the companion paper, which complements "Show ambiguity" by adding the computer science stance. Besides the methodological principles embodied in Poemage, both papers also give great insight into the collaborative aspects of the project across disciplines.


The UI of Poemage offers three views. The Set View presents rhyme sets, which are sets of words connected by a specific rhyme scheme, organized by rhyme type; each circle represents one rhyme set, and the size of the circle depends on the number of words in the set. The Poem View shows the poem in its original form, and the Path View provides a 2D space in which the flow of the poem according to its rhyme topology is displayed: each node represents a word in the poem and is positioned relative to that word's position in the layout of the poem, and the curves show the flow of a rhyme set through the poem. The views are linked by color coding and by interaction, e.g. selecting a rhyme set in the Set View also activates the visualization of that rhyme set in the other two views.

I especially like the openness of the tool. It supports and encourages multiple readings, and the rhyme types are extensible in two ways. The simple way allows the scholar to group words freely into custom sets without being bound to any predefined rhyme type. The more complex way allows the scholar to access the underlying rules engine, or formalism, to formulate new rhyme types in a notation geared to poetry scholars.
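
The simpler of the two extension routes, freely grouping words into a custom set, can be illustrated with a small sketch. This is not Poemage's actual data model or API; the function and field names are assumptions made for the example:

```python
# A custom rhyme set is just a named group of words plus the positions where they occur.
poem = [
    "Because I could not stop for Death",
    "He kindly stopped for me",
    "The Carriage held but just Ourselves",
    "And Immortality",
]

def custom_rhyme_set(name, words, poem_lines):
    """Collect (line, word-index) occurrences of freely chosen words."""
    occurrences = []
    for li, line in enumerate(poem_lines):
        for wi, token in enumerate(line.lower().replace(",", "").split()):
            if token in words:
                occurrences.append((li, wi))
    return {"name": name, "words": words, "occurrences": occurrences}

# The scholar groups words without any predefined rhyme type:
slant = custom_rhyme_set("slant on -me/-ty", {"me", "immortality"}, poem)
print(slant)
```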

The representation of rhyme sets as paths allows exploration of the rhyme topology by examining spatial phenomena of the paths like intersections, mergings and divisions. There is a tight link between the visualisation and the poem that makes it easy to trace back observations in the visualization to the original data.

Another interesting aspect of her talk was when Coles shared her view on the humanistic idiosyncrasies of data visualization, especially in poetry scholarship. She wanted Poemage “to provide an aesthetically enriched experience” and emphasized the engagement between scholar and object of study which should extend to the visualization as well.

When we have discussed the special visualization needs of the humanities in the 3DH project so far, I (with a computer science background) have been very sceptical about setting the humanities on one side and the hard sciences on the other. On the contrary, I can see a lot of common ground between a physicist and a humanities scholar exploring and interpreting their data with visualizations. Instead of treating the two as opposites, we in 3DH started to work with a methodological continuum between the poles of subjectivity/uniqueness/particularity and objectivity/reproducibility/universality. Still, I doubt that the kind of engagement Coles describes is the same as the engagement between a physicist and his or her data. I think Coles managed to describe at least part of the possible contribution of visualization to one extreme of that continuum, and this really helps to track down the methodological differences 3DH visualizations need to account for.

Stan Ruecker: The Digital Is Gravy
http://threedh.net/stan-ruecker-the-digital-is-gravy/
Sat, 25 Jun 2016 14:51:50 +0000

Timeline Design

Stan Ruecker gave the 3DH talk on the 23rd of June with the enigmatic title The Digital Is Gravy. He explained the title in reference to gravy being what gives flavour to the steak. In his case, he wanted to show us how physical prototyping can give substance (the steak) to the digital (the gravy).

Stan started with an example of a physical prototype, developed by Milena Radzikowska, that materializes bubblelines; she showed it at Congress 2016 in Calgary. (See Materializing the Visual.) He suggested that materializing a visualization slows down analysis and leads to other lines of thought.

At the IIT Institute for Design Stan is weaving physical prototyping into digital design projects. His main research goal is to find ways to encourage people to have multiple opinions. He wants to build information systems that encourage the discovery of different perspectives and the presentation of multiple opinions on a phenomenon. The idea is to encourage reflective interpretation rather than dogmatism.

How prototypes build understanding

He listed some ways that prototyping can build understanding:

  • Build something to collect information
  • The prototype is itself a kind of evidence
  • Learning through making. You don’t even need to finish a prototype. “Fail early and fail often.”
  • Prototype is also a representation of the topic area

Why physicality is important

After returning to the materialized bubblelines he talked about why physicality is important:

  • Materialized prototypes take time differently, which can lead to other lines of thought
  • It can produce results that can be used for comparison (with other results)
  • It can engage physical intelligence – embodied experience can leverage different ways of knowing
  • It involves collaboration (over time)  that involves community knowing
  • It encourages multiple perspectives from different people and different points of view

My experience with the pleasures of physical prototyping in a group reinforces the way the making of a prototype builds shared understanding.

Timelines

He then talked about a project around timelines that builds on work Johanna Drucker did. He had gone through multiple prototypes, from digital to physical, as he tried to find ways to represent different types of time. He tried creating a 3D model in Unity, but that didn't really work for them. He now has a number of student designers who are physically modelling what the timeline could be like if you manipulated it physically, with the result then uploaded to the digital representation (the gravy).

Physical Qualitative Analysis

He then talked about how a multinational team is designing physical analytical tools. The idea is that people can analyze a text and model an understanding of it in a physical 3D space. It is grounded theory – you build up an emergent understanding. They tried creating a floating model like a Calder sculpture. They tried modelling technical support conversations. They used a wired up coat rack – hacking what they had at hand.

My first reaction is that doing this physically would be so slow. But that is the point. Slow down and think by building. They tried a digital table and that was no fun, so they started making all sorts of physical prototypes instead.

I’m guessing it would be interesting to look at Ann Blair’s Too Much To Know where she talks about the history of note taking and physical ways of organizing information like excerpt cabinets.

Stan then talked about a successful line of prototypes with transparent panels that could be organized and joined, and on which ideas could be placed with post-it notes. Doing this in a team encourages users to take different views on a subject, as the panels have two sides and can be joined together to create even more.

Finally, they are now trying to bring these back to the digital so that once you have an arrangement of panels with notes you can digitize it and bring it into the computer. This also suggests the possibility of automatically generating the model on the computer from the text.

He commented on how he now has industry interest in the analysis of conversations.

And that was the end.

Laura Mandell: Visualizing Gender Complexity
http://threedh.net/laura-mandell-visualizing-gender-complexity/
Sat, 11 Jun 2016 17:29:34 +0000

Laura started her talk by showing some simple visualizations and talking about the difficulties of reading graphs. She showed Artemis, searching for the words "circumstantial" and "information" over time. She then compared it to the Google NGram viewer. She talked about problems with the NGram viewer, like the shift in characters (from the long s, which OCR reads as f, to the modern s) around 1750. Dirty OCR makes a difference too. She showed a problem with Artemis having to do with a dataset dropping out: Artemis draws on a set of datasets, but not all of them cover all time periods, so when one drops out you get a drop in results.

Even when you deal with relative frequency you can get what look like wild variations. These are often not indicative of something in the period, but of a small sample size. The diachronic datasets often have far fewer books per year in the early centuries than in later ones, so the results of searches can vary widely. A single book containing the search pattern can appear as a dramatic bump in the early years.
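
The small-sample effect is easy to reproduce with a toy calculation (the numbers below are invented for illustration, not taken from Artemis or any real corpus):

```python
# Relative frequency = hits / total words in the year's sample.
# With a small early-century sample, one book containing the pattern
# looks like a dramatic spike; the same book barely registers later.
years = {
    1710: {"books": 4,   "words_per_book": 60_000, "hits": 45},   # one book supplies most hits
    1850: {"books": 900, "words_per_book": 60_000, "hits": 45},
}

for year, d in years.items():
    total_words = d["books"] * d["words_per_book"]
    rel_freq = d["hits"] / total_words
    print(f"{year}: {rel_freq * 1e6:.1f} occurrences per million words")

# 1710: 187.5 per million; 1850: 0.8 per million – a difference of two orders
# of magnitude driven by corpus size, not by anything in the period itself.
```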

There are also problems with claims made about data. There is a “real world” from which we then capture (capta) information. That information is not given but captured. It is then manipulated to produce more and more surrogates. The surrogates are then used to produce visualizations where you pick what you want users to see and how. All of these are acts of interpretation.

What we have are problems with tools and problems of data. We can see this in how women are represented in datamining, which is what this talk is about. She organized her talk around the steps that get us from the world to a visualization. Her central example was Matt Jockers's work on gender in Macroanalysis, which seemed to suggest that we can use text mining to differentiate between women's and men's writing.

World 2 Capta

She started with the problem of what data we have of women’s writing. The data is not given by the “real” world. It is gathered and people gathering often have biased accounting systems. Decisions made about what is literature or what is high literature affect the mining downstream.

We need to be able to ask “How is data structured and does it have problems?”

Women are absent in the archive – they are getting erased. Laura thinks these erasures sustain the illusion.

Capta 2 Data or Data Munging

She then talked about the munging of data – how it is cleaned up and enriched. She talked about how Matt Jockers has presented differences in data munging.

The Algorithms

Then she talked about the algorithms, many of which have problems. Moritz Hardt arranged a conference on How Big Data is Unfair. Hardt showed how the algorithms can be biased.

Sara Hajian is another person who has talked about algorithmic unfairness. She has shown how such algorithms show prestigious job ads preferentially to men. Preferential culture is unfair. Why Big Data Needs Thick Data is a paper that argues that we need both.

Laura insisted that the solution is not to give up on big data, but to keep working on it to make it fair.

Data Manipulation to Visualization

Laura then shifted to problems with how data is manipulated and visualized to make arguments. She mentioned Jan Rybicki's article Vive la différence, which shows how ideas about writing like a man or like a woman don't hold up. Even Matt Jockers concludes that gender doesn't explain much; coherence, author, genre and decade do a much better job. That said, Jockers still concluded that gender was a strong signal.

Visualizations then pick up on simplifications.

Lucy Suchman looks at systems thinking. Systems are a problem, but they are important as networks of relations. The articulation of relations in a system is performative, not a given. Gender characteristics can be exaggerated – that can be the production of gender. There are various reasons why people choose to perform gender and their sex may not matter.

There is also an act of gender in analyzing the data. “What I do is tame ambiguity.”

Calculative exactitude is not the same as precision. Computers don't make binary oppositions; people do. (See Ted Underwood, The Real Problem with Distant Reading.) Machine learning algorithms are good at teasing out loose family resemblances, not clear-cut differences, and one of the problems with gender is that it isn't binary. Feminists distinguished between sex and gender. We now have transgender, cisgender … and exaggerated gender.

Now that we look at writing on scales, we can look for a lot more than a binary.

Is complexity just one more politically correct thing we want to do? Mandell is working with Piper to see if they can use the texts themselves to generate genders.

It is also true that sometimes we don’t want complexity. Sometimes we want simple forceful graphics.

Special Problems posed by Visualizing Literary Objects

Laura’s last move was to  then looked at gender in literary texts and discuss the problem of mining gender in literary texts with characters. To that end she invoked Blakey Vermeule, Why Do We Care About Literary Characters? about Miss Bates and marriage in Austen’s Emma.

Authors make things stand out in various ways, including repetition, which may throw off bag-of-words algorithms. Novels try to portray the stereotypical and then violate it – "the economy of character."

Novels perform both bias and the analysis of bias – they can create and unmask biases. How is text mining going to track that?

In A Matter of Scale, Jockers talks about checking confirmation bias, to which Flanders replies that we all operate with community consensus.

The lone objective researcher is an old model – how can we analyze in a community that develops consensus using text mining? To do this, Laura Mandell believes we need capta open to examination, dissensus driving change, open examination of the algorithms, and scrutiny of how visualizations represent the capta.

The Making of: The 3DH Logo and How it Got That Way
http://threedh.net/3dh-logo/
Wed, 01 Jun 2016 14:00:40 +0000

In mid-March this year, I was contacted by Prof Christoph Meister of Universität Hamburg, with whom I had previously collaborated on the re-branding of the European Association for Digital Humanities (EADH). He wanted a logo for the 3DH project.

In the course of the following two weeks, I engaged in an intensive email exchange with the 3DH team and Profs Johanna Drucker and Geoffrey Rockwell, both of whom are visiting professors in Hamburg this summer term as part of the 3DH project. By the end of the month, we had worked out a logo design that everybody considered a success.

The following timeline is a collage of discussion fragments, logo sketches and drafts that passed back and forth in an ad-hoc collaboration conducted entirely via email; the timeline seeks to document the main ideas that guided the collaboration and to capture a sense of the process by which we arrived at the final design as displayed on this site now.

In his initial message to me, Christoph emphasised the need for the project to have a visual identity. On 14 March 2016, he wrote:

I’ve just started a new research project for which we need a visual identity – and this time that’s doubly important as the project itself is about visualization.

Moreover, the design needed to reflect the ambitions of the project as articulated in Johanna’s latest book:

[I’m looking for] a suitable logo idea that can serve to highlight what Johanna so aptly emphasizes in “Graphesis”: the importance of thinking about visualizations as a genuine epistemic and explorational device rather than a mere representational instance of ‘data’.

To illustrate what a successful design might look like, he cited a drawing reproduced in the book:

I came across Johanna’s mention of Kandinsky’s “From Point and Line to Plane” on p.35 in “Graphesis” and was immediately attracted by Fig.98 in the right hand margin: For me the vertical line touching the upper border signifies a subtle transgression of the idea of visually supported dualism as it pulls the reference plane within the square and that of the perceiver notionally situated outside the square (if you wish, the discourse plane) together and brings them into contact.

I accepted the job and set to work.

Identity design is in large measure typography, and typography is often a good starting point when creating a new logo. For this project I decided to start from the genre of decorative typefaces known as ‘shaded,’ as these invoke a sense of 3-dimensionality.

I also noticed a possible connection between the design brief and a peculiarity of the 3DH acronym. I knew enough of Johanna and Geoffrey’s work to understand that the phrase ‘epistemic and explorational’ in Christoph’s brief related to their conviction that interfaces and visualisations should be re-conceived to facilitate interpretation as their primary affordance; at the same time the D in the 3DH seemed to invite, if not require, interpretation due to its indeterminacy. I wrote on 16 March 2016 at 12:04 hrs:

As design briefs go, the above requirement is definitely one of the tougher assignments I’ve seen.¶ I wonder if it could be solved through a bit of playfulness. Let’s start with the project name: 3DH. There’s a quibble to be had from this name as to whether the 3D or the DH part should be the privileged reading: the D is ambiguous, therefore in need of interpretation.

I proposed a series of four typographic markers based on this idea, using the colours black and red to delimit the 3D and the DH groupings, with the fourth piece in the series separating the two colours in a diagonal division running through the letter D:

Figure 1: 3DH-logo
16 March 2016: Fourth logo in a series of four, using a shaded typeface [all four in a single file]

Johanna responded to this design by bringing up the concept of parallax, the displacement in the apparent position of an object viewed along different lines of sight, which she had discussed previously in some of her published work. In this work she asserts that through visualisations implementing the concept, it will be possible for value, identity, and relation of temporal events to be ‘expressed as a set of conditions, rather than givens’. She wrote on 16 March 2016, 13:24 hrs:

I’m wondering if the concept of parallax could be built in here to go “beyond representational concepts of visualization”.¶ Unfortunately, most diagrams of parallax are pretty reductive. But if you could imagine the 3DH logo you’re playing with constructed from two points of view or scales and have them not match but still relate–sort of like extending that diagonal slice through the D in the fourth version of the logo, but so that it refracts the letters. I would suggest lightening the design as well so it is not quite so solid/architectonic.

She illustrated her idea with a number of hand-drawn and scanned sketches, collated into a single PDF.

16 March 2016: Johanna’s parallax suggestion, detail [full PDF]
Johanna’s parallax drawings test the limits to which a logotype can absorb the generic conventions of diagrams, which seemed legitimate to me, but I was skeptical of whether the spacial expansiveness of the design would scale very well: Logotypes need to stay distinct and recognisable even at small sizes, which sharply limits the amount of whitespace typically found in them. So I wrote on 21 March 2016 at 12:31 hrs:

A diagram consisting of conceptual space mapped out by thin lines and inhabited by typographic elements at comparatively small size is in danger of looking ethereal and anaemic at small scales, especially when displayed next to more conventional logotypes.

On 21 March 2016 at 15:19 hrs, Johanna conceded that her diagrammatic approach would be susceptible to the scaling issue, yet she suggested that there might be a way to merge our separate approaches into a single design:

I wonder if we can work with that ambiguity and an indication of non-identity or non-similarity between the two meanings of the “D” in the acronym. That could introduce the parallax issue in some way.

I frankly didn’t know how to act on this suggestion, so I tried a variation on the D that faces two ways. I understood that both Johanna and Geoffrey were opposed to anything ‘Cartesian’, so central perspective was out of the question. My suggestion made use of a roughly cobbled-together axonometric projection, about which I wrote, on 22 March 2016 at 11:48 hrs:

Attached as well is another take on the same idea expressed in a 3×5 pixel font rendered 3D in axonometric projection (channelling my inner Max Bill here). The piece turns on the ‘ambiguity’ of the D again, as the letter associates with the 3 in its orientation but associates with the H in its colour.

Axonometric projection
22 March 2016: pixel font in axonometric projection

Christoph was intrigued by the piece, and he wrote on 22 March 2016 at 12:53 hrs:

it creates a weird Escher-like paradoxical n-dimensionality that loops onto itself and makes it, how shall I put it, “performative” in that you simply cannot stop re-processing the image.

He encouraged me to pursue the idea further, but Johanna, as I had anticipated from her critique of my initial offering as too ‘solid/architectonic’, was unconvinced. She urged a change of approach on 22 March 2016 at 13:07 hrs:

We might consider using the positive/negative space instead of closing the forms

Playing on positive/negative space, unlike the parallax idea, was something I knew how to handle. I wasn’t very keen on the idea because it seemed to offer less scope for the play on the letter D, but I pursued the idea anyway, resulting in a few iterations that struck me as nicely done but showing little relevance to the design brief.

22–23 March 2016: iterations of the ‘negative/positive space’ idea

Meanwhile, Johanna had been at work trying to bring about the merger of our separate starting ideas that she had hinted at. She wrote on 23 March 2016:

I’m going back to your sliced “D” idea and seeing if I can play with some parallax in it.

She supplied a sketch with two drawings:

Sketch by Johanna Drucker
23 March 2016, Johanna Drucker: ‘Sliced D’

Johanna’s drawings were a welcome occasion to drop the ‘negative space’ idea. They reminded me that a few days earlier my preoccupation with ‘shaded’ typefaces had led me to look at fonts constructed as impossible objects, a typographic genre often associated with the name of the Dutch artist M.C. Escher, who may have done most to popularise such objects. I wrote on 24 March 2016 at 22:29 hrs:

I returned to a few recent Escher-inspired retail typefaces and examined them under the aspect of whether the two yoked-together perspectival components of the D might be coloured different to convey the ‘ambiguity’ of the letter. I put this through a few iterations until it occurred to me that I could vectorise Johanna’s D sketch and use it in the same fashion.¶ Which I did as a rough and ready first cut.¶ Please find the whole series also included in the zip. Don’t worry about the grey/white/yellow colour scheme just yet.¶ I think we have a candidate here.

Escher pieces
24 March 2016: ‘Escher’ pieces using commercially available typefaces whose characters form impossible objects

The attachment included two pieces that would form the basis of the eventual design.

24 March 2016: ‘Escher’ pieces using Johanna’s ‘D’ drawing

This batch of pieces was well received. Johanna wrote on 24 March 2016 at 22:45 hrs:

Oooohhhhh! I am really loving these. I have my favorites, but will hold off until others weigh in. SUPER!!! We are really getting close, I think. Elegant, too!!

Christoph wrote on 25 March 2016 at 06:04 hrs:

Wow, Rudolf,¶ this is really a leap forward!

Things took a curious turn at this juncture: Johanna never came back to name her favourites among the Escher batch, whereas Christoph and I focused on the version of the design using Jeremia Adatte’s Bron Black typeface. Oddly, we both shared the concern that the 3 character looked like Homer Simpson‘s face, and that it required a modification to its shape. I also developed an obsession with searching for ‘impossible object’ typefaces and, telling myself I was doing my due diligence, went through as many such typefaces as I could find. And it occurred to me that we could bake the yin and yang motif into the design by not just making the D the location where the two colours cross over from one to the other; in addition, both of the other characters could have their respective main colour counterpointed by a small included segment of the opposite colour.

25 March 2016: De-Simpsonised remixes of the Bron Black piece. Left: no yellow. Right: shape of the 3 adjusted, yin and yang idea added

The work now seemed nearly completed. Christoph wrote on 26 March 2016 at 1:36 hrs:

I think we’re about to reach design freeze!

Johanna agreed, writing on 26 March 2016 at 15:38:

This has been REALLY fun! And so fast!

However, still unresolved was the question of what the colours would be. One possibility, perhaps the obvious one, was to rely on shading, which is conventionally used to evoke the physicality of a three-dimensional object in two dimensions; we could render the design in a local colour and a corresponding shaded hue.

28 March 2016: Coloured instances of the design based on the Bron typeface.

Christoph had another idea. He wrote on 29 March 2016 at 12:48 hrs:

I have to consider internal politics and strategy: I would appreciate if we could either use Hamburg University’s color scheme (see https://www.uni-hamburg.de/) or one that resembles that of the City of Hamburg (which includes blue: see http://www.hamburg.de/). These are my funders who I need to get on board as co-owners and I want to make sure that they, too, will be able to identify with our project.

I objected to the adoption of Hamburg’s colour scheme, but Christoph insisted and asked for a draft of the logo, so he could attach it to a mailing to the project funders at the end of the week. In response to this request I started to look for ways to colour the design red and blue.

29 March 2016: ‘Escher’ version with Hamburg city and university logos added

We seemed to have arrived at the end of the process now. Yet by that point I also nurtured a growing sense of dissatisfaction. It vexed me that we were merely going to apply a minor tweak to a typeface and to play a game with colours that seemed too clever by half while failing to state the design’s basic idea with any clarity or forcefulness.

Why weren’t we using Johanna’s D drawing, which was our own original creation? If anything, I wanted that drawing back! With due apologies for the very late about-face, I lamented on 30 Mar 2016 at 16:27 hrs:

instead of building further on the piece with Johanna’s unique drawing, uniquely connected to the project, we went for a generic, commercially available revival of a nineteen-seventies typeface

I attached a few revisions of the earlier piece.

30 March 2016: ‘Escher-Drucker’, returning to Johanna’s drawing

To accommodate Christoph’s wish for Hamburg styling, I adopted the colour scheme specified in the branding guidelines of the City of Hamburg [PDF] and modified the typography. Throughout this project, I had been using the Futura typeface as a nod to the Bauhaus aesthetic, following Christoph’s mention of Kandinsky during the earliest stage of the collaboration. As Universität Hamburg’s branding guidelines specify TheSans of Lucas de Groot’s wonderful Thesis family of typefaces, I happily switched from the geometric sans serif to the humanistic sans serif.

30–31 March 2016: ‘Escher-Drucker’, Hamburg version

I was apprehensive of the response to my about-face, as the proposal second-guessed what very much seemed like a done deal. However, both Johanna and Christoph supported the change right away. Johanna wrote on 31 March 2016 at 00:56 hrs:

I really love these […] the larger D in the center with the real dimensionality to it is terrific.

Christoph concurred and wrote on 31 March 2016 at 06:54 hrs:

Escher-Drucker it shall be. It’s leaner, less self-absorbed and elegantly accentuates the dynamic D as a perceptual and intellectual axis.

This version was adopted, then, and made it into the mailing.

With the site coming online in early April, I implemented the logo and the Hamburg colours in a lightly modified version of the content management system’s GeneratePress presentation layer.

And this is how it all got that way.

Watching Olympia: Visual Programming for Surveillance
http://threedh.net/watching-olympia-visual-programming-for-surveillance/
Sat, 14 May 2016 10:03:31 +0000

Olympia Visual Programming Slide

I (Geoffrey Rockwell) gave the May 12th lecture on the subject of visual programming languages (VPLs). I started by providing a surveillance context for understanding why VPLs are developed to provide a way into programming. The context was the CSEC slide deck leaked by Snowden that shows the Olympia Network Knowledge Engine, which allows analysts to access other tools from the 5-Eyes services. Olympia includes a VPL for creating "chains" that automate surveillance processes (see the slide above, in which the VPL is introduced). I argued that in many ways we in the humanities also do surveillance (of cultural history) and that we should pay attention to tools like Olympia, developed to help analysts automate interpretative tasks. I also argued that we need to study these types of slide decks as examples of how big data analysis is conceived. These are the types of tools being developed to spy on us and manage us. They are used by governments and corporations. We need to learn to read the software and documentation of algorithmic management.

The heart of the talk was a survey of VPLs. I argued that we have had specialized formal visual languages for some time for describing wiring diagrams or signalling plans for train stations. These languages allow someone to formally represent a process or design. I then gave a brief history of visual programming and then turned to VPLs in the digital humanities. This connected to a survey of some types of VPLs as I wanted to go beyond the pipe-and-flow types of VPL. I then summarized some of the opportunities and challenges for VPLs in the digital humanities and returned to Olympia. VPLs only work when there is a community that develops and understands the semantics of their visual language. Wiring diagrams work because people understand what a line connecting two icons means and what the icons mean in the context of electronics. For visualization in general and VPLs in particular to work in the humanities we need to develop both a visual literacy and a discussion around the meaning of visual semantics. One way to do that is to learn to read VPLs like Olympia. Again, the humanities need to take seriously these new types of documents as important and worth studying – both PowerPoint decks (that are handed around as a form of communication) and software like VPLs.

Visual Programming in the Digital Humanities

EyeContact Prototype

One of the first projects in the digital humanities to prototype a VPL for text analysis was the EyeConTact project by Geoffrey Rockwell and John Bradley. See also a paper Seeing the Text Through the Trees: Visualization and Interactivity in Textual Applications from LLC in 1999. This was inspired by scientific visualization tools like Explorer. Before that there were many projects that shared flowcharts of their programs. For example we have flowcharts of both how computers fit in scholarly concording and how the concording tools worked for PRORA. One can see how generations of programmers raised on flowcharting their programs would desire a flowcharting tool that actually was the programming.

The SEASR project developed a much more sophisticated VPL called Meandre. See Ian Milligan’s discussion of using Meandre. Meandre was designed to allow humanists a way of using all the power of SEASR. Alas, it doesn’t seem to be still maintained.

The best system currently available is built on an open VPL called Orange. Aris Xanthos has developed text analysis modules for Orange called Textable. Xanthos has a paper on it, TEXTABLE: programmation visuelle pour l'analyse de données textuelles (in French: "Textable: visual programming for the analysis of textual data"). Orange is a well supported VPL that can be extended.
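
For readers who think in code rather than boxes, what a pipe-and-flow chain expresses can be approximated in a few lines of ordinary Python, with each function standing in for a node and each call for a pipe. This is only an analogy, not Orange's or Textable's actual API:

```python
import re
from collections import Counter

# Each function corresponds to one node in a pipe-and-flow chain;
# composing the calls corresponds to drawing the pipes between nodes.
def segment(text):               # "Segment" node: split text into tokens
    return re.findall(r"[a-z]+", text.lower())

def filter_short(tokens, n=3):   # "Filter" node: drop very short tokens
    return [t for t in tokens if len(t) >= n]

def count(tokens):               # "Count" node: frequency table
    return Counter(tokens)

text = "Visualization is interpretation, and interpretation is work."
result = count(filter_short(segment(text)))
print(result.most_common(3))
```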

Opportunities and Challenges

Some of the points about the suitability of VPLs to the digital humanities that I made at the end include:

  • VPLs are intuitively attractive
  • Visual vocabulary is not always clear. What does a pipe mean? What direction do pipes go? Left to right? What flows through a pipe?
  • Domain specific applications work best.
  • We need to develop a community of use
  • VPLs are good at the visualization of process (and data and results in that context). They show rather than hide the processes in a way that can be explored and fiddled with. They are good for showing the chain of filters and transformations that data goes through.
  • VPLs are slower than traditional coding for proficient programmers.
  • They can be fiddly.
  • It is hard to handle a big codebase with a VPL as you end up hiding chains.
  • They are good at showing chains of processes, but not at showing highly interactive systems.

Conclusions

Above all, we need to learn to read visualizations (including VPLs) in the humanities. These are forms of communication that are increasingly important in the algorithmic state. They are used widely in business and government. They are essential to understanding how big data is consumed and used. I propose that developing a discourse and hermeneutics of visualization is fundamental to developing better visualization tools. The two go hand in hand.

Framing Visualization
http://threedh.net/framing-visualization/
Fri, 06 May 2016 07:43:47 +0000

Historic specimen from the Natural History Museum in Verona

How are visualizations framed? As part of a design session we brainstormed about the ways visualizations are framed:

  • They are framed by texts like labels, legends, titles, captions, and other explanatory texts.
  • They can have links to other texts, other visualizations, or even help systems.
  • They will have controls that are part of the frame of the visualization itself. These controls are sometimes right in the visualization (direct manipulation) and sometimes in separate visual spaces.
  • They draw from data that you can sometimes see in other panels or get access to. The data can have different levels, in that there could be a corpus of texts and then a table of results of a statistical process that is then used to generate the visualization.
  • They are created by code which can sometimes be seen. You can see code in a visual programming system or spreadsheets. Some systems will show you the code that is running or give you a space to enter complex queries (which are a higher level of code that acts as a control.) In notebooks the code is visible too.
  • There will be a social frame of people interacting with the visualization and “consuming” it. They are made and used by communities whose diversity of values, positions, cultural conventions and mores are part of the conditions of their production, access, and reception. These community frameworks shape the design process. We tend to think of visualizations as being used by one person on a personal computer, but they also show up in presentations before groups of people, on television as part of a mediated presentation, on public displays and over the internet for others to look at. We need to pay attention therefore to the ways that groups of people share visualizations including the ways they show their screens to each other. Who controls group or public visualizations?

Here are some more frames to consider.

  • They will have an associated interface like that of a web browser or software.
  • There can be a computing interface surrounding the visualization on the screen. This might show other applications or controls for the operating system/computer.
  • The screen will be a projection surface which has features. A computer screen on a laptop will have a keyboard and webcam attached. A projection on a wall will have other things on the wall. A projection on a building or specially designed surface (like a globe) will be framed by the building or the exhibit design. There may be special controls to an exhibit that are really part of the computing interface. People may be able to use their own devices to send input to a public projection.
  • The surface will be part of the infrastructure that makes the visualization possible. That will include the systems, the networking, the electricity, the building and those that maintain them.
  • There is a physical site with all sorts of political and cultural issues associated.
  • There is a frame of development that creates the visualization, associated data and code. This isn’t necessarily one thing as the data could be created by one group and uploaded to the visualization developed by a different team. The infrastructure could have been developed by yet another group. Development has costs and there are stakeholders like sponsors, granting councils, and universities that provide development support.
  • There is an epistemological frame of the tacit and explicit knowledge that is needed to develop and understand the visualization. This can include the new knowledge generated by the visualization and published in different ways. We could have also called this the rhetorical frame in the sense that a visualization is created and used to convey something. It is created and read for pleasure, for information, or to make a point. In this sense there is a performance to the knowing created and explored.

We are obviously pushing the envelope on what is a frame, but the idea was to get a sense of all the things outside the visualization itself that contribute to it. Almost all of these have to be taken into account in an ambitious project like 3DH.

Johanna Drucker: 3DH
http://threedh.net/johanna-drucker-3dh/
Sat, 23 Apr 2016 07:36:04 +0000

Johanna Drucker gave the third lecture in the 3DH series. She talked about three-dimensional digital humanities and how she conceives of the road ahead of us. She started with the goal of the project:

To develop a conceptual blueprint for next generation digital humanities visualizations.

What would that mean? How can we do it? To do this we need to understand where we are and where we have to go and her talk did that by touching on:

  1. How visualizations have an imprinted form of argument that comes from their origins.
  2. Understand ideas about languages of form – ideas about how one can systematize the visual.
  3. Look at how contemporary DH people use visualizations and what work they want them to do.
  4. Understand conventions of pictorial imagery and how most visualizations are pictorially impoverished.
  5. Identify the epistemological challenges ahead.

She noted that 3DH is focusing on the visualization of humanities documents and humanistic inquiry. Humanists are engaged in the production, interpretation, and preservation of the human record. We need to think about the problems of our practices, like interpretation.

Types of Visualization in the Humanities

What kinds of visualizations do we use? Johanna Drucker gave a concise overview of the major types of visualizations used in the humanities. Each of these have different visual traditions and relationships to data.

  1. Digitizations/remediations of an original – These are visualizations that represent an original, like a facsimile or an electronic text. They are digital surrogates.
  2. Data-driven displays – These don’t represent an original, but represent some abstraction or analysis. Some types might include charts, graphs, maps, and timelines.
  3. Visual renderings – These are complex 3D constructions and fantasies that use codes of pictorial representation with little data. They are extrapolations of the data. They augment the data. Some types include 3D renderings, augmented reality and virtual reality. They are often based on minimal data giving the illusion of repleteness.
  4. Computationally processed visualizations – These are the special forms of imaging applied to artefacts like manuscripts. They adapt imaging techniques from the material sciences like MRI or x-ray scans.

All of these types of visualizations carry epistemological baggage, often from the sciences, but also from gaming (in the case of renderings.)

Examples

She then showed example images and talked about their limits. We can remediate the already remediated.

Historical Origins: Imprints of Disciplines

Drucker gave a quick tour through some of the types of visualizations and how they are imprinted with their origins. They carry the baggage of their history of use. We need to understand these histories in order to understand how they will be interpreted or overinterpreted.

  • The table is one of the earliest and main forms of visualizing data. It is a powerful interpretative tool and we forget how it uses visual arrangement. It is invisible as a visualization.
  • The tree (as in the tree of life or the family tree) has spiritual origins. It bears notions of continuity or, in the case of the tree of life, notions of hierarchy. Trees carry structure in subtle ways. Think of the family tree of consanguinity (who can inherit) – showing a mythic notion of inheritance.
  • Charts have their origin in political arithmetic. They are a way of showing abstract data from human situations so that people can be managed.

Graphical “language of form”

Drucker then turned to the idea of a "language of form". The languages of architecture (think Palladio) are a predecessor to the more recent idea of a language of visual form. These languages of form are often used in discussions of information visualization, but they have a history. The idea comes from the aspirations of the visual arts to be as authoritative as the sciences. One of the early attempts to develop such a language is Humbert de Superville's Essai sur les signes inconditionnels dans l'art. He developed a language from which more complex works can be drawn. Kandinsky's Point and Line to Plane (1926) was another attempt that breaks with 19th century realism, developing a stable graphic language which became a foundation of graphic design languages. It is an attempt at an abstract set of signs. She talked about how we can mine the inventory of modern art for ideas. She showed the 1961 lino cuts of Anton Stankowski, whose Functional Graphics look extraordinarily like templates for the visualizations we use today. He imagined ways to make invisible processes visible. She then mentioned how perceptual psychology also developed a language of form, trying to find a graphic vocabulary.

Important to data visualization is Jacques Bertin and his Semiology of Graphics. In it he distills seven graphic variables with which to show information: size, tone, texture, color, orientation, shape, and position. Drucker added that in dynamic situations we need to add motion, rate of motion, direction of motion, and the sound of motion. Graphical systems make use of these variables. They also carry semantic value. As a principle, we should use things for what they are good at showing.
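
As a concrete illustration of Bertin's variables (not something shown in the talk), a single scatter plot can bind one data dimension to each variable: position on both axes, size, and tone. The data below is invented for the example:

```python
import matplotlib.pyplot as plt

# Four graphic variables carrying four data dimensions:
# x position, y position, size, and tone (colour value).
years     = [1810, 1830, 1850, 1870, 1890]     # position (x)
counts    = [3, 12, 25, 40, 33]                # position (y)
pages     = [120, 300, 180, 420, 250]          # size
certainty = [0.2, 0.5, 0.9, 0.7, 0.4]          # tone

plt.scatter(years, counts, s=pages, c=certainty, cmap="Greys", edgecolors="black")
plt.xlabel("year")
plt.ylabel("documents")
plt.colorbar(label="confidence in the attribution")
plt.show()
```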

Drucker then showed some types of visualizations that haven't been used, like architectural plans. We don't use perspective; we obliterate dimensions. When we leave out perspective we leave out the perspective of the speaker, which creates the illusion that the visualizations speak for themselves. We also lose the ability to use distortion or translation of perspective.

Another type that we haven’t used is the cabinet of curiosities like Wormius’ one. She talked about the complexity of the image and how much data it carries using perspective, tonal value.

She compared a Moretti graph of Hamlet to a Daniel Maclise painting of the play within the play. She showed a Charneaux lingerie image that shows how lingerie adds structure to the body. She showed a cartoon showing a step-by-step process. All of these served to show how impoverished our visualizations are.

What is the work of visualizations and what do we want to do

Visualization can be a type of fiction that obscures a lot in order to show an overview or gestalt. Some of the things we want to do include:

  • Add dimensions and perspective back – flat screens are lacking
  • Translate images through rendering – can we use the visual for what it is good at?
  • We want to be shown degrees of certainty.
  • Map views can make it look as if the same space stays the same across time – we want to show distortions and how maps are of their time. We want to avoid historical anachronism and use data to build a map rather than structure the data with a map.
  • We want to use renderings to hold evidence, not to obscure it or provide an illusion of it.

What is the work ahead

Drucker closed by talking about the epistemological issues and graphic challenges ahead.

  • Partial knowledge: how do we show what we don’t know – figure without ground
  • How can we show evidence and see what shape it takes rather than imposing shape
  • How can we situate knowledge – provide a point of view
  • How can we be clear about the historical specificity and diverse ontologies
  • How can we show process – visual and non-visual
  • How can we provide for annotation – commentary and non-visual
  • How can we visualize the methodological. How can we show contradiction, incompleteness, doubt, uncertainty, and parallax.
  • How can we show non-standard/variable metrics – affective metrics, diverse scales
  • How can we make a semantically legible system

 

]]>
http://threedh.net/johanna-drucker-3dh/feed/ 0
The Strange Attraction of the Graph
http://threedh.net/the-strange-attraction-of-the-graph/
Sat, 16 Apr 2016 15:02:12 +0000

CSEC Summary Slide with Comm Network

I (Geoffrey Rockwell) gave the second lecture on Thursday the 14th with the title The Strange Attraction of the Graph (video). I started with the image above which is of a PowerPoint slide from one of the decks shared by Edward Snowden. This is the Summary of the CSEC Slides (see my blog entry on these slides) where CSEC showed what their Olympia system could do. The Summary slide shows the results of big data operations in Olympia starting with a target (phone number) and getting a summary of their telecommunications contacts. The image was not in the slides shared by either of the media companies (Fantastico or Globe and Mail) that reported on this as it has too much information. Instead hackers reconstructed it from video that showed it in the background. That gives it the particular redacted and cut-up quality.

I showed this slide as an example of a visualization we want to interpret. My talk addressed the question of how we can interpret visualizations like this, namely graphs in the computing sense of sets of linked points. I didn’t develop a general hermeneutics of visualization, or talk that much about this CSEC slide, but stayed focused on one type of visualization, the graph with nodes (vertices) and edges on a plane.

Here are some of the approaches I took.

What is a graph?

It is important to draw on the traditions of computing to understand visualizations, especially if we want to understand how they are rendered by computers, how they are conceived by programmers, and how the tools work. For this reason we want to understand the basics of graphs. Graphs are representations of sets of points and their links. The simplest heuristic to interpreting a graph is to ask what the points represent and what the lines connecting points represent. That’s all there is to interpret in a simple graph which shows a set of nodes or vertices with edges between some of them. There is no information in the distance between nodes or their position on the screen; all that is generated by the tool based on rules about how to make graphs pretty and easy to read. (See the poetics below.)

I should add that a key heuristic for interpreting any visualization is to ask what is metrical (based on a measurement in the data) and what is not. For example, in word clouds the location of a word and its colour are often random, while the size of the word and its centrality are based on its frequency. Many of the graphic features can have nothing to do with the data being represented, which means they have nothing to do with the phenomenon the data comes from.

When rendering graphs a computer needs ways to represent the points, ways to arrange the objects, ways to represent the edges, and ways of decorating the background plane. The rules or algorithms used to determine how the points, lines and plane are drawn often have more to do with aesthetic choices than with some feature of the thing being represented. This can mislead us into overinterpreting a graph.
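
Reduced to data, a graph of this kind is nothing but two sets, and the heuristic above is simply to ask what each set stands for. A minimal sketch (the names and relationships are invented for the example):

```python
# A graph is just two sets: nodes and edges. The interpretive heuristic –
# "what does a point stand for, what does a line stand for?" – covers all
# the data licenses; nothing about position or distance is stored here.
nodes = {"Alice", "Bob", "Carol", "Dora"}          # points: people
edges = {("Alice", "Bob"), ("Alice", "Carol"),     # lines: "corresponded with"
         ("Bob", "Dora"), ("Carol", "Dora")}

# Degree (how connected a node is) can be read off the data;
# where a node ends up on the screen cannot.
degree = {n: sum(n in e for e in edges) for n in nodes}
print(degree)   # each node has degree 2 in this toy network
```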

What are some of the types of graphs?

The simple idea of a network graph has been used in many familiar contexts to represent very different types of things, from sentences to networks. We have traditions of using and interpreting them that are important to understand, as interpretative expectations can bleed from one to another. Some of the types of familiar graphs that I showed include:

  • Family trees where there are conventions of layering the nodes in generations. These use the layered location to arrange the generations of children nodes so you can tell where the third generation is quickly. We could say that these use a structured surface for the plane where most do not.
  • More generally tree graphs usually start from one node and there is interpretative expectation that things evolve out from there. Examples might be trees of knowledge or trees of life.
  • One particular type of tree diagram that is used in data representation are dendograms which show the clustering of things at different levels.
  • In linguistics we see parse trees that show the parts of a sentence.
  • State Diagrams are less familiar, but they and flow charts can show processes. John B. Smith used state diagrams for rhetorical moves in "Computer Criticism" (1978).
  • Flow Charts can also help with making decisions (decision trees). These can be considered as part of a larger class of diagrams for reasoning with. Such diagrams don’t represent something that already exists, but they let you generate knowledge.
  • Visual Programming environments are an extension of knowledge generating diagrams that allow one to “pipe and flow” data through simple processes.
  • Radial charts are a form of arrangement that puts the points around a circle and then connects them with curves. TextArc (not really a graph, but worth mentioning) and Saklovskie's NewRadial are examples.
  • Arc Diagrams are another form of arrangement that puts the points on a line and then creates arcs between them. This can be used in timelines.
  • Sociograms or social network graphs are in many ways the most important tradition and they go back to work by Jacob L. Moreno in the 1930s. They show the relationships between groups of people and are used in ethnography and sociology. They are important because they show us, people, in networks of relationships. They have come to form our imagination of how relationships can be shown. I compared this way of representing a relationship to that in a painting like David’s Death of Socrates. I showed a sociogram tool called RezoViz that we developed as part of Voyant. Here it is showing people and other entities in Humanist.
  • Citation Networks show a particular type of relationship between academics – that of citation, like this one of Comparative Literature 2004-14: who cites whom, and which people or articles are central. A minimal sketch of such a citation network as a graph follows this list.
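Here is the sketch promised above, again assuming Python and networkx, with invented article names: a citation network is just a directed graph in which an edge from X to Y means "X cites Y", and a simple degree count gives a first answer to the question of who is central.

```python
# Sketch of a citation network as a directed graph (names invented).
import networkx as nx

citations = nx.DiGraph()
citations.add_edges_from([
    ("Article A", "Article B"),   # A cites B
    ("Article C", "Article B"),
    ("Article C", "Article A"),
    ("Article D", "Article B"),
])

# In-degree counts how often an article is cited; the most-cited is "central".
print(sorted(citations.in_degree(), key=lambda pair: pair[1], reverse=True))
```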

The Poetics, Ontology and Epistemology of Graphs

Poetics: Expanding on the heuristics for interpreting a graph mentioned above, it is useful to ask how the graphs we see are made (poetics). What sort of data are network graphs typically used for? What sorts of algorithms are there for generating (rendering) the graphs? A good place to look for the algorithms and the aesthetics they encode is the computing literature and the code libraries people use. Di Battista et al. in Algorithms for Drawing Graphs: an Annotated Bibliography (PDF) (1994) provide a great overview. They list the aesthetics for attractive simple graphs as:

  • display symmetry;
  • avoid edge crossings;
  • avoid bends in edges;
  • keep edge lengths uniform;
  • distribute vertices uniformly. (p. 7)

One of the ways these aesthetic guidelines are achieved in real code is to create a physics model of the data as a collection of rings connected by springs and then to calculate and show the rings (nodes) in tension with each other. This is why network visualization tools often have sliders to control tension or how much nodes repel each other.
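As a sketch of the "rings and springs" idea, networkx's spring_layout implements a force-directed (Fruchterman-Reingold) model in which edges pull nodes together and nodes repel one another; its k parameter plays roughly the role of the tension slider mentioned above. This is an illustrative stand-in, not the specific physics model of any tool discussed in the talk.

```python
# Sketch: the same graph laid out with different spring "tension".
import networkx as nx

G = nx.karate_club_graph()   # a small built-in social network

loose = nx.spring_layout(G, k=1.0, iterations=50, seed=1)
tight = nx.spring_layout(G, k=0.1, iterations=50, seed=1)

# Same data, different tension: the coordinates change, the graph does not.
# The arrangement is an aesthetic decision, not a measurement.
print(loose[0])
print(tight[0])
```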

Epistemology: a large topic that I didn’t have time to go into is how we make and then read meaning from visualizations in general and graphs in particular. This is obviously connected to the issue of interpretation. I mentioned Ben Shneiderman’s Visual Information Seeking Mantra from “The Eyes Have It” (1996).

Visual Information Seeking Mantra:

  • Overview first,
  • Zoom and filter,
  • Details on demand

This mantra is both normative and descriptive in that it is meant to capture how people read visualizations and therefore how you should design them. The idea is that users begin by getting an overview, then they zoom in and explore parts/relationships, and lastly they check details that interest them. We need to learn a lot more about how exploration works cognitively as compared to reading.
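A hedged sketch of how the mantra might play out on a network graph, again using networkx purely for illustration (the thresholds are arbitrary):

```python
# Sketch: overview first, zoom and filter, details on demand.
import networkx as nx

G = nx.karate_club_graph()

# Overview first: how big is the whole thing?
print(G.number_of_nodes(), G.number_of_edges())

# Zoom and filter: keep only the better-connected nodes.
core = G.subgraph([n for n, d in G.degree() if d >= 5])
print(core.number_of_nodes(), core.number_of_edges())

# Details on demand: inspect one node that interests us.
print(G.degree(0), G.nodes[0])
```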

Another approach to the generation of meaning is to look at the graphical features (points, lines, plane) and what each is used for. The assumption is that there is some sort of analogy between the linking of points and the real-world relationships being graphed. The graph creates a visual model of the abstract network of relationships. One can see how the imagination begins to dominate our conception. Relationships are not lines between solitary points, but that is how we have come to “see” them thanks to sociograms and other network graphs.

A direction I would like to take further would be to draw on how artists think of abstract art. In particular, Kandinsky’s Point and Line to Plane strikes me as a useful artist’s view on the use of these graphic primitives. For example, he makes a point about External and Internal Experience. One can experience something from outside it (as if through a window above it) or one can experience it from inside the phenomenon.

Ontology: connected to the epistemology of the graph is its ontology. What are graphs and what do they represent? I argued that they are models of models. They are representations of data sets, not of the thing studied. The datasets are in turn measurements or observations of the thing studied. Sometimes there are intermediate models, giving us layers on layers. The point is that they show interpretations, not the thing. This is a point Moretti makes and that Smith actually theorized in 1978 in his structuralist “Computer Criticism.” One could say that network graphs pretend to show the underlying structure of the phenomenon, and that is part of their attraction.

Interestingly, graphs are not necessarily quantitative. One can represent with a graph a table of friends’ names and whether or not they are friends. All the cells are names and interpretations of friendship entered by hand. There is no quantitative data, only categorical data.
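A sketch of this point, with invented names: the "table" below records only a hand-entered categorical judgement (friends or not), and that alone is enough to draw a network graph.

```python
# Sketch: a graph built from purely categorical, hand-coded judgements.
import networkx as nx

friendships = {
    ("Alice", "Bob"): True,
    ("Alice", "Carol"): False,
    ("Bob", "Carol"): True,
    ("Carol", "Dan"): True,
}

G = nx.Graph()
G.add_edges_from(pair for pair, are_friends in friendships.items() if are_friends)

# Only the "yes" judgements appear as edges; nothing here was counted or measured.
print(list(G.edges()))
```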

Why the attraction?

This brought me back to the title and the attraction of the graph. This part was more speculative. Some reasons why network graphs in general and the CSEC one in particular are attractive include:

  • Simple graphs are simple – just points and lines. This leaves lots of room for the imagination to see things into them and to explore them into significance.
  • Visual exploration of a rich graph feels much more open and free than following a sequence of statements as one might find in a text, syllogism, or code which is read in a linear way. They feel like they can be consulted rather than having to be read end to end.
  • They show time in the space of the plane. Everything that might be complex and hidden in time is arranged out on the planar screen to be seen in one glance.
  • Another way to put this is that they provide both overviews and then details. There is something attractive to the way one can explore a rich graph.
  • Or we could say that time (and distance) are collapsed into an abstraction for the screen.
  • This puts everything into an arrangement for a gestalt view where you have the impression you can see the whole. You can grok it! This establishes yet another connection, this time between you and the visualization.
  • One of the things compressed is the process of generating observations and then the visualization. The messiness is hidden in the white box – the black box you don’t even know is there.
  • Finally, the network graph has become the visualization of the networked age. I showed early and later graphs of the internet, graphs of communication and how computers work, graphs of network cables, ontological graphs and graphs of surveillance software architecture. The graph is the form with which we imagine virtual culture. It is a form without distance (time) or position (space). It is the simplest form of abstraction.

Of course, it also has little to do with the material form of computing or networks or software, but that is the point of the virtual. It approximates the way we imagine pure abstraction.

As for the communications graph I started with, it had further attractions, as it carried traces of its generation (time) and redaction. It has a context which can be unpacked from the visualization. In this way it is far richer than most graphs, which hide all the messiness.

What makes an attraction strange? I didn’t have time to explain my title, but the idea of an attractor in systems theory is that it is something towards which a system evolves. I was using the term metaphorically for the way we can feel there is an interpretation that acts as an attractor, pulling us towards its emergence. The attractor can be felt as a hidden structure that attracts the arrangement of the visualization. This sense that there is an attractor operating on us in visualizations is what is strange. For that matter, the attractors are strange: they don’t exist in the sense of some underlying truth which we get closer to with certain visualizations, yet we are driven to keep on trying, as if attracted to their flame.

What are some guidelines for design?

The problem with network graphs is that it is easy to overinterpret them. I was asked during question period if I had suggestions for how to design network graphs that would be hermeneutically more robust. Here are some of the ideas I put forward:

  • Provide controls so the user can play with the visualization and see that there is no single right arrangement (a small sketch after this list makes the point by laying out the same graph twice). Controls also give some representation of the user’s perspective, making the point that all visualizations are from a perspective.
  • By extension, one can animate the graph, as many based on physics models are. The nodes can move around on their own as the springs bounce them about. This makes it clear that there is no objective arrangement. The user can also drag nodes and watch which rubber bands pull other nodes along. This manipulation is not only satisfying, it helps the user understand what is based on the observations and what is a rendering decision.
  • Present the graph with other linked visualizations that let the user see different and conflicting views onto their data. For humanists it is also a good idea to give them access to the “original” text or metadata so they can use other reading methods to see if there are reasons for the patterns being seen. This is an example where giving details allows intuitions to be checked in ways that humanists are comfortable with.
  • I closed by showing a Mathematica notebook where I could show the code alongside the visualization. The notebook is a form of meta-interface which visually arranges text, code, and results in a way that is meant to recall the scientist’s notebook. This means that any visualizations (even interactive ones) are woven together with their code. This way you can see the logic of the rendering.
  • Reflecting later, I wondered if Kandinsky’s point about internal experience couldn’t be revised into a guideline – namely, that visualizations should look as if they are viewed from inside the experience. This would mean making it clear that not everything can be seen (lines might always lead off the screen).
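Here is the small sketch promised in the first guideline, again an illustrative assumption using networkx rather than any tool shown in the talk: laying out the same graph with two different random seeds produces two different but equally defensible pictures, so only the nodes and edges are fixed by the data.

```python
# Sketch: same graph, two layouts -- there is no single "right" arrangement.
import networkx as nx

G = nx.les_miserables_graph()   # a built-in character co-occurrence network

layout_a = nx.spring_layout(G, seed=7)
layout_b = nx.spring_layout(G, seed=42)

# Compare where one character lands in each rendering.
print(layout_a["Valjean"])
print(layout_b["Valjean"])
```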