3DH – Three-dimensional dynamic data visualisation and exploration for digital humanities research (http://threedh.net)

DHd2018 in Cologne!
(5 April 2018, http://threedh.net/dhd2018-in-cologne/)

It's about time for a new blog post! 3DH has progressed quite a bit over the last couple of months. We fleshed out wireframes based on the conceptual foundations that came out of the workshop in Montreal, refined them iteratively, and finally brought the concept to an interactive prototype that we were able to present at the DHd2018 conference in Cologne.

[Screenshot: main screen of the interactive prototype]

After the Montreal workshop we focused on the classic close reading scenario with an emphasis on interpretation, because it can be considered the scenario to which the 3DH postulates apply the most, so we wanted to make sure we covered it first: the exploration of free annotations to sharpen a literary research question.

We reasoned that developing an early proof-of-concept prototype for this scenario first would make it easier to transfer interface principles to the other scenarios. Over the course of the last months we chose a text we deemed appropriate for the prototype and for the intended audience and populated the scenario with real data. The text we wanted to annotate needed to fulfill some basic requirements: it should be well known, so people can relate to it; it should be complex enough that different paths of interpretation can be pursued; it should be short enough that people can actually read the text without spending too much time, if they want to; and it should still be long enough that visualization as a method of getting an overview really makes sense. We picked the short story In der Strafkolonie (In the Penal Colony) by Franz Kafka.

For this short story we created over 600 annotations in 19 different interpretation categories in Catma. In the next step we exported our Catma annotations as JSON and built a web-based demonstrator with JavaScript and D3 that shows the most important interactions of the concept.
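To make that pipeline concrete, here is a minimal sketch of the pattern behind such a demonstrator: binding exported annotation records to SVG glyphs with D3. The flat record shape and field names are simplified stand-ins of our own, not Catma's actual export schema, and the sketch assumes a recent version of D3 (v6 or later) is loaded in the browser.

```javascript
// Minimal sketch: bind annotation records to SVG "glyphs" with D3.
// The record shape below is a simplified stand-in, not Catma's
// actual JSON export format.
const annotations = [
  { id: 1, category: "guilt",   start: 120, end: 185 },
  { id: 2, category: "machine", start: 410, end: 470 },
  { id: 3, category: "justice", start: 512, end: 590 },
];

const svg = d3.select("body").append("svg")
  .attr("width", 600)
  .attr("height", 200);

const color = d3.scaleOrdinal(d3.schemeCategory10)
  .domain(annotations.map(a => a.category));

// One circle per annotation: x follows the text position,
// radius follows the length of the annotated span.
svg.selectAll("circle")
  .data(annotations, a => a.id)
  .join("circle")
  .attr("cx", a => a.start / 2)
  .attr("cy", 100)
  .attr("r", a => Math.sqrt(a.end - a.start))
  .attr("fill", a => color(a.category))
  .append("title")
  .text(a => a.category);
```

In a real demonstrator the annotations would of course be fetched (for example with d3.json) rather than hard-coded.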

The main principles of our concept are the tripartition of the interface and the representation of annotations as glyphs. So while we held on to the idea of glyphs (mentioned in the last article), we abandoned the idea of a strict spatial separation between the two activity complexes of research and argument. We came to the conclusion that scholarly activity is better represented by three adjustable spaces: text, canvas and argument.

Here, text is simply the part of the interface where our research text can be read and annotated. For each annotation a glyph is created on the canvas in the middle of the interface. We can sort these glyphs, structure them according to different criteria and draw connections between individual glyphs or groups of glyphs. Scholars can save multiple canvasses, each of them highlighting a particular aspect of the text. In the argument space on the right side of the interface these canvasses can be combined and arranged to form an argument.
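To illustrate how the three spaces might hang together as data, here is a hypothetical state shape; all names in it are invented for this sketch, not taken from the actual prototype.

```javascript
// Hypothetical state shape for the three spaces (all field names invented):
// annotations live with the text, each canvas is one saved arrangement of
// glyphs, and an argument sequences canvasses with commentary.
const state = {
  text: {
    source: "In der Strafkolonie",
    annotations: [
      { id: 1, category: "guilt",   start: 120, end: 185 },
      { id: 2, category: "machine", start: 410, end: 470 },
    ],
  },
  canvasses: [
    {
      id: "c1",
      title: "Guilt and the apparatus",
      filter: { categories: ["guilt", "machine"] },   // which glyphs appear
      layout: { 1: { x: 40, y: 80 } },                // manual glyph positions
      links: [{ from: 1, to: 2, label: "contrast" }], // drawn connections
    },
  ],
  argument: [
    { canvasId: "c1", caption: "The guilt motif clusters around the apparatus." },
  ],
};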

Since the topic of this year's DHd2018 conference was critical perspectives on the digital humanities, our contribution put an emphasis on our design process and the design-based critical perspective we applied in it. We talked about how we incorporated the four methods (scenarios, wireframes, prototyping and design reviews) into our process and how these helped us gain new insights and arrive at the current state of the design.

Here’s a link to the early prototype that allows you to explore the interaction between annotations and glyphs:

Prototype

You can find our slides here:

Slides

These two videos show the interplay between the three parts of the interface:

  • Populating the canvas with glyphs
  • Filtering, adding canvasses and drawing
3DH Workshop in Montréal
(20 October 2017, http://threedh.net/3dh-workshop-in-montreal/)

Prior to this year's DH conference in Montreal, Canada (8–11 August), some of us flew in a little earlier to come together for a workshop in the context of the 3DH project. Apart from the core project team and our colleagues Evelyn Gius and Marco Petris, we were joined by our associated members Johanna Drucker, Geoffrey Rockwell and Marian Dörk, as well as by Laura Mandell.

Over the span of two and a half days we had an intense and productive workshop whose goal was to refine and reify the three concepts we had developed over the course of the preceding weeks. Springboards for this process were, on the one hand, our four conceptual 3DH postulates (2-way screen, parallax, qualitative and discursive) and, on the other hand, reflections on supporting the process of interpretation in digital tools. We specifically discussed the relevance of the article “Thinking about interpretation: Pliny and scholarship in the humanities” by John Bradley.

What is intriguing about the software Pliny described by Bradley is that scholars are not bound in the way they organize their notes and annotations; there is no need to assign distinct categories or relations to them. Instead, these can be organized on a plane, and emerging structures that become apparent can be inscribed by encapsulating them in boxes as the interpretation progresses.

This appears to be a way of modelling interpretative data that takes into consideration methods scholars have been using in the analog world, yet also exceeds them and opens up new possibilities enabled by the digital (in terms of interaction with and visualization of data), an approach that seems very much related to the goals of the 3DH project as well.

In our design process so far we have based our concepts on real-world scenarios fed by the experiences of literature scholars in research projects, and we arrived at conclusions similar to Bradley's: it seems counterintuitive to force scholars to apply structure to their annotations when they start their process. Relations between annotations, and qualitative statements about them, often can only be made once the process has progressed.

When we discussed the wireframes in the workshop we realized that we can differentiate two environments or spaces of literary scholarly work; Johanna called these the research space and the argument space. While we define typical descriptive acts of the scholarly process, like annotating, collecting and commenting, as research activities, we consider tasks like grouping, ordering and organizing as interpretative or, at later stages, argumentative activities. Usually scholars switch between activities of either mode perpetually.

[Sketch: interplay between research environment and argument environment (by Johanna Drucker)]

We understood that this circumstance has to be supported much more deliberately by the interface. Thus, for the next steps in the design process we will focus on the representation of, and interaction between, these spaces in the interface. What would an interface look like that supports continuous switching between the activities mentioned?

In the discussion we came up with the concept of a semantic plane that might allow us to bring these two spaces together. While in the research phase we would produce annotations that are represented as glyphs on the plane, in the argument phase we would position and manipulate these glyphs to assign meaning to them and create arguments that we can later publish.

Getting more specific: Refinement of our narratological use case(s)
(7 June 2017, http://threedh.net/getting-more-specific-refinement-of-our-narratological-use-cases/)

Second Co-Creation Workshop in Potsdam, May 31st, 2017

We are halfway through our lecture period by now, and a lot has happened since our first co-creation workshop in Potsdam at the end of April.

The concept sketches that were created by our five student groups during the first workshop were elaborated on in preparation for the next exchange between Hamburg and Potsdam.

On May 10th, Marian Dörk and I, together with the Potsdam interface design students, visited Chris Meister, Rabea Kleymann and the other members of the team to join Chris’ seminar and the accompanying exercise.

In the seminar Chris gave an introduction to the collaborative annotation tool Catma and explained to the Potsdam students how one would use the tool with a certain literary question in mind. This introduction was meant to serve as a primer on Catma on the one hand, but also as an insight into the literary scholar's process. Since the Potsdam students are supposed to base their visualizations on real data, i.e. narratological annotations produced in Catma, we deemed it necessary to make them comfortable with the process and the tools. The annotations they will eventually use will be produced and made available to them by the Hamburg students via Catma.

In the exercise the interface design students presented their refined concepts to the Hamburg students and Chris Meister's team. The concepts were quite diverse, in terms of the narratological questions they were supposed to address as well as of media, technology and design. The images below give an impression.

[Sketches from the short presentation in Hamburg]

The predominant issues addressed by the concepts were, among others, narrative levels, advanced text search, narrative polarities, and relations between objects, characters and parts of the text. After each presentation the literary scholars gave feedback on the projects. In the following three weeks the students had time to continue working on their concepts before the two student groups from Hamburg and Potsdam got together again.

On May 31st our second co-creation workshop took place at the University of Applied Sciences Potsdam. The goal of this workshop was to sharpen the students' concepts with respect to their ability to help answer narratological questions. In the first part of the workshop the student groups presented their current status, each followed by a short discussion. In the second part every interface design group in Potsdam was assigned one or more students or researchers from Hamburg and worked on reifying its concept. This session was also meant as a chance for the interface design students to ask the questions regarding narratological analysis that they had collected over the past weeks.

In the weeks before, there had been some rearrangements within the groups and some conceptual reorientations. At the moment there are six groups; their concepts are summarized in the following.

Storylines

The idea of this concept is to mark frame narratives and embedded narratives in the text and to visualize their nesting at different levels of detail. These representations will be combined with content information, so viewers are informed about the interplay between structure and content.

Narrative Levels

[Sketch: narrative levels concept]

In this circular visualization, ring segments represent narrative levels of different order. In the center, relations between levels are visualized, for example character networks or the duration and frequency of certain events.
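Nested ring segments of this kind are straightforward to prototype with d3.arc(); the following sketch is purely illustrative, with invented levels and spans rather than the students' actual design.

```javascript
// Illustrative only: nested ring segments for narrative levels, drawn
// with d3.arc(). Depths and spans are invented sample data.
const levels = [
  { depth: 0, start: 0.0,  end: 1.0 },  // frame narrative: the full circle
  { depth: 1, start: 0.15, end: 0.6 },  // first embedded narrative
  { depth: 2, start: 0.2,  end: 0.35 }, // narrative within the narrative
];

const arc = d3.arc()
  .innerRadius(d => 40 + d.depth * 25)
  .outerRadius(d => 60 + d.depth * 25)
  .startAngle(d => d.start * 2 * Math.PI)
  .endAngle(d => d.end * 2 * Math.PI);

d3.select("body").append("svg")
  .attr("width", 300)
  .attr("height", 300)
  .append("g")
  .attr("transform", "translate(150,150)")
  .selectAll("path")
  .data(levels)
  .join("path")
  .attr("d", arc)
  .attr("fill", d => d3.schemeBlues[5][d.depth + 2]);
```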

Manual Topic Modeling

[Sketch: manual topic modeling tool concept]

The students developed a tool that lets you search for the occurrences of words and manually define topics by putting words together. The distribution of these words can then be visualized. The group is currently looking into other text analysis features that might be helpful for narratological analysis.
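The core operation behind such a tool fits in a few lines; the sketch below is our own illustration (not the students' code) of how a manually defined topic could be turned into a per-segment distribution for plotting. Function and parameter names are invented.

```javascript
// Given a manually defined "topic" (a set of words), compute its relative
// frequency per text segment so the distribution can be visualized.
function topicDistribution(text, topicWords, segmentCount = 20) {
  const tokens = text.toLowerCase().match(/\p{L}+/gu) || [];
  const segmentSize = Math.ceil(tokens.length / segmentCount);
  const topic = new Set(topicWords.map(w => w.toLowerCase()));
  const distribution = [];
  for (let i = 0; i < tokens.length; i += segmentSize) {
    const segment = tokens.slice(i, i + segmentSize);
    const hits = segment.filter(t => topic.has(t)).length;
    distribution.push(hits / segment.length); // relative frequency per segment
  }
  return distribution;
}

// e.g. topicDistribution(novelText, ["kartoffel", "bericht", "schreiben"]);
```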

Influences

The group wants to visualize influences on the protagonist of a novel. From a narratological perspective it could be interesting to analyze how the characterization of the protagonist develops over the course of the novel. The next step for the group will be to collect all text passages that characterize the protagonist and to develop a suitable metric that can be visualized to represent the characterization. In a further step this could be related to the context surrounding a characterizing passage.

Nodes

[Sketch: nodes concept]

In this concept the attraction and repulsion between different entities in the text (characters, objects, text fragments, for example) is visualized in a VR environment (Google Cardboard). Attributes that influence the degree of attraction or repulsion are, for example, frequency or proximity in the text. Different entities can be pinned to get an impression of how they relate to other entities. This way it becomes possible to assume the perspective of a particular entity.
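As a rough two-dimensional stand-in for the idea (not the students' VR implementation), the same behaviour can be sketched with d3-force: co-occurrence drives the attracting link strength, while a charge force provides the repulsion. Entities, counts and strengths below are invented.

```javascript
// Attraction by co-occurrence, repulsion by charge: a 2D d3-force sketch.
const nodes = [
  { id: "officer" }, { id: "traveller" }, { id: "apparatus" }, { id: "condemned" },
];
const links = [
  { source: "officer",   target: "apparatus", cooccurrence: 12 },
  { source: "officer",   target: "traveller", cooccurrence: 7 },
  { source: "traveller", target: "condemned", cooccurrence: 2 },
];

const simulation = d3.forceSimulation(nodes)
  .force("charge", d3.forceManyBody().strength(-30))  // repulsion
  .force("link", d3.forceLink(links)
    .id(d => d.id)
    .strength(l => Math.min(1, l.cooccurrence / 12))) // attraction
  .force("center", d3.forceCenter(0, 0));

// When the layout has settled, the node positions reflect the relations.
simulation.on("end", () => {
  nodes.forEach(n => console.log(n.id, n.x.toFixed(1), n.y.toFixed(1)));
});
```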

Narratological Cards

[Sketch: narratological cards concept]

In this concept a set of visual modules allows users to analyze a narrative in a physical way by putting together cards that represent different narratological features, like narrators or narrative levels, their position in the narrative and their interconnections.

Our hope is that this close collaboration between the two student groups over the whole course of the semester continues in such a fruitful way and will eventually lead to visualization tools that are truly user-centered and oriented towards the needs of narratologists.

Visualisation of literary narratives: How to support text analysis with visualisations? Creating a narratological use case
(15 May 2017, http://threedh.net/visualisation-of-literary-narratives-how-to-support-text-analysis-with-visualisations-creating-a-narratological-use-case/)

First Co-Creation Workshop in Potsdam, April 26th, 2017

The 3DH project aims to lay the foundations for a ‘next-generation’ approach to visualisation in and for the Humanities. As for the theoretical background, the project’s frame of reference are the particular epistemological principles that are relevant to hermeneutic disciplines and which must therefore also orientate our approach to visualisation.

For the 3DH visualisation concept we formulate four postulates. These are

  1. the “2 way screen postulate” (i.e. an interaction focused approach toward visualisation);
  2. the “parallax postulate” (i.e. the idea that visualisation in and for the humanities should not just tolerate, but actively put to use the power of visual multiperspectivity in order to realise epistemic multiperspectivity);
  3. the “qualitative postulate” (i.e. the idea that visualisations should not just ‘represent’ data, but also offer a means to make and exchange qualitative statements about data);
  4. the “discursive postulate” (i.e. the idea that visualisations should not just be used to illustrate an already formed argument or line of reasoning, but should also become functional during the preceding/subsequent steps of reasoning, such as exploration of phenomena and data, generation of hypotheses, critique and validation, etc.).

During the 2016 summer term we organized a public lecture series on DH visualisations (see also 3DH blog and  https://lecture2go.uni-hamburg.de/l2go/-/get/v/19218).

One outcome of the lecture series was the realization that we needed to bring in the expertise of visual design specialists. By bringing together the “two worlds” of literary studies and visual design we hope to transcend the limitations of our respective visual(ising) routines.

Co-teaching seminar: University of Applied Sciences Potsdam and University of Hamburg

In the 2017 summer term we began to engage in a co-teaching project with the visual design specialists Marian Dörk and Jan Erik Stange from the University of Applied Sciences Potsdam. Two groups of students meet during four workshops held alternately in Potsdam and in Hamburg: one a class of German literature master students (Prof. Chris Meister, Universität Hamburg), the second a class of design students (Prof. Marian Dörk). Their joint goal is to answer two questions:

  • ‘To what extent can visualisations be helpful for the analysis of literary texts?’ and
  • ‘Where do visualisations have their place in a subjective and interpretive structure?’

The literary text under discussion is the novel Johannisnacht by the German author Uwe Timm, published in 1996. It tells the story of a writer suffering from writer's block who gets the opportunity to write a report about the history of the potato. As trivial as this task seems at first, the writer's research increasingly becomes an odd and life-threatening adventure. In a formal aesthetic interplay between narratological categories such as the narrator, the discursive parameters of time and place, and metafictional elements of self-reflection, Johannisnacht lays out the genesis and the specific functionalities of narration.

Our two groups approach this text from two different perspectives: the German literature students try to identify and define possible visualisation needs for their work in text analysis (in which they focus on some of the novel's narratologically salient aspects). The task of the design students, on the other hand, is to consider the literary text as a whole and find practical solutions for visualising those of its structural features that might be pertinent to the literary scholar's analytical needs. The combination of both approaches, the narratological/literary studies perspective with the visualisation design perspective, is our first step toward defining a specific narratological visualisation use case.

First Co-Creation Workshop and its results

After a short input presentation on relevant narratological concepts and methods (Chris Meister) we tried to gain a first understanding of some of the novel’s structural features and then co-created first drafts of visualisations. Our sketches focused on three questions:

  • What narratological questions are raised by reading and analysing the whole novel or a single chapter of Johannisnacht?
  • What texts and data are required for answering this question?
  • What kind of visual representation could be suitable for answering this question?

Here are our first visualisation ideas:

  • This sketch tries to point out the narratological category of focalisation in the course of the story.
  • This sketch shows an attempt at structuring the story of the novel by describing the relationships between character, narrator and narrated objects.
  • This group gains access to Johannisnacht by asking ‘What generates complexity in a novel?’
  • This sketch shows an attempt to visualise the narrative levels of Johannisnacht.
  • This group focuses on the character constellation in the novel Johannisnacht: Who is talking about whom? On which narrative level is a character introduced?
VIS2016 – K. Coles: Show ambiguity
(25 November 2016, http://threedh.net/vis2016-coles-show-ambiguity/)

This is the first post in a series about what I think is most relevant for 3DH from the IEEE VIS2016 conference in Baltimore.

The poetry scholar Katherine Coles gave a presentation on Poemage at the VIS4DH workshop at VIS2016. Poemage is a tool for exploring and analysing the sound topology of a poem, and an interdisciplinary work between poetry scholars, computer scientists and linguists. Recommended reading is not only the presented paper Show ambiguity, which takes the poetry scholar's perspective on Poemage, but also the companion paper, which complements it by adding the computer scientist's stance. Besides the methodological principles covered by Poemage, both papers also give great insight into the collaborative aspects of the project across disciplines.

[Screenshot: the Poemage user interface]

The UI of Poemage offers three views. The Set View offers rhyme sets, which are sets of words connected by a specific rhyme scheme, organized by rhyme type; each circle represents a specific rhyme set, and the size of the circle depends on the number of words in the set. The Poem View shows the poem in its original form. The Path View provides a 2D space where the flow of the poem according to its rhyme topology is displayed: each node represents a word in the poem and is positioned in relation to its position in the layout of the poem, and the curves show the flow of a rhyme set through the poem. The views are linked by color coding and by interaction: e.g. selecting a rhyme set in the Set View also activates the visualization of that rhyme set in the other two views.

I especially like the openness of the tool. It supports and encourages multiple readings, and the rhyme types are extensible in two ways. The simple way allows the scholar to group words freely to form custom sets without being bound to any predefined rhyme type. The more complex way gives the scholar access to the underlying rules engine, a formalism for formulating new rhyme types in a notation geared to poetry scholars.

The representation of rhyme sets as paths allows exploration of the rhyme topology by examining spatial phenomena of the paths, like intersections, mergings and divisions. There is a tight link between the visualisation and the poem that makes it easy to trace observations in the visualization back to the original data.
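To make the path idea concrete, here is our own sketch of the underlying computation (not Poemage's actual code): a custom rhyme set is just a set of word forms, and its "path" is the ordered list of positions where those forms occur in the poem.

```javascript
// A custom rhyme set is a set of word forms; its "path" is the ordered
// list of (line, word) positions where those forms occur in the poem.
function rhymeSetPath(poemLines, rhymeSet) {
  const wanted = new Set(rhymeSet.map(w => w.toLowerCase()));
  const path = [];
  poemLines.forEach((line, lineNo) => {
    line.toLowerCase().split(/\s+/).forEach((word, wordNo) => {
      if (wanted.has(word.replace(/[^\p{L}']/gu, ""))) {
        path.push({ lineNo, wordNo, word });
      }
    });
  });
  return path; // draw as a curve through these positions
}

// e.g. rhymeSetPath(poem.split("\n"), ["bright", "night", "sight"]);
```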

Another interesting aspect of the talk was Coles' view on the humanistic idiosyncrasies of data visualization, especially in poetry scholarship. She wanted Poemage “to provide an aesthetically enriched experience” and emphasized the engagement between scholar and object of study, which should extend to the visualization as well.

When we discussed the humanities' special needs for visualization in the 3DH project so far, I (with a computer science background) was very sceptical about placing the humanities on one side and the hard sciences on the other. On the contrary, I can see a lot of common ground between a physicist and a humanities scholar exploring and interpreting their data with visualizations. Instead of seeing the two as opposites, we in 3DH started to work with a methodological continuum between the poles of subjectivity/uniqueness/particularity and objectivity/reproducibility/universality. I doubt that the kind of engagement Coles describes is the same engagement as between a physicist and his or her data. I think Coles managed to describe at least part of the possible contribution of visualisation to one extreme of that continuum, and this really helps to track down the methodological differences 3DH visualizations need to account for.

Lauren F. Klein: Speculative Designs: Lessons from the Archive of Data Visualization
(3 July 2016, http://threedh.net/lauren-f-klein-speculative-designs-lessons-from-the-archive-of-data-visualization/)

[Image: a Peabody visualization]

Lauren Klein's paper looked at two 19th-century pioneers of data visualization to see what we could learn from them. She asked,

What is the story we tell about the origins of modern data visualization?

What alternative histories emerge? What new forms might we imagine, and what new arguments might we make, if we told that story differently?

For an alternative history Lauren looked at Elizabeth Peabody, who is often overlooked because her visualizations are seen as opaque. She compared her to Playfair, who is generally considered the first in the canonical history of visualization. Lauren asked: why do visualizations need to be clear? Why not imagine visualizations that are opaque and learn from them? Her project is a digital recreation of Peabody's thinking.

Elizabeth Palmer Peabody (1804–1894) ran a bookstore in Boston that acted as a salon for the transcendentalists. In 1856 she published a Chronological History of the United States for schools. She traveled around to promote her textbook with a roll of mural charts resembling domestic rugs (see above). Her charts were based on a Polish process that generated overviews of history.

For modern mavens of visualization like Edward Tufte these charts would not be clear and therefore not effective. By contrast, Lauren sees the visualizations of Peabody not as clarifying but as a tool of process or knowledge production: you make knowledge rather than consume it when you make a chart. Clarity for those who didn't make it is beside the point.

Peabody also sold workbooks for students at schools that used the textbook, so that they could follow the lessons and rules to generate patterns. Hers is an argument for making, and this making has a historical context. Peabody rejected a single interpretation of history and imagined a visualization system that encourages different interpretations.

This led to one of the points of the talk: the very idea of visualization is itself historically situated and should be examined. And this led to looking again at the canonical works of William Playfair.

She then showed us some of Playfair's visualizations (from The Commercial and Political Atlas), which are much more readable; for that reason he is often seen as a pioneer in data visualization. Playfair is widely considered one of the first to abstract phenomena into data for visualization. Lauren pointed out that Playfair was not sure how his visualizations would be interpreted, but he did want them to make an impression that was “simple and complete.” He was good at this.

She then showed Lyra: A Visualization Design Environment, an open-source alternative to Tableau. There are a lot of Playfair emulators who use everything from Lyra to everyday tools like Excel to recreate Playfair's charts. There are plenty of tools out there with which one can create visualizations, including attempts to emulate Playfair.

What is interesting is that the designers of the professional tools made decisions about what visualizations should or could do. Thus we see a lot of line and bar charts and little resembling Peabody’s. The widely held belief is that visualization should condense and clarify.

Recreating Peabody

Lauren then shifted to describing an ongoing project to recreate some of Playfair's and Peabody's charts with different tools. They found the existing tools, like D3, hard to use: the tools all assume you start with data. This made her think about the status of data and its relationship to visualization.

She pointed out that when you use a tool for visualization you don't worry about the shape of the curve; you let the tool do that. Playfair did, however, worry about it. He had to engrave the curves by hand, and he played with the lines, trying to make them attractive to the eye.

Watt, for whom Playfair worked, suggested that he put the tables next to the charts. He did this in the first two editions of his book (and then removed the tables for the third). Even with those tables some of the graphs are hard to recreate; to make one of Playfair's charts they had to use data from two different places in Playfair. Again, almost all tools, like D3, now depend on data. The dependence on data is structurally woven in, unlike in more artistic tools like Illustrator.

She then showed a detail of an engraving error and discussed how it could have come about, perhaps because Playfair was tired when making the copper plate. In the digital artefact we don't see such errors; we only see the finished product. The digital masks the labour. Only in GitHub are the changes, and thus the labour, saved and viewable.

Then she showed the prototypes her team has made, including a “build” mode where you can construct a Peabody chart. They are now planning a large-scale project using LEDs on fabric to create a physical prototype, as that would be closer to the fabric charts Peabody made.

This returned her to labour, especially the labour of women. Peabody made copies of the charts for classes that adopted her book. Alas, none of these survived, but we do have evidence of the drudgery in her letters.

The Peabody charts remind Lauren of quilts, and she showed examples of quilts from Louisiana that were a form of community knowledge constructing genealogies. Such quilts have only recently been recognized as knowledge comparable to the logocentric knowledge we normally respect.

Lauren closed with a speculative experiment: how would we think differently if Peabody's charts had been adopted as the standard to be emulated rather than the line charts of Playfair? How might we know differently?

Her team's recreations of both the Playfair and Peabody charts are just such a sort of speculation: understanding through making.

You can watch the video with slides here.

Stan Ruecker: The Digital Is Gravy
(25 June 2016, http://threedh.net/stan-ruecker-the-digital-is-gravy/)

[Image: timeline design]

Stan Ruecker gave the 3DH talk on the 23rd of June with the enigmatic title The Digital Is Gravy. He explained the title in reference to gravy being what gives flavour to the steak. In his case, he wanted to show us how physical prototyping can give substance (steak) to the digital.

Stan started with an example of a physical prototype that materializes bubblelines, developed by Milena Radzikowska, who showed it at Congress 2016 in Calgary (see Materializing the Visual). He suggested that the materialization of a visualization slows down analysis and leads to other lines of thought.

At the IIT Institute of Design, Stan is weaving physical prototyping into digital design projects. His main research goal is to find ways to encourage people to entertain multiple opinions. He wants to build information systems that encourage the discovery of different perspectives and the presentation of multiple opinions on a phenomenon. The idea is to encourage reflective interpretation rather than dogmatism.

How prototypes build understanding

He listed some ways that prototyping can build understanding:

  • Build something to collect information
  • The prototype is itself a kind of evidence
  • Learning through making. You don’t even need to finish a prototype. “Fail early and fail often.”
  • The prototype is also a representation of the topic area

Why physicality is important

After returning to the materialized bubblelines he talked about why physicality is important:

  • Materialized prototypes take time differently, which can lead to new lines of thought
  • They can produce results that can be used for comparison (with other results)
  • They can engage physical intelligence: embodied experience can leverage different ways of knowing
  • They involve collaboration (over time) that draws on community knowing
  • They encourage multiple perspectives from different people and different points of view

My experience with the pleasures of physical prototyping in a group reinforces this point: making together builds shared understanding.

Timelines

He then talked about a project around timelines that builds on work Johanna Drucker did. He had gone through multiple prototypes, from digital to physical, trying to find ways to represent different types of time. He tried creating a 3D model in Unity, but that didn't really work for them. He now has a number of student designers who are physically modelling what the timeline could be like if you manipulated it physically, with the results then uploaded to the digital representation (the gravy).

Physical Qualitative Analysis

He then talked about how a multinational team is designing physical analytical tools. The idea is that people can analyze a text and model an understanding of it in a physical 3D space. It is grounded theory: you build up an emergent understanding. They tried creating a floating model like a Calder sculpture. They tried modelling technical support conversations. They used a wired-up coat rack, hacking what they had at hand.

My first reaction is that doing this physically would be so slow. But that is the point: slow down and think by building. They tried a digital table, and that was no fun, so they started making all sorts of physical prototypes instead.

I'm guessing it would be interesting to look at Ann Blair's Too Much To Know, where she talks about the history of note-taking and physical ways of organizing information, like excerpt cabinets.

Stan then talked about a successful line of prototypes with transparent panels that could be organized and joined, and on which ideas could be placed with post-it notes. Doing this in a team encourages users to take different views on a subject, as the panels have two sides and can be joined to have even more.

Finally, they are now trying to bring these back to the digital so that once you have an arrangement of panels with notes you can digitize it and bring it into the computer. This also suggests the possibility of automatically generating the model on the computer from the text.

He commented that he has, as yet, no industry partner interested in the analysis of conversations.

And that was the end.
Leif Isaksen: Revisiting the Tangled Web: On Utility and Deception in the Geo-Humanities
(19 June 2016, http://threedh.net/leif-isaksen-revisiting-the-tangled-web-on-utility-and-deception-in-the-geo-humanities/)

Leif Isaksen gave the lecture on the 16th of June. He has a background in history, computer science, philosophy and archaeology. He spends a lot of time thinking about how to represent complex spatial arguments to other people, which has led him to ask: how can we read (closely) the historical depictions of geographic space? How can we approach someone else's visualization when we have only the visualization? He then joked that a better title for his talk might be “Thoughts on Predicting the Ends of the World”, where “ends” can mean goals in representing the world.

Some of the things we have to think about when reading historical visualizations include:

  • Classification – how is the world classified when the visualization was drawn up?
  • Derived vs manually produced data – how did the data get to the cartographer and, for that matter, how did the map get to us?
  • Graphic vs. textual representations – we are continually transforming representations from visual to textual and back – what happens in the transcoding?
  • Epistemology – how do we know what we think we know?
  • Time and change – how is time and change collapsed in representations of space?
  • Completeness – we never have complete information, but sometimes we think we do
  • Data proxies – we are not interacting with the phenomenon itself, but with surrogates
  • Geography – what is special about the world?

He then showed four case studies.

Case Study 1: Roman Itineraries

Roman “station lists” exist that tell us about the stations you would pass going from one place to another. There are a number of these lists, but we don't know why they were created or stored. He showed an image of the Vicarello Goblets.

Then he showed ways to put these itineraries on a map or to visualize them in other ways, like a topological map. A topological map can show closeness or betweenness. High betweenness is where you might get a bottleneck of travelers.

He showed that the data for itineraries has been put into databases for others to examine, like that of the Ancient World Mapping Center. When we look at them, the beginnings and ends of itineraries are often strange.

Leif commented that we need to be careful when drawing conclusions from small sets of data (like these lists) or from data combined from different sets (and lists). We need to remember that the texts or lists are not the phenomenon. Access to texts is also changing rapidly.

Case Study 2: Ptolemy’s Geography

[Ptolemy's world map from the Codex Vaticanus Urbinas Graecus 82 (from Wikipedia)]

Ptolemy was famous early on for astronomy and geography. His Geographia was one of the earliest tools for geography: Ptolemy wrote about projections and provided a catalogue of places with coordinates that could be projected. We don't have his maps, though we have maps created from his Geographia; to get at his, we try to project them from his theory and coordinates. (See Leif's paper on Lines, damned lines and statistics: unearthing structure in Ptolemy's Geographia (PDF).)

Leif showed a graph of the coordinate values that represented points in terms of time. Latitude is tied to the length of the longest day, but the relation isn't even: you get an uneven grid. The coordinates of boundary locations seem to fall on this irregular time-based grid network, as if the edges of regions were meant to fall on the hours rather than on the degree-based system. Alignment seems to have been important to Ptolemy; he seemed to have believed that alignment should be the case, like grain in wood.
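The relation behind that uneven grid is standard spherical astronomy: at the summer solstice the half-day arc H0 satisfies cos(H0) = -tan(phi) * tan(eps), where phi is the latitude and eps the obliquity of the ecliptic. A few lines of code (ours, for illustration; not from the talk) show why equal steps in the longest day give unequal steps in latitude:

```javascript
// Latitude as a function of longest-day length. At the summer solstice
// the half-day arc H0 satisfies cos(H0) = -tan(phi) * tan(eps).
const EPS = 23.85 * Math.PI / 180; // obliquity; Ptolemy's value was about 23°51'

function latitudeForLongestDay(hours) {
  const H0 = (hours / 2) * 15 * Math.PI / 180; // half the day, at 15° per hour
  const phi = Math.atan(-Math.cos(H0) / Math.tan(EPS));
  return phi * 180 / Math.PI;
}

for (const h of [12, 13, 14, 15, 16]) {
  console.log(`${h}h -> ${latitudeForLongestDay(h).toFixed(1)}°`);
}
// 12h -> 0.0°, 13h -> 16.4°, 14h -> 30.3°, 15h -> 40.9°, 16h -> 48.5°:
// equal one-hour steps, but latitude steps of 16.4°, 13.9°, 10.5°, 7.6°.
```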

Leif talked about how Ptolemy's data can tell us things about his theory, about the warp and weft of the data. It is also important to see how visual arguments help us understand numeric ones at scale: the visual projection shows us something about Ptolemy's coordinates (numbers).

Finally, Ptolemy is interested in mapping the world to the celestial, which means he has to know the time.

Case Study 3: The Peutinger Map

His third case study was the Peutinger map, a copy of a map that we are fairly sure was produced in late antiquity. It is a weird parchment scroll, long but not high. Is it a guide for carrying on a trip? Rome is somewhere in the middle, but not exactly in the middle, which has led people like Richard Talbert, in Rome's World, to argue that the long map was a presentation piece: if Rome should be exactly in the middle, we are missing three parchments on one side.

Leif runs a project called Pelagios Commons, which allows him to compare locations in the map to those from other itineraries (lists). The itineraries on the coastlines of the map seem similar to another work. He argued in support of Talbert, suggesting that the scroll may have been copied from a wall.

This led to the general warning that the way data is experienced and consumed now is often very different from how it was intended to be consumed, and that contextual evidence can come from other, unexpected places.

Case Study 4: the Pelagios Map Tiles

Pelagios aggregates information from early geographic documents. He showed how we need to be careful with flat maps: they can hide information. When you look at the terrain in 3D you can begin to understand some of the decisions behind the flat maps, like why you need to skirt certain mountains or wet areas (like the Po valley).

Conclusions

Contemporary datasets are just like the old ones. They are composites, each with a long and complex heritage. They are incomplete and degraded. They need to be read closely and distantly. They need to be challenged just as we would any other type of evidence or claim.

He ended by saying that the point of the humanities is not so much to find answers as to question the answers we have been given. We need to rethink what we thought we knew, and that includes visualizations.

Voyant Workshop
(19 June 2016, http://threedh.net/voyant-workshop/)

[Screenshot: default view of Voyant]

Geoffrey Rockwell ran a Voyant workshop for interested students and faculty on Thursday the 16th of June. The workshop used this script.

Laura Mandell: Visualizing Gender Complexity
(11 June 2016, http://threedh.net/laura-mandell-visualizing-gender-complexity/)

Laura started her talk by showing some simple visualizations and talking about the difficulties of reading graphs. She showed Artemis, searching for the words “circumstantial” and “information” over time, and then compared it to the Google Ngram Viewer. She talked about problems with the Ngram Viewer, like the shift in characters (from the long s, which OCR reads as f, to the modern s) around 1750; dirty OCR makes a difference too. She also showed a problem with Artemis having to do with a dataset dropping out: Artemis draws on a set of datasets, but not all of them cover all periods, so when one drops out you get a drop in results.

Even when you deal with relative frequency you can get what look like wild variations. These often are not indicative of something happening at the time but of a small sample size: the diachronic datasets often have far fewer books per year in the early centuries than later, so the results of searches can vary, and one book containing the search pattern can appear as a dramatic bump in the early years.
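A toy computation (with invented counts) makes the effect plain: the same relative-frequency formula that is stable over thousands of books per year swings wildly when only a handful survive.

```javascript
// Relative frequency of books matching a search pattern in a given year.
function relFreq(matches, booksInYear) {
  return matches / booksInYear;
}

// Early decade: only 8 books survive for the year; late decade: 4000.
console.log(relFreq(1, 8));     // 0.125   -> looks like a dramatic spike
console.log(relFreq(2, 8));     // 0.25    -> one more match doubles the "trend"
console.log(relFreq(50, 4000)); // 0.0125  -> stable against any single book
console.log(relFreq(51, 4000)); // 0.01275
```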

There are also problems with claims made about data. There is a “real world” from which we capture information (capta): the information is not given but captured. It is then manipulated to produce more and more surrogates, and the surrogates are used to produce visualizations in which you pick what you want users to see and how. All of these are acts of interpretation.

What we have are problems with tools and problems with data. We can see this in how women are represented in data mining, which is what this talk is about. She organized her talk around the steps that take us from the world to a visualization. Her central example was Matt Jockers' work on gender in Macroanalysis, which seemed to suggest that we can use text mining to differentiate between women's and men's writing.

World 2 Capta

She started with the problem of what data we have of women's writing. The data is not given by the “real” world: it is gathered, and the people gathering often have biased accounting systems. Decisions made about what counts as literature, or as high literature, affect the mining downstream.

We need to be able to ask “How is data structured and does it have problems?”

Women are absent from the archive; they are being erased. Laura thinks these erasures sustain the illusion of their absence.

Capta 2 Data or Data Munging

She then talked about the munging of data – how it is cleaned up and enriched. She talked about how Matt Jockers has presented differences in data munging.

The Algorithms

Then she talked about the algorithms, many of which have problems. Moritz Hardt arranged a conference on How Big Data is Unfair. Hardt showed how the algorithms can be biased.

Sara Hajian is another person who has talked about algorithmic unfairness; she has shown how ad targeting can show prestigious job ads preferentially to men. Preferential culture is unfair. “Why Big Data Needs Thick Data” is a paper that argues that we need both.

Laura insisted that the solution is not to give up on big data; rather, we need to keep working on big data to make it fair.

Data Manipulation to Visualization

Laura then shifted to problems with how data is manipulated and visualized to make arguments. She mentioned Jan Rybicki's article Vive la différence, which shows how ideas about writing like a man or like a woman don't work. Even Matt Jockers concludes that gender doesn't explain much; coherence, author, genre and decade do a much better job. That said, Jockers concluded that gender was a strong signal.

Visualizations then pick up on simplifications.

Lucy Suchman looks at systems thinking. Systems are a problem, but they are important as networks of relations. The articulation of relations in a system is performative, not a given. Gender characteristics can be exaggerated; that exaggeration can be the production of gender. There are various reasons why people choose to perform gender, and their sex may not matter.

There is also an act of gender in analyzing the data. “What I do is tame ambiguity.”

Calculative exactitude is not the same as precision. Computers don't make binary oppositions; people do. (See Ted Underwood, The Real Problem with Distant Reading.) Machine learning algorithms are good at teasing out loose family resemblances, not clear-cut differences, and one of the problems with gender is that it isn't binary. Feminists distinguished between sex and gender; we now have transgender, cisgender … and exaggerated gender.

Now that we look at writing on scales, we can look for a lot more than a binary.

Is complexity just one more politically correct thing we want to do? Mandell is working with Piper to see if they can use the texts themselves to generate genders.

It is also true that sometimes we don’t want complexity. Sometimes we want simple forceful graphics.

Special Problems posed by Visualizing Literary Objects

Laura's last move was to look at gender in literary texts and to discuss the problem of mining gender in literary texts with characters. To that end she invoked Blakey Vermeule's Why Do We Care About Literary Characters?, on Miss Bates and marriage in Austen's Emma.

Authors make things stand out in various ways, using repetition, which may throw off bag-of-words algorithms. Novels try to portray the stereotypical and then violate it: the “economy of character”.

Novels perform both bias and the analysis of bias; they can create and unmask biases. How is text mining going to track that?

In A Matter of Scale, Jockers talks about checking confirmation bias, to which Flanders replies that we all operate with community consensus.

The lone objective researcher is an old model; how can we analyze in a community that develops consensus using text mining? To do this, Laura Mandell believes we need capta open to examination, dissensus driving change, open examination of the algorithms, and scrutiny of how visualizations represent the capta.
