Geoffrey Rockwell – 3DH
http://threedh.net
Three-dimensional dynamic data visualisation and exploration for digital humanities research

Lauren F. Klein: Speculative Designs: Lessons from the Archive of Data Visualization
http://threedh.net/lauren-f-klein-speculative-designs-lessons-from-the-archive-of-data-visualization/
Sun, 03 Jul 2016 17:44:09 +0000
Peabody Visualization

Lauren Klein‘s paper looked at two 19th century pioneers of data visualization to see what we could learn from them. She asked,

What is the story we tell about the origins of modern data visualization?

What alternative histories emerge? What new forms might we imagine, and what new arguments might we make, if we told that story differently?

As an alternative history, Lauren looked at Elizabeth Peabody, who is often overlooked because her visualizations are seen as opaque. She compared her work to that of Playfair, who is generally considered the first in the canonical history of visualization. Lauren asked why visualizations need to be clear. Why not imagine visualizations that are opaque and learn from them? Her project is a digital recreation of Peabody’s thinking.

Elizabeth Palmer Peabody (1804-1894) ran a bookstore in Boston that acted as a salon for the transcendentalists. In 1856 she published a Chronological History of the United States for schools. She traveled around to promote her textbook with a roll of mural charts like domestic rugs (see above). Her charts were based on a Polish system for generating overviews of history.

For modern mavens of visualization like Edward Tufte these charts would not be clear and therefore not effective. By contrast, Lauren sees Peabody’s visualizations not as clarifying but as tools of process or knowledge production. You make knowledge rather than consume it when you make a chart. Clarity for those who didn’t make it is beside the point.

Peabody also sold workbooks for students at schools that used the textbook so that they could follow the lessons and rules to generate patterns. Hers is an argument for making, and this making has a historical context. Peabody rejected a single interpretation of history and imagined a visualization system that encourages different interpretations.

This led to one of the central points of the talk: the very idea of visualization is itself historically situated and should be examined. And that led to looking again at the canonical works of William Playfair.

She then showed us some of Playfair’s visualizations (from The Commercial and Political Atlas) that are much more readable and for that reason he is often seen as a pioneer in data visualization. Playfair is widely considered one of the first to abstract phenomena to data for visualization. Lauren pointed out how Playfair was not sure how his visualizations would be interpreted, but he did want them to make an impression that was “simple and complete.” He was good at this.

She then showed Lyra: A Visualization Design Environment, an open source alternative to Tableau. There are many Playfair emulators who use everything from Lyra to everyday tools like Excel to recreate Playfair’s charts; there is no shortage of tools with which one can try to emulate him.

What is interesting is that the designers of the professional tools made decisions about what visualizations should or could do. Thus we see a lot of line and bar charts and little resembling Peabody’s. The widely held belief is that visualization should condense and clarify.

Recreating Peabody

Lauren then shifted to describing an ongoing project to recreate some of Playfair’s and Peabody’s charts with different tools. They found the existing tools, like D3, hard to use. The tools all assume you start with data. This made her think about the status of data and its relationship to visualization.

She pointed out that when you use a tool for visualization you don’t worry about the shape of the curve; you let the tool do that. Playfair did, however, worry about it. He had to engrave the curves by hand and he played with the lines, trying to make them attractive to the eye.

Watt, for whom Playfair worked, suggested that he put the tables next to the charts. He did this in the first two editions of his book (and then removed the tables for the third). Even with those tables some of the graphs are hard to recreate: to make one of Playfair’s charts they had to use data from two different places in Playfair. Again, almost all tools, like D3, now depend on data. The dependence on data is structurally woven in, unlike in more artistic tools like Illustrator.
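For a sense of how structurally the data comes first in these tools, here is a minimal sketch of a Playfair-style time-series chart built from a small table with pandas and matplotlib. The trade figures are hypothetical placeholders, not Playfair’s numbers, and this is not the project’s actual recreation code.

```python
# A minimal sketch of the data-first workflow modern tools assume: a small
# table of (hypothetical) import/export figures plotted as a Playfair-style
# trade-balance chart. The numbers are placeholders, not Playfair's data.
import matplotlib.pyplot as plt
import pandas as pd

trade = pd.DataFrame({
    "year":    [1700, 1720, 1740, 1760, 1780],
    "imports": [70, 90, 110, 140, 180],   # hypothetical values
    "exports": [60, 100, 130, 160, 150],  # hypothetical values
})

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(trade["year"], trade["imports"], label="Imports")
ax.plot(trade["year"], trade["exports"], label="Exports")
# Shade the balance of trade, the visual argument Playfair engraved by hand.
ax.fill_between(trade["year"], trade["imports"], trade["exports"], alpha=0.3)
ax.set_xlabel("Year")
ax.set_ylabel("Value (hypothetical units)")
ax.legend()
plt.show()
```

The point is less the chart than the workflow: nothing gets drawn until the table exists, which is exactly the dependence on data that Playfair’s hand-engraved curves did not have.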

She then showed a detail of an engraving error and discussed how it could have come about, perhaps because Playfair was tired when making the copper plate. In the digital artefact we don’t see such errors – we only see the finished product. The digital masks the labour. Only in GitHub are changes – and so labour – saved and viewable.

Then she showed the prototypes her team has made including a “build” mode where you can construct a Peabody chart. They are now planning a large scale project using LEDs on fabric to create a physical prototype as that would be closer to the fabric charts Peabody made.
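To give a rough idea of what a “build” mode has to produce, here is a loose sketch of the Peabody grid idea: a century rendered as a grid of year-squares, each coloured by a category of event. This is an approximation for illustration – not Peabody’s actual rule system – and the handful of events below is hypothetical.

```python
# A loose approximation of the Peabody grid idea: a century becomes a 10 x 10
# grid of year-squares, each coloured by a category of event. Illustrative only
# (not Peabody's actual rule system); the events dictionary is hypothetical.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

categories = {"war": "red", "settlement": "blue", "treaty": "gold", None: "white"}
events = {1607: "settlement", 1620: "settlement", 1664: "war", 1689: "war"}  # hypothetical

fig, ax = plt.subplots(figsize=(5, 5))
start_year = 1600
for i in range(100):                      # one cell per year of the century
    year = start_year + i
    row, col = divmod(i, 10)
    colour = categories[events.get(year)]
    ax.add_patch(patches.Rectangle((col, 9 - row), 1, 1,
                                   facecolor=colour, edgecolor="grey"))
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_aspect("equal")
ax.axis("off")
ax.set_title("Peabody-style century grid (illustrative)")
plt.show()
```

Even this crude version makes the pedagogical point: the maker has to decide, year by year, what counts as an event and what category it belongs to, which is where the knowledge production happens.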

This returned her to labour, especially the labour of women. Peabody made copies of the charts for classes that adopted her book. Alas, none of these survived, but we do have evidence of the drudgery in her letters.

To Lauren the Peabody charts remind her of quilts and she showed examples of quilts from Louisiana that were a form of community knowledge constructing genealogies. Such quilts have only recently been recognized as knowledge comparable to the logocentric knowledge we normally respect.

Lauren closed with a speculative experiment. How would we think differently if Peabody’s charts had been adopted as the standard to be emulated rather than the line charts of Playfair? How might we know differently?

Her team’s recreations of both the Playfair and Peabody charts are just such a sort of speculation – understanding through making.

You can watch the video with slides here.

Stan Ruecker: The Digital Is Gravy
http://threedh.net/stan-ruecker-the-digital-is-gravy/
Sat, 25 Jun 2016 14:51:50 +0000
Timeline Design

Stan Ruecker gave the 3DH talk on the 23rd of June with the enigmatic title The Digital Is Gravy. He explained the title in reference to gravy being what gives flavour to the steak. In his case, he wanted to show us how physical prototyping can give substance (the steak) to the digital (the gravy).

Stan started with an example of a physical prototype that materializes Bubblelines, developed by Milena Radzikowska, who showed it at Congress 2016 in Calgary (see Materializing the Visual). He suggested that materializing a visualization slows down analysis and leads to other lines of thought.

At the IIT Institute of Design Stan is weaving physical prototyping into digital design projects. His main research goal is to find ways to encourage people to hold multiple opinions. He wants to build information systems that encourage the discovery of different perspectives and the presentation of multiple opinions on a phenomenon. The idea is to encourage reflective interpretation rather than dogmatism.

How prototypes build understanding

He listed some ways that prototyping can build understanding:

  • Build something to collect information
  • The prototype is itself a kind of evidence
  • Learning through making. You don’t even need to finish a prototype. “Fail early and fail often.”
  • Prototype is also a representation of the topic area

Why physicality is important

After returning to the materialized Bubblelines, he talked about why physicality matters:

  • Materialized prototypes take time differently, which can lead to other lines of thought
  • They can produce results that can be used for comparison (with other results)
  • They can engage physical intelligence – embodied experience can leverage different ways of knowing
  • They involve collaboration (over time) that involves community knowing
  • They encourage multiple perspectives from different people and different points of view

My experience with the pleasures of physical prototyping in a group reinforces the way the making of a prototype becomes a form of community knowing.

Timelines

He then talked about a project around timelines that builds on work Johanna Drucker did. He had gone through multiple prototypes, from digital to physical, as he tried to find ways to represent different types of time. He tried creating a 3D model in Unity but that didn’t really work for them. He now has a number of student designers who are physically modelling what a timeline could be like if you manipulated it physically, with the result then uploaded to the digital representation (the gravy).

Physical Qualitative Analysis

He then talked about how a multinational team is designing physical analytical tools. The idea is that people can analyze a text and model an understanding of it in a physical 3D space. It is grounded theory – you build up an emergent understanding. They tried creating a floating model like a Calder sculpture. They tried modelling technical support conversations. They used a wired up coat rack – hacking what they had at hand.

My first reaction is that doing this physically would be so slow. But that is the point: slow down and think by building. They tried a digital table and that was no fun, so they started making all sorts of physical prototypes.

I’m guessing it would be interesting to look at Ann Blair’s Too Much To Know where she talks about the history of note taking and physical ways of organizing information like excerpt cabinets.

Stan then talked about a successful line of prototypes with transparent panels that could be organized and joined, and on which ideas could be put with post-it notes. Doing this in a team encourages users to take different views on a subject, as the panels have two sides and can be joined to present even more.

Finally, they are now trying to bring these back to the digital so that once you have an arrangement of panels with notes you can digitize it and bring it into the computer. This also suggests the possibility of automatically generating the model on the computer from the text.

He commented on how he has no industry interested in the analysis of conversations.

And that was the end.

Leif Isaksen: Revisiting the Tangled Web: On Utility and Deception in the Geo-Humanities
http://threedh.net/leif-isaksen-revisiting-the-tangled-web-on-utility-and-deception-in-the-geo-humanities/
Sun, 19 Jun 2016 13:09:40 +0000

Leif Isaksen gave the lecture on the 16th of June. He has a background in history, computer science, philosophy and archaeology. He spends a lot of time thinking about how to represent complex spatial arguments to other people, and that has led him to ask how we can read (closely) historical depictions of geographic space. How can we approach someone else’s visualization when we have only the visualization? He then joked that a better title for his talk might be “Thoughts on Predicting the Ends of the World” where “ends” can mean goals in representing the world.

Some of the things we have to think about when reading historical visualizations include:

  • Classification – how is the world classified when the visualization was drawn up?
  • Derived vs manually produced data – how did the data get to the cartographer and, for that matter, how did the map get to us?
  • Graphic vs. textual representations – we are continually transforming representations from visual to textual and back – what happens in the transcoding?
  • Epistemology – how do we know what we think we know?
  • Time and change – how are time and change collapsed in representations of space?
  • Completeness – we never have complete information, but sometimes we think we do
  • Data proxies – we are not interacting with the phenomenon itself, but with surrogates
  • Geography – what is special about the world?

He then showed 4 case studies.

Case Study 1: Roman Itineraries

Roman “station lists” exist that tell us about the stations you would pass through travelling from one place to another. There are a number of these lists, but we don’t know why they were created or stored. He showed an image of the Vicarello Goblets.

Then he showed ways to put these itineraries on a map or to visualize them in other ways, such as a topological map. A topological map can show closeness or betweenness. High betweenness is where you might get a bottleneck of travelers.
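As a sketch of what a topological reading of the itineraries involves, the following computes betweenness centrality for a tiny station network with NetworkX. The routes are hypothetical placeholders rather than data from the Roman lists.

```python
# A minimal sketch of computing betweenness for an itinerary network with
# NetworkX. The stations and routes below are hypothetical placeholders,
# not data from the Roman itineraries.
import networkx as nx

G = nx.Graph()
itineraries = [
    ["Roma", "Capua", "Beneventum", "Brundisium"],   # hypothetical routes
    ["Roma", "Capua", "Neapolis"],
    ["Beneventum", "Aeclanum", "Brundisium"],
]
for route in itineraries:
    # Each consecutive pair of stations on a route becomes an edge.
    G.add_edges_from(zip(route, route[1:]))

# High betweenness marks stations that many shortest paths pass through --
# the potential bottlenecks for travellers.
for station, score in sorted(nx.betweenness_centrality(G).items(),
                             key=lambda kv: -kv[1]):
    print(f"{station:12s} {score:.2f}")
```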

He showed that the data for itineraries has been put into databases for others to look at, like that of the Ancient World Mapping Center. When we look at them, the beginnings and ends of itineraries are often strange.

Leif commented that we need to be careful when drawing conclusions from small sets of data (like these lists) or combined data from different sets (and lists). We need to remember that the texts or lists are not the phenomenon. Access to texts is also changing rapidly.

Case Study 2: Ptolemy’s Geography

Ptolemy’s world map from Codex Vaticanus Urbinas Graecus 82 (from Wikipedia)

Ptolemy was famous early on for astronomy and geography. His Geographia was one of the earliest tools for geography. Ptolemy wrote about projections and provided a catalogue of places with coordinates that could be projected. We don’t have his own maps, though we have maps later created from his Geographia; to get at his, we try to project them from his theory and coordinates. (See Leif’s paper Lines, damned lines and statistics: unearthing structure in Ptolemy’s Geographia (PDF).)

Leif showed a graph of the coordinate values that represented points in terms of time. Latitude is tied to the length of the longest day, but the relationship isn’t even: you get an uneven grid. The coordinates of boundary locations seem to fall on this irregular time-based grid. It is as if the edges of regions are meant to fall on the hours rather than on the degree-based system. Alignment looks to have been important to him – he seems to have believed that places should align, like grain in wood.
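To see why a grid of longest-day hours is uneven in latitude, here is a small sketch using the standard spherical-astronomy relation between the longest day and latitude. This is my illustration, not Leif’s analysis; the obliquity value is roughly the one Ptolemy used.

```python
# A small sketch of why a grid of longest-day hours is uneven in latitude.
# Uses the standard relation between latitude and the length of the longest
# day (an illustration of the idea, not Leif's code).
import math

OBLIQUITY = math.radians(23.85)   # roughly Ptolemy's value (~23 deg 51')

def latitude_for_longest_day(hours: float) -> float:
    """Latitude (degrees) whose longest day lasts `hours` (12 <= hours < 24)."""
    # Hour angle of sunset at the summer solstice: 15 degrees per hour.
    h0 = math.radians(7.5 * hours)
    return math.degrees(math.atan(-math.cos(h0) / math.tan(OBLIQUITY)))

# Whole-hour steps give visibly unequal steps in latitude -- the uneven grid.
for hours in range(12, 21):
    print(f"longest day {hours:2d}h -> latitude {latitude_for_longest_day(hours):5.1f} deg")
```

Running it shows the first hour above twelve spans about sixteen degrees of latitude while later hours span far fewer, which is the unevenness visible in Ptolemy’s coordinates.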

Leif talked about how Ptolemy’s data can tell us things about his theory. He talked about the warp and weft of the data. It is also important to see how visual arguments help us understand numeric ones at scale. The visual projection shows us something about Ptolemy’s coordinates (numbers).

Finally, Ptolemy is interested in mapping the world to the celestial which means he has to know the time.

Case Study 3: The Peutinger Map

The Peutinger map was his third case study. It is a copy of a map that we are fairly sure was produced in late antiquity. It is a weird parchment scroll: very long but not high. Is it a guide to carry on a trip? Rome is somewhere in the middle, but not exactly in the middle, which has led people like Richard Talbert in Rome’s World to argue that the long map was a presentation piece, that Rome should be in the exact middle, and that we are therefore missing three parchments on one side.

Leif runs a project called Pelagios Commons which allows him to compare locations on the map to those from other itineraries (lists). The itineraries along the coastlines of the map seem similar to those in another work. He argued in support of Talbert, suggesting that the scroll may have been copied from a wall map.

This led to the general warning that the way data is experienced and consumed now is often very different from how it was intended to be consumed, and that contextual evidence can come from unexpected places.

Case Study 4: the Pelagios Map Tiles

Pelagios is aggregating information from early geographic documents. He showed how we need to be careful with flat maps – they can hide information. When you look at the terrain in 3D you can begin to understand some of the decisions in the flat map – like why a route skirts certain mountains or wet areas (like the Po valley).

Conclusions

Contemporary datasets are just like the old ones. They are composites, each with a long and complex heritage. They are incomplete and degraded. They need to be read closely and distantly. They need to be challenged just as we would any other type of evidence or claim.

He ended by saying that the point of the humanities is not to find answers so much as to question the answers we have been given. We need to rethink what we thought we knew, and that includes visualizations.

Voyant Workshop
http://threedh.net/voyant-workshop/
Sun, 19 Jun 2016 12:36:31 +0000
Default View of Voyant

Geoffrey Rockwell ran a Voyant workshop for interested students and faculty on Thursday the 16th of June. The workshop used this script.

Laura Mandell: Visualizing Gender Complexity
http://threedh.net/laura-mandell-visualizing-gender-complexity/
Sat, 11 Jun 2016 17:29:34 +0000

Laura started her talk by showing some simple visualizations and talking about the difficulties of reading graphs. She showed Artemis, searching for the words “circumstantial” and “information” over time. She then compared it to the Google NGram viewer. She talked about the problems with the NGram viewer, like the shift in characters around 1750 (the long “s” that OCR reads as “f”). Dirty OCR makes a difference too. She showed a problem with Artemis having to do with a dataset dropping out: Artemis draws on a set of datasets, but not all of them cover all periods, so when one drops out you get a drop in results.

Even when you deal with relative frequency you can get what look like wild variations. These often are not indicative of something in the time but of a small sample size. The diachronic datasets often have far fewer books per year in the early centuries than later, so the results of searches can vary wildly. A single book containing the search pattern can appear as a dramatic bump in the early years.
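A minimal sketch of that small-sample effect, with hypothetical counts: the same single occurrence of a term produces a huge relative-frequency spike when the yearly corpus is tiny.

```python
# A minimal sketch of the small-sample effect behind "wild variations":
# one book containing a term is a large relative frequency when the yearly
# corpus is tiny. All counts below are hypothetical.
corpus_sizes = {1650: 20, 1700: 80, 1750: 400, 1800: 2500}   # books per year (hypothetical)
term_hits    = {1650: 1,  1700: 1,  1750: 5,   1800: 30}     # books containing the term

for year in sorted(corpus_sizes):
    rel = term_hits[year] / corpus_sizes[year]
    note = "<- one book looks like a spike" if corpus_sizes[year] < 50 else ""
    print(f"{year}: {term_hits[year]:3d} / {corpus_sizes[year]:4d} books = {rel:.3%} {note}")
```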

There are also problems with claims made about data. There is a “real world” from which we then capture (capta) information. That information is not given but captured. It is then manipulated to produce more and more surrogates. The surrogates are then used to produce visualizations where you pick what you want users to see and how. All of these are acts of interpretation.

What we have are problems with tools and problems with data. We can see this in how women are represented in datamining, which is what this talk was about. She organized her talk around the steps that get us from the world to a visualization. Her central example was Matt Jockers’s work on gender in Macroanalysis, which seemed to suggest that we can use text mining to differentiate between women’s and men’s writing.

World 2 Capta

She started with the problem of what data we have of women’s writing. The data is not given by the “real” world. It is gathered, and the people gathering it often have biased accounting systems. Decisions made about what is literature or what is high literature affect the mining downstream.

We need to be able to ask “How is data structured and does it have problems?”

Women are absent in the archive – they are getting erased. Laura thinks these erasures sustain the illusion.

Capta 2 Data or Data Munging

She then talked about the munging of data – how it is cleaned up and enriched. She talked about how Matt Jockers has presented differences in data munging.

The Algorithms

Then she talked about the algorithms, many of which have problems. Moritz Hardt arranged a conference on How Big Data is Unfair. Hardt showed how the algorithms can be biased.

Sara Hajian is another person who has talked about algorithmic unfairness. She has shown how ad-targeting algorithms show prestigious job ads to men. Preferential culture is unfair. Why Big Data Needs Thick Data is a paper that argues that we need both.

Laura insisted that the solution is not to give up on big data; we need to keep working on big data to make it fair.

Data Manipulation to Visualization

Laura then shifted to problems with how data is manipulated and visualized to make arguments. She mentioned Jan Rybicki’s article Vive la différence that shows how ideas about writing like a man and like a woman don’t work. Even Matt Jockers concludes that gender doesn’t explain much. Coherence, author, genre, decade do a much better job. That said, Matt concluded that gender was a strong signal.

Visualizations then pick up on simplifications.

Lucy Suchman looks at systems thinking. Systems are a problem, but they are important as networks of relations. The articulation of relations in a system is performative, not a given. Gender characteristics can be exaggerated – that can be the production of gender. There are various reasons why people choose to perform gender, and their sex may not matter.

There is also an act of gender in analyzing the data. “What I do is tame ambiguity.”

Calculative exactitude is not the same as precision. Computers don’t make binary oppositions; people do. (See Ted Underwood, The Real Problem with Distant Reading.) Machine learning algorithms are good at teasing out loose family resemblances, not clear-cut differences, and one of the problems with gender is that it isn’t binary. Feminists distinguished between sex and gender. We now have transgender, cisgender … and exaggerated gender.

Now that we look at writing on scales, we can look for a lot more than a binary.
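As a sketch of what looking for more than a binary can mean computationally, here is a minimal bag-of-words classifier whose output is a continuous probability rather than a yes/no label. The toy snippets and labels are hypothetical, and this is not Jockers’s or Piper’s actual pipeline.

```python
# A minimal sketch of a bag-of-words classifier whose output can be read as a
# scale rather than a binary label. The toy texts and labels are hypothetical;
# this is not Jockers's or Piper's actual pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [                      # hypothetical training snippets
    "the drawing room was quiet and the letters lay unread",
    "she walked to the village to call upon her sister",
    "the regiment marched at dawn and the guns were heavy",
    "he rode hard across the moor with the dispatches",
]
train_labels = ["F", "F", "M", "M"]  # hypothetical author-gender labels

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# predict_proba gives a continuous score, not a verdict: a text can sit
# anywhere along the scale rather than in one of two boxes.
test = ["the letters from the regiment lay unread in the drawing room"]
for label, p in zip(model.classes_, model.predict_proba(test)[0]):
    print(f"P({label}) = {p:.2f}")
```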

Is complexity just one more politically correct thing we want to do? Mandell is working with Piper to see if they can use the texts themselves to generate genders.

It is also true that sometimes we don’t want complexity. Sometimes we want simple forceful graphics.

Special Problems posed by Visualizing Literary Objects

Laura’s last move was to look at gender in literary texts and discuss the problem of mining gender in texts with characters. To that end she invoked Blakey Vermeule’s Why Do We Care About Literary Characters? on Miss Bates and marriage in Austen’s Emma.

Authors make things stand out in various ways using repetition, which may throw off bag-of-words algorithms. Novels try to portray the stereotypical and then violate it – “the economy of character.”

Novels perform both bias and the analysis of bias – they can create and unmask biases. How is text mining going to track that?

In A Matter of Scale, Jockers talks about checking confirmation bias, to which Flanders replies that we all operate with community consensus.

The lone objective researcher is an old model – how can we analyze, using text mining, within a community that develops consensus? To do this Laura Mandell believes we need capta that is open to examination, dissensus driving change, open examination of the algorithms, and attention to how visualizations represent the capta.

Johanna Drucker: Visualizing Interpretation: A Report on 3DH
http://threedh.net/johanna-drucker-visualizing-interpretation-a-report-on-3dh/
Tue, 07 Jun 2016 17:59:12 +0000

Johanna Drucker gave a special lecture on June 6th that reported on the state of the project and where we are going. She started by giving some history to the 3DH project. We went from the question “can we create the next generation of visualizations in the digital humanities?” to a more nuanced goal:

Can we augment current visualizations to better serve humanists and, at the same time, make humanistic methods into systematic visualizations that are useful across disciplines outside the humanities?

She commented that there is no lack of visualizations, but most of them have their origins in the sciences. Further, evidence and argument get collapsed in visualization, something we want to tease apart. In doing this, can we create a set of visualization conventions that make humanities methods useful to other disciplines? Some of the things important to the humanities that we want to be able to show as evidence include: partial evidence, situated knowledge, and complex and non-singular interpretations.

Project development is part of what we have been focusing on. We have had to ask ourselves “what is the problem?” We had to break the problem down, agree on practices, frame the project, and sketch ideas.

Johanna talked about how we ran a charrette on what was outside the frame. She showed some of the designs. Now we have a set of design challenges for inside the frame. One principle we are working with is that a visualization can’t be only data driven. There has to be a dialogue between the graphical display and the data. Thus we can have visualization-driven data as well as data-driven visualization.

We broke the tasks down to:

  • Survey visualization types
  • Study pictorial conventions
  • Create graphical activators
  • Propose some epistemological / hermeneutical dimensions
  • Use three dimensionality
  • Apply to cases
  • Consider generalizability

Visualization Types

Johanna then went through the typology we are working with:

  • Facsimiles are visual
  • XML markup also has visual features, as do word processing views
  • Charts, Graphs, Maps, Timelines
  • 3D renderings, Augmented realities, Simulations
  • Imaging techniques out of material sciences

Graphical Activators

She talked about graphical primitives and how we need to be systematic about the graphical and interactive features we can play with. What can we do with different primitives? What would blurring mean? What happens when we add animation/movement, interactivity, sound?

With all these graphical features, then the question is how can we combine the activators with interpretative principles.

Using the 3rd Dimension as Interpretation

She then talked about how we can use additional dimensions to add interpretation. She showed some rich examples of how a chart could be sliced and projected. We can distort to produce perspectives. The graphical manipulation lets us engage with the data visually. You can do anamorphic mapping that lets us see the data differently.

She then talked about perspectivization – when you add a perspective to the points. You dimensionalize the data. You add people to the points. Can we use iconography?

She showed ideas for different problems like the hairball problem. She showed ideas for how visualizations that are linked can affect each other. She showed ideas for the too much Twitter problem.

She talked about the problem of how to connect different ideological taxonomies of time, like biblical and scientific time, without collapsing them. How can we show the points of contact without reducing one to the other?

She then talked about the issue of generalizability. Can we generalize the ideas she has been working with? How can we humanize the presentation of data? Can we deconstruct visualizations?

Some of the questions and discussion after her talk touched on:

  • To what extent are visualizations culturally specific?
  • Does adding more graphical features not just add more of the same? Does it really challenge the visualization or does it add humanistic authority?
  • How is adding more dimensions a critique of display rather than just more display?
  • We talked about the time of making the visualization and the time of the unfolding of the visualization.
  • We talked about how time can represent something or model something.
  • Can we imagine games of making visualizations? How does the making of the visualization constitute a visualization? Can a way of making visualizations be more useful?
  • How can any visualization have the APIs to be connected to physical controls and physical materializations?

Materializing the Visual
http://threedh.net/283-2/
Tue, 07 Jun 2016 17:24:58 +0000
Materialization of Bubblelines

The Canadian Society for Digital Humanities 2016 conference was held this year in Calgary, Alberta. Milena Radzikowska presented a paper on “Materializing Text Analytical Experiences: Taking Bubblelines Literally” in which she showed a physical system designed to materialize a Bubblelines visualization. (Bubblelines is a tool in the Voyant suite of tools.) In her talk she demonstrated the materialization by filling tubes with different coloured sand for the words “open” and “free” as they appeared in a text. She talked about how the materialization changed her sense of time and visualization. Read more about the conference in Geoffrey Rockwell’s conference report.
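For a sense of the data such a materialization works from, here is a minimal sketch that counts chosen words in each segment of a text – the per-segment counts a Bubblelines display (or a tube of sand) would represent. The sample sentence is a placeholder, and this is not Voyant’s actual code.

```python
# A minimal sketch of the data behind a Bubblelines-style display (or its
# sand-filled materialization): counts of chosen words in each segment of a
# text. The sample text is a placeholder; this is not Voyant's actual code.
import re

def bubbleline_counts(text, words, segments=5):
    tokens = re.findall(r"[a-z']+", text.lower())
    size = max(1, len(tokens) // segments)
    counts = {w: [] for w in words}
    for i in range(segments):
        # The last segment takes whatever tokens remain.
        chunk = tokens[i * size:(i + 1) * size] if i < segments - 1 else tokens[i * size:]
        for w in words:
            counts[w].append(chunk.count(w))   # bubble size = count in this segment
    return counts

sample = "free software should stay free and open so that open culture stays open"  # placeholder
print(bubbleline_counts(sample, ["open", "free"], segments=3))
```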

Mark Grimshaw: Rethinking Sound
http://threedh.net/mark-grimshaw-rethinking-sound/
Fri, 27 May 2016 21:25:58 +0000

Mark Grimshaw from Aalborg University, Denmark gave the lecture yesterday (May 26th) on Rethinking Sound. (See video of talk here.)

Grimshaw has been interested in game sound for some time and in how sound helps create an immersive experience. He is also interested in how games sonify others in a multi-player game (how you hear other players), and in virtual reality and how sound can be used to give verisimilitude.

Why rethink sound? He started by discussing problems with definitions of sound and trying to redefine sound to understand sonic virtuality. The standard definition is that sound is a sound wave. The problem is that there are really two definitions:

  • sound is an oscillation of pressure or sound wave, or
  • sound is an auditory sensation produced by such waves (both from the ANSI documentation)

He mentioned another definition that I rather liked, that sound is “a mechanical disturbance in the medium.” This is from an acoustics textbook: Howard, D. M., & Angus, J. (1996). Acoustics and psychoacoustics. Oxford: Focal Press.

Not all sound waves produce an auditory sensation (ultrasound, for example) and not all auditory sensations are created by sound waves (e.g. tinnitus). For that matter, sound also gets defined as that which happens in the brain. The paradox is:

  • Not all sound (waves) evokes a sound (sensation), and
  • Not all sound (sensations) is evoked by sound (waves).

He then talked about the McGurk effect when what we see overrides what we hear. Mouth movements cause us to hear differently so as to maintain a coherent version of the world. Perhaps sound waves are not all there is to sound. See https://www.youtube.com/watch?v=G-lN8vWm3m0

He provided some interesting examples of sounds that we interpreted differently.

What is interesting is that we often define a sound by its source, as in “that sound is a bird.” This shows how our everyday sense of sound has nothing to do with waves.

Then there is the question of “where is the sound?” Does the sound come from the ventriloquist or from their dummy? The effect is known as synchresis (see here). We have the ability to keep more than one mapping system of sound sources. We locate sound very well, but in some cases, like cinema, we locate it in the world of the film. We can separate the location of the heard sound from its actual location.

Some other definitions include:

  • Democritus said sound is a stream of particles emitted by a thing (a phonon)
  • Sound is an event (Aristotle)
  • Sound is the property of an object
  • Sounds are secondary objects and pure events
  • Sound is cochlear (involving sound waves) and non-cochlear (synaesthesia)

Needless to say, the language of science around sound is very different from our everyday language.

His definition is for “sonic virtuality”:

Sound is an emergent perception arising primarily in the auditory cortex and that is formed through spatio-temporal processes in an embodied system.

A sonic aggregate is all the things that go into forming the perception of sound (like what you see.) Some is exosonus (what is outside) and some is endosonus (non sensuous components).

He talked about how for some animals there might be a sense of “smound” which is some combination of sound and smell.

The emergence of sound can determine epistemic perspective, though in some cases the perspective forms the sound. Imagined sound is just as much sound as exosonus sound.

In sonic virtuality, sound localization is a cognitive offloading of the location of sound onto the world. We tend to put sound where it makes sense for it to be.

Cognitively, what seems to happen is that we form hypotheses about the sonic aggregate and eventually select an emergent version of the sound. This is embodied cognition that is time pressured – i.e. pressured to decide quickly. We don’t know for sure.

Immersion and Presence

He then shifted to talking about games and the difference between immersion and presence. Immersion is supposedly objective – how close to reality is the simulation of sensory stimuli. Presence seems more subjective.

The way we locate sound out in the world is what leads to differentiation of self and not-self and that leads to sense of presence. Sound tells us about space.

If we want a better sense of presence in virtual reality – is increasing the simulation the way to go? VR systems try to deliver discrete sensory stimuli of greater and greater verisimilitude.

RV or real virtuality suggests a different approach – that of an appropriate level of stimulation that lets the brain make sense of the virtual space. You want the brain to actively engage.

If we model sound as perception then can we extract it? Can we extract sound? This is an area called neural decoding. (Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). “Reconstructing visual experiences from brain activity evoked by natural movies.” Current Biology, 21, 1641–1646.) It seems they can now reconstruct what someone saw from the brain imaging.

Sonification

At the end he talked about sonification which connects to the 3DH project. Sonification is the audio equivalent to visualization. What is the value of representing data with sound? He gave some examples of sonification:

  • A geiger counter is a sonification of radiation
  • In radio astronomy sonification is used to help finding interesting or anomalous moments in radio waves. We can’t stop listening the way we can look away.
  • PEEP (PDF) is a tool that sonifies network activity.

If we can transform large amounts of data into sound, what would we do? Each sensory modality has some things it is good at and some it is not so good at.

  • Sound is good at time.
  • Ambiguity is hard to visualize and often left off. Sound might be a way to keep ambiguity.
  • Sounds can have meaning that could be used (but sound waves do not.)

Are there some sound primitives? Yes: there are sound primitives that seem to be evolutionarily encoded in us, like the sound of something rapidly approaching. Our brains seem to be attuned to certain sound wave attacks. What are the sound primitives that we can manipulate? (A small sonification sketch using these primitives follows the list below.)

  • Attack
  • Loudness
  • Tone(s)
  • Texture (timbre)
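
As a sketch of how such primitives can carry data, here is a minimal sonification that maps a hypothetical series of values onto tone (pitch) and loudness, writing a sequence of sine tones to a WAV file. It is an illustration only, not one of the tools mentioned in the talk.

```python
# A minimal sonification sketch using two of the primitives above, tone and
# loudness: a hypothetical data series becomes a sequence of sine tones in a
# WAV file. Illustrative only -- not one of the tools Grimshaw mentioned.
import math
import struct
import wave

data = [3, 5, 2, 8, 13, 7, 1, 9]          # hypothetical values to sonify
rate, duration = 44100, 0.25              # samples per second, seconds per value
lo, hi = min(data), max(data)

samples = []
for value in data:
    norm = (value - lo) / (hi - lo) if hi > lo else 0.5
    freq = 220 + norm * 660               # tone: map value to 220-880 Hz
    amp = 0.2 + norm * 0.6                # loudness: louder for larger values
    for n in range(int(rate * duration)):
        samples.append(amp * math.sin(2 * math.pi * freq * n / rate))

with wave.open("sonification.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                     # 16-bit samples
    f.setframerate(rate)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```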

Discussion

Some of the points that came up during discussion include:

Are there ways that sound can contradict vision, as in a Jacques Tati movie like Playtime? It turns out that in most situations vision dominates hearing, but in others hearing can override vision. It seems that hearing is very sensitive to temporal changes, such as changes in rhythm.

Are there ways of understanding the cultural and social in interpreting of sound?

Note

This was updated with corrections from Grimshaw.

Sustainability of Visualizations
http://threedh.net/sustainability-of-visualizations/
Thu, 26 May 2016 14:21:34 +0000

Elaborate visual simulations for cultural heritage studies have a sustainability problem. As Erik Champion told us, they are often broken before the project even ends. For that matter, why do most museum interactive exhibits break before I get a chance to try them? If visualizations are to develop as a form of scholarly communication we need to imagine how to build visualizations that are sustainable.

Sustainability of digital scholarship has been addressed by organizations like Ithaka S+R in their Sustaining Our Digital Future: Institutional Strategies for Digital Content (PDF) and by scholars like Jerome McGann in Sustainability: The Elephant in the Room. The Ithaka report rightly points out all the human and technical infrastructure that supports projects but is overlooked and not budgeted for by them. Projects usually get funding to be created, but not maintenance funding, and there are no strategies to develop units like libraries so that they can sustain projects (as opposed to just preserving the data). McGann points out how the third leg of scholarship, the scholarly publishers, is struggling, and how we need to imagine what a healthy scholarly publishing industry would look like in the digital age.

How can we imagine infrastructure for visualization that is sustainable, not only over the course of a project, but over the time that you share an insight?

Videos available of the lectures
http://threedh.net/videos-available-of-the-lectures/
Wed, 18 May 2016 20:41:45 +0000

Did you know that the 3DH lectures are available online? Here are the recent lectures:
