I was just reading an article on language and communication and thought I’d share some provocations on livecoding language design & evolution. Here, I’ve laid out a couple of nice principles that I think I’ll try to live by when (if??) I’m making new languages, and, more immediately, when I’m refining LivePrinter’s syntax.
First, an article on communication. Gärdenfors works with a pretty straightforward definition of “communication”:
“At its basics, communication is for solving problems of actions, i.e., coordination”
He then proposes a “division of communication into a hierarchical set of levels: (1) Instruction; (2) Coordination of common ground; and, (3) Coordination of meanings”
I’m really struck by this, because it’s what I’ve seen emerge from my project in both my personal experience and user workshops.
On the one hand, I often need to tell the computer (or printer, or other system) exactly what to do, i.e., send it precise instructions. In my code, this translates to sending 3D printer states that become movements of the printer’s motors. In audio, I’d argue this is the level where you actually play the note.
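As a rough illustration of this instruction level, here is a minimal sketch of how a high-level move call might be flattened into a concrete G-code command for the printer’s motors. The `move` function and its signature are hypothetical, not LivePrinter’s actual API; only the `G1` linear-move command and `F` feed-rate parameter are standard G-code.

```python
def move(x: float, y: float, speed: float) -> str:
    """Hypothetical helper: turn a high-level move into a G-code instruction.

    G1 is the standard linear-move command; F is the feed rate in mm/min.
    """
    return f"G1 X{x:.2f} Y{y:.2f} F{speed:.0f}"

# A single precise instruction, ready to stream to the printer:
print(move(10, 20, 1800))  # G1 X10.00 Y20.00 F1800
```

The point is that at this level the language is pure instruction: each call has an immediate, unambiguous physical effect.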
Other times, I need to work with the audience and other participants to coordinate where we are in the process of making something. I map this to “common ground”: declaring variables and setting program state (speed, etc.) for things that don’t have immediate effect. These are not instructions for action but preparation for action that needs to be agreed on prior to acting.
The syntax itself needs to be negotiated, from functions and other higher-level structures (during livecoding) up to changing the syntax itself (more likely done when re/developing the language). I take this as “coordination of meanings.”
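To make the three-level mapping concrete, here is a small sketch of how they might appear in a livecoding session. The `Printer` class, `moveto`, and `zigzag` are invented for illustration and are not LivePrinter’s real API; the G-code strings follow the standard `G1`/`F` conventions.

```python
class Printer:
    """Hypothetical printer session used to illustrate the three levels."""

    def __init__(self):
        self.speed = 1200  # mm/min; shared state = "common ground" (level 2)
        self.log = []      # instructions streamed so far

    def moveto(self, x, y):
        # Level 1: instruction -- an immediate, concrete action
        self.log.append(f"G1 X{x} Y{y} F{self.speed}")


p = Printer()
p.speed = 2400   # Level 2: coordinate common ground (no motion yet)
p.moveto(0, 10)  # Level 1: instruction with immediate effect


def zigzag(printer, n):
    # Level 3: coordination of meanings -- we negotiate a new word
    # ("zigzag") into the session's vocabulary during performance
    for i in range(n):
        printer.moveto(i, i % 2)


zigzag(p, 3)
```

Setting `p.speed` changes nothing physically by itself, which is exactly why it belongs to common ground: it is preparation that later instructions depend on, and defining `zigzag` mid-performance is the syntax-level negotiation happening live.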
It’s interesting (or maybe obvious, in retrospect) that this kind of process comes up in design. I recently read a PhD thesis from Tibor Balint, a NASA engineer who studied at the RCA and was interested in changing the culture of NASA’s project planning and execution using ideas from cybernetics and design research. He came up with this circular diagram, which I’d suggest is a good summary of the design stages for livecoding language design:
Lastly, Blackwell, Britton, Cox, Green, et al. wrote about the “cognitive dimensions of notations,” which Alan Blackwell has linked to livecoding in some of his past papers. The CDNs were mentioned by @yaxu in a previous post and were intended

to assist the designers of notational systems and information artifacts to evaluate their designs with respect to the impact that they will have on the users of those designs
“Notational systems” can be user interfaces, in the sense of the “cognitive ergonomics” of screen-based representations. In our case, they definitely apply to the hybrid text/editor systems used for livecoding.
It’s interesting to look at their “activities in constructing notations” and how similar they are to the previous examples I gave:
- search
- incrementation
- transcription
- modification
- exploratory design
- (added later:) exploratory understanding
Examples they gave of these activities:
“searching for a single piece of information (such as looking up a name in a telephone book) and incrementally understanding the content of the information structure expressed by a notation (such as reading a textbook)”

“incrementing an existing structure by adding new information, transcribing information from one notational form to another, modifying the structure, or exploring possible new information structures in exploratory design”
I’m curious to hear what other people think of these examples. Is this something people are aware of and use?