Algorithms and language

I’m starting this topic as I’m reading “Language and the Rise of the Algorithm” by Jeffrey M. Binder, and would love a space to discuss it. I’ll be posting notes and thoughts as I read. I’m interested in not only the ways we have conceptualized algorithms, but also how those notions have changed over time, and how those changes may relate to various types of technology surrounding language and patterning systems (especially as relates to textiles).

This article that I came across may be of interest:

This is super interesting! I wonder if there’s a way to do a more longitudinal study, given the (typically) short-term nature of the Antarctic voyages. I LOVE the “martian” dialect observation–hilarious.

Okay. So, having read the introduction, I’m struggling to differentiate the various meanings ascribed to the word “algorithm” over the centuries, but at the same time I get the gist: at various points in the past, algorithms were more overtly inclusive of language generally, as there was more accepted intermingling between the sciences and the humanities (if they were even considered separate at all), whereas the modern understanding of algorithm is popularly defined within mathematics and computational procedures, considered distinct from language and from various cultural interpretations or contexts. The modern-era algorithm could, in some way, be conceptually separate from meaning, or be a mere vessel for a meaning to be ascribed.

From p. 7: “The central claim of this book is that the modern idea of algorithm depends on a particular sort of subject-object divide: the separation of disciplinary standards of rigor from the complex array of cultural, linguistic, and pedagogical factors that go into making systems comprehensible to people.”

I’m having a hard time making the distinction because it (ironically) feels like a case of semantic differences? Or maybe even an artifice–to consider math/science/procedures as separate from humanities or cultural nuances. While I understand that 2 + 2 = 4 is the same here as it is in, say, Japan, does this computation not also depend upon the fact that 2 means two as a quantity, not six? Essentially, that the meaning has been ascribed, we learned it as children, and at some point in the distant past, someone chose to use the symbol “2” to describe it. 2 is “two” is “二” (Ni, Japanese word for two). And while I get that the symbol “2” is different than two things (one symbol/concept, the other two distinct entities), and that “2” can represent two of anything (even non-identical things), 2 is meaningless w/o the connection to language or a description of a quantity. 2 could have been 6, but it’s not. In order to understand “2” it must be defined, otherwise it’s just a random squiggle. Also, written language IS a system of abstract symbols to which we ascribe meaning or sound, so the distinction between mathematical symbols and linguistic symbols, to me, just isn’t so neat and tidy.
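
To make this a bit more concrete, here is a tiny Python sketch (entirely my own toy example, not something from the book): the arithmetic only “works” once each glyph has been mapped onto a quantity, and any squiggle would do, so long as the mapping is shared.

```python
# A toy illustration: numerals are glyphs until a shared convention maps them
# to quantities. Any glyph would do, as long as the mapping is agreed upon.
QUANTITY = {
    "2": 2,    # Hindu-Arabic numeral
    "二": 2,   # Japanese/Chinese character for two
    "@@": 2,   # an arbitrary squiggle we could have agreed on instead
}

def add(a: str, b: str) -> int:
    """Computation only proceeds once the symbols have been given meanings."""
    return QUANTITY[a] + QUANTITY[b]

print(add("2", "2"))    # 4
print(add("二", "二"))  # 4, the same quantity written in a different system
print(add("@@", "@@"))  # 4, the squiggle works too, once it has been defined
```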

I guess what I’m trying to say is that language and algorithm, to me, are inextricable. And that even though the meaning and application of the word “algorithm” has changed over time–and likely will again, given the nature of language and humans-- there is no simple, clear way to extricate one from the other.

And perhaps my struggle, in some part, arises from my very minimal education in computation and computational theories, and distrust of attempts to distill big ideas and concepts into simple terms, or “universal” languages/description of any kind across all cultures. There are always variations and cultural nuances or understandings that are present (imo), and those give us a richer understanding of things and the world.

Which is why this book interests me–Binder is contextualizing our modern conception of algorithm along a wider arc of history and culture, especially in light of contemporary shifts happening in computing technology (specifically “AI” and LLMs like GPT-3, which utilize written language to inform output), complicating the modern conceptual division.

Thoughts?

I think it is possible to do a longitudinal study for this. Some methods could be looking at the types of words used in a forum, or, if we are looking at how language and accents/dialects form, seeing how a conversation between a group of friends has changed over the years.

Also, lyrics from certain genres can aid in this.
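
For example, here is a rough Python sketch (with made-up data) of the kind of word-usage-over-time comparison I mean, counting which words show up each year in a set of messages:

```python
# A rough sketch of a longitudinal look at word use: count the words used
# per year in a set of (year, message) pairs, e.g. exported from a forum or
# a group chat. The data below is made up purely for illustration.
from collections import Counter, defaultdict

messages = [
    (2021, "that is so cool"),
    (2022, "that slaps honestly"),
    (2023, "that is lowkey fire honestly"),
]

words_by_year = defaultdict(Counter)
for year, text in messages:
    words_by_year[year].update(text.lower().split())

for year in sorted(words_by_year):
    print(year, words_by_year[year].most_common(3))
```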

Thanks @hschultestudio for setting this text for us! 🙂 I just read the introduction too and enjoyed it a lot.

I read it after watching this webinar from Amy Ko, where she introduces her recent work on ‘wordplay’. In making it, she looks to build principles of social justice into programming language design. So it was surprising to see her approach as having echoes of Leibniz, Condorcet, Boole, etc., as Binder describes them, looking to resolve political and cultural issues with symbols.
https://www.washington.edu/doit/webinars/?webinar=wordplay

I also struggle to separate ‘algorithm’ from ‘language’, and think this computer-scientific and linguistic effort to separate them really holds computing back. As a live coder who likes to use programming languages to improvise music by changing code while it generates sound, I experience no barrier between algorithm, language, communication, culture, etc. But this view and experience of code has made it difficult for me to relate to many computer scientists, who generally have a very different view of what programming languages are for. They sometimes even get angry at the suggestion that computer programming can be used as a way to think about, feel, and share things on-the-fly.
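
For anyone unfamiliar with live coding, here is a very rough sketch in plain Python (not any real live-coding system) of the basic move: a pattern function keeps being performed in a loop while the coder redefines it, so the music and the code-as-communication change together.

```python
# A plain-Python sketch of the live-coding idea: a pattern function keeps
# being "performed" in a loop, and the coder redefines it. No actual audio is
# produced here; a real system would send each event to a synthesizer.
import time

def pattern(beat):
    # The "score" as code: which sound to trigger on each beat.
    return ["kick", "snare"][beat % 2]

def perform(beats=8, step=0.25):
    for beat in range(beats):
        print(beat, pattern(beat))  # stand-in for triggering a sound
        time.sleep(step)

perform()

# Mid-set, the coder edits and re-evaluates the pattern; the next cycle of
# the performance picks up the new definition.
def pattern(beat):
    return ["kick", "kick", "hat", "snare"][beat % 4]

perform()
```

In an actual live-coding environment the redefinition happens concurrently, while the sound keeps playing, rather than between runs as in this toy.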

It is really difficult to see ‘algorithm’ outside of a contemporary frame, but it’s an interesting thought that machine learning and ChatGPT can potentially help us return to previous ways of thinking about algorithms. I’m looking forward to reading on!

I thought I’d pull out more context from the paragraph you quoted from on page 7. I especially like the last two sentences!

The central claim of this book is that the modern idea of algorithm depends on a particular sort of subject–object divide: the separation of disciplinary standards of rigor from the complex array of cultural, linguistic, and pedagogical factors that go into making systems comprehensible to people. In the discipline of computer programming, these standards provide a way of thinking about computational procedures—of creating them and judging them—that grants these procedures an objective existence as mathematical abstractions, apart from concrete computer systems. This subject–object divide is deeply embedded not just in textbook definitions of algorithm but also in the design of modern programming languages, which generally make algorithmic logic as independent as possible from matters of communication; this abstraction facilitates the transfer of algorithms across computer systems and across application domains. This way of thinking was not firmly in place until the nineteenth century, and revisiting the conditions that produced it can help us better understand the implications of language-based machine learning systems like GPT-3. The idea of algorithm is a levee holding back the social complexity of language, and it is about to break. This book is about the flood that inspired its construction.
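
To make that abstraction concrete with a toy example of my own (not Binder’s), here is Euclid’s gcd written in Python with no reference to how its result will be communicated; only the surrounding “communication” layer changes as it moves between application domains:

```python
# A toy illustration of the divide Binder describes: the algorithmic core
# (Euclid's gcd) is written with no reference to how its result will be
# communicated, which is what lets it travel unchanged across application
# domains. Only the surrounding "communication" layer changes.
def gcd(a: int, b: int) -> int:
    """The algorithm as a self-contained abstraction."""
    while b:
        a, b = b, a % b
    return a

# Domain 1: simplifying a fraction for a reader.
num, den = 42, 56
g = gcd(num, den)
print(f"{num}/{den} simplifies to {num // g}/{den // g}")

# Domain 2: the same abstraction reused in a different setting, say finding
# the shared repeat unit of two woven patterns, with different framing.
warp_repeat, weft_repeat = 12, 18
print("shared repeat length:", gcd(warp_repeat, weft_repeat))
```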

Thanks for the webinar suggestion–looking forward to watching it.

I’m curious to see how Binder ties it all together in the end. Reading through the first chapter, I have a lot of notes about this idealistic pursuit of a symbol that can somehow capture the thing (whether an object, feeling, concept, etc.) itself. It reminded me of a sort of magic or holy relic (possibly even a form of alchemy!), and also of ideas behind abstract expressionism and other theories in modern art that attempt to render “pure” expressions. Those kinds of aspirations also carry over to some aspects of what idealist/utopian views of AI hope for. I’m not a utopian or dystopian, more of a realist, so the extremes never really resonate with me.

I do hope we can inform our understandings of algorithm and the technology we wield more broadly, historically, and from much more diverse perspectives. So while I agree that bringing back “old” ideas about algorithms that do not conform to neat subject/object divides is valuable, I also want to go further and resist idealistic notions of technological saviorism, especially those that center ONE symbol or object that might apply to or define ALL of an idea. That smacks of religious hierarchy and trends toward removing human agency and accountability. And, as most written and digitized data has been western/classical in nature (correct me if I’m wrong, as I know there ARE wonderful projects working to complexify and diversify the data), those impulses will inform the systems we are building now. Who is the subject and what is the object is an essential part of the equation, as is why this particular equation, how it is constructed, where it will exist, etc.

In short, revisiting the history can not only help us envision the implications of LLM/GPT-3 tech, but also inform our choices in how to shape and wield it, as well as IF we should. I would argue it also gives a less… frenetic/fraught context in which to consider these things, as these ideas are not NEW, just placed in a new context.
