This is a page to explore questions about resonance. This can be as broad as your imagination - so see what you can come up with.

If you can, please add to this page like a blog - i.e. add new entries at the top, and write the date and your name / initials (I will use RW, JM = Jenny Mackness, MM = Matthias Melcher, RD = Richard Devon), so ...

30/5/17 (RW) The Myths of Greece and (Christian) Rome, and the Ultimate Resonator
The post-modernists had great fun deconstructing (and undressing) the ultimate signifiers - and teleologies - of language, politics, religion, etc. But the question of how we ended up with an ultimate signifier, or ultimate resonator (the 'one ring to rule them all, and in the darkness bind them'), in the first place was seldom, if ever, asked.

Seems to me there was a disruption, a philosophical coup, somewhere between Greece and Christian Rome. The Greek (and Roman) gods (the guys upstairs who decide your fate) were, if my scant knowledge of them is correct, an unruly bunch who slept with each other's wives, fought amongst themselves, squabbled, and were often a bit unhinged. Somehow the Abrahamic faiths (from Judaism through to Christianity and Islam) won all the metaphysical state-capture battles in the Mediterranean, displaced these gods with a 'benevolent', single God (give or take a Trinity here or there), and fired the rest. The victorious God/s looked after (several factions of) 'us' and somehow always managed to be on 'our' side in wars.

The notion of a single metaphysical CEO, blessed with benevolence to his (sic) people, as the 'guy upstairs', seems to be the core of Roman-Christian fusion metaphysics, which displaced the notion that the 'guys upstairs' in Greek and Roman mythology were more like the leadership of a mafia family. This is a serious problem, as this myth - that the guy upstairs in metaphysics is a unified, perfectly and ultimately resonant, and benevolent, single entity - functions as the role model, and conceptual model, for how we see his (sic) temporal counterparts - i.e. our democratically elected heads of state. In 2017 in particular, it might be a lot more honest if our God/s (and our rulers, by extension), were seen to be the squabbling, sharp-elbowed mafia families of Greek myths.

27/5/17 (RW)
(In reply to Richard Devon ... )
Agreed, a "class structure without work" is precisely the new (and disruptive) mode of production we are moving into. Many years ago in the UK (1977), I was struck by what I thought was a 'weird' (and very large) protest march on London - for "the right to work" - now I understand it. A future in which most people will "have no labour" would really shift power to a 'charity-based' society, with all the dependency and humiliation that goes with it. Nurses are already joining the queues at food banks here, and no matter how small the numbers, that's indecent.

27/5/17 (RD) (Copied from Researchgate comments)
Vivek Wadhwa argues that AI runs the world now and that the future will have no labour (jobs). I am cynical about the 'I' bit, as databases are full of errors, and I believe there is less 'A' than meets the eye, as it is owned and designed by people for the benefit of the few - a class structure sans work? Weird idea. I found blogging a useful learning device for my students, so I tried it myself for a while. Not sure if you will have access to this, but it has a couple of short essays relevant to this discussion.

13/5/17 (RW) Computer says 'no'

In reply to an ongoing conversation about the limits (if any) of 'AI', I refer to a conversation with Hermann Hauser (IT financier and innovator) in Portugal, who asked much the same question.

My reply was that the missing link in "AI" is subjectivity, inter-subjectivity (and self-awareness).

And the simplest way to think about it is to consider the one, crucial, aspect of abstraction that is, by definition, not possible to programme into AI: i.e. the token swapping that occurs when you say "I think x, what do you think?" and I reply: "I think x.1, does that make sense to you?".

The crux of the matter is that all conversation is premised on the fact that we are passing around, and exchanging, the 'empty' signs "I" and "you", which are in a sense 'meaning-less' (because they are so unstable: their meaning reverses every time an exchange takes place). Conversations use these paradoxically empty signs to create shared meaning that both of us have a personal stake in - meaning we can only create through the mutual exchange of these empty signs, the tokens and mechanism for constructing our inter-subjective commitment to a common, semiotic, meaning.
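The instability of these empty signs can be made concrete in a toy sketch. This is an illustration only - the `resolve` function and the speaker names are invented for the example, not taken from any real system - but it shows the point: the referents of "I" and "you" flip with every conversational turn, so neither sign has a fixed meaning.

```python
# Toy illustration (hypothetical): the referents of the 'empty' signs
# "I" and "you" reverse on every conversational turn.
def resolve(token: str, speaker: str, listener: str) -> str:
    """Map a deictic token to its referent for the current turn."""
    return {"I": speaker, "you": listener}[token]

# Turn 1: RW speaks to MM.
assert resolve("I", "RW", "MM") == "RW"
assert resolve("you", "RW", "MM") == "MM"

# Turn 2: MM replies - the very same signs now point the other way.
assert resolve("I", "MM", "RW") == "MM"
assert resolve("you", "MM", "RW") == "RW"
```

The mapping itself is trivial; what the sketch makes visible is that the *referent* lives entirely outside the token, in who is speaking to whom at that moment.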

This meaning is, paradoxically, not primarily abstract, in the sense of being 'context- and subject-free'. It is meaningful precisely because it is grounded in a shared context and in shared (and jointly created) inter-subjectivity and history/ies.

That tiny point in time in which minds touch each other is denied to machines.

The question “what do you think?” cannot be asked of a computer, because the computer has no “I” to exchange. And simulating a ‘voice’ for a computer that would say “I” is a non-sequitur, which is perhaps why the David Walliams comedy sketch about bureaucracy always had the computer saying “computer says no”, rather than “I say no”. Which points to the crux of the problem, and of the absurd humour in Walliams's sketch, which is that if the computer ever did say “I said no” you really, really, couldn’t ask “And who the .... do you think you are?”

Of course this is nothing new. This is precisely the point of bureaucracy - whether computerised or not. Namely, to create a totally abstract set of rules and procedures, abstracted to such an extent that you can't ask 'why', because bureaucracy is precisely context- and subject-stripped; it has no "I". It is constructed in such a way as to be immune from questioning, an embodiment of a kind of 'common sense' which has no individual subject. That is precisely the value, and the frustration, of bureaucracy: the 'subject' of bureaucracy, if it has one at all, is supposed to be the public good - which includes you, and it makes no sense to question yourself, does it? That would merely be an infinite recursion.

The 'threat' is not the machines, or AI, but rather the people promoting AI (and 'big' data / 'big' alpha males) as the solution to human evolution, but leading to a "robot society which might keep us humans as pets [in which] the subjective recognition, done by diverse resonating individuals, is replaced by algorithms whose supervision is thought to be optimized by rationality (or by a dictator) to find the one true solution" (Matthias, by email).

This is just warmed over 'social engineering' [with algorithm-sauce added] from the 1970's, but it smells of 1984, and pseudo-religious teleology.

We are all trapped, and complicit, in the bureaucracies of the 'social contracts' that we (willingly?) vote for, again and again, and delegate blindly to our democratically elected 'leaders' until they call the next election. That's what we have to resolve.

13/5/17 (RW) Structure, Agency and Cross-talk
Resonance might usefully be split into different genres of resonance: those that reinforce, those that are cross-modal, those that are modality-free, and those that cross gross semiotic boundaries: i.e. across from text to embodiment, and back (Montessori learning, yoga).

Or to look at it another way, resonances that reduce possibilities and horizons, versus resonances that open up, and open out, what is possible and the domain and range of the possible.

Continued, here ...

8/5/17 (RW) Globalised alt-fascism and divide and rule

Two things have come to the surface, and they are inextricably linked:

1. Globalisation
Carole Cadwalladr's alt-mafia organisational chart, below, and her analysis:

Alt-Mafia's organisational chart 1.0

... which shows how the alt-mafia don't have to "take back [globalised] control", they have already done so, and what they actually mean by their attack on 'globalisation' is very partial - they want to cherry-pick a fight against the global resistance to their developing hegemony (see Trump's recent change in the US regulatory rules, to permit data companies to harvest and sell all the data that comes their way - without permission), while preserving all the global alliances and channels they need to continue the "gutting of democracy" (Pope Francis, here ...).

And it seems that Facebook is now openly marketing their services to help in the gutting of democracy.

2. Divide and Rule
The best way to undermine any - let alone global - resistance to their developing hegemony is for them to 'divide and rule'. So, target the UN (it has already weakened itself), and break up or dilute all transnational alliances - starting with the TPP and NAFTA, and the EU. (Which is why Katie Hopkins and Farage tweeted revolting responses to Macron's victory in France, which strengthens the EU).

So, what's to be done?

On the one hand, this is all just another form of 'big-data' crunching, albeit applied to vicious and secretive social engineering / 'social-meme-modification' (cf. genetic modification). Is the alternative a different approach to data mining and aggregation? Or a different encrypted web (other than the existing 'dark web') which could re-create Berners-Lee's global commons?

There are a bunch of ‘aggregators’ that have been developed specifically for online collaboration, like gRSShopper, but maybe there’s space for more innovation. In short, it looks like it's back to the Berners-Lee drawing board, to create a parallel (not-for-profit) commons / data aggregation practice - and preferably to fund it through donated resources in kind (i.e. time) from progressive corporations and universities (see the Linux model, which is sustainable across sectors).

I posted a comment some years back that “convenience data is like convenience food - very tempting, but not necessarily good for you”. The alt-mafia have created their own ‘convenience-data-engines’, maybe it’s time for the libertarians (not liberals!) amongst us to do the same.

A little challenge: is it possible to automate the harvesting, reverse engineering, and modelling of the outputs and activities of the alt-mafia's aggregation engines (see Cadwalladr's work on her Twitter site)? Or is something more required? A few years ago I would have said the solution is more public-sector regulation, but the UK, US, China, Russia (and the EU?) are all going to refuse, on the grounds of 'security/terrorism'. So it might be best to just get on with it, and aggregate and analyse the alt-mafia's aggregations, and make the results public domain, no?

13/3/17 (RW) Ecology, Resonance and Determinism
Horgan (see Teleology, etc, below) has a more nuanced approach to the Gould / Dawkins spat - he points out, usefully, that (Stephen Jay) Gould differs in two important ways from the reductionism of Dawkins: Gould insists that punctuated equilibrium, rather than continuous gradualism, is a better model of evolution, and that speciation occurs via changes at the level of the species, not via selfish genes.

This matters, because it removes the determinism and teleology from the overall approach to biology, and to evolution in particular. That matters, in turn, because it emphasises that life on earth, and our evolution as a species is a matter of chance, sudden change, and is entirely contingent. It could have been otherwise, it could not have happened at all, and it could (still) happen elsewhere instead - possibly in quite different forms.

That, in turn, emphasises an ecological approach, i.e. that many factors, and species, and geological and cosmic events all interact to shape evolution (or de-volution). Which (finally) means that we are lucky to be here, and we should be careful to make good out of what we have, while we still have it.

This is a rather more humble (and scientific) approach than the heavy-handedly anthropomorphic and teleological approach which in an ironic way both Dawkins and the creationists believe in.

What this amounts to is that contingency, complex-adaptive systems, emergence, evolution (and de-volution) are the fundamentals of biology and social life. Not only does (deterministic, predictive) science not have all the answers, it never will have. Moreover, some of the answers science has already given us (from global surveillance to global [self] destruction, etc) are already in danger of creating terminal, feral, genes, memes and temes.

Apart from the fact that the building blocks of biology (viruses and bacteria) are unashamedly promiscuous (not 'selfish') with their genes, Dawkins's fundamentalist views not only malign biology with 'selfishness' (a non-sequitur for non-conscious forms of life if ever there was one), but, by labelling all of biology as 'selfish', also re-anthropomorphise H. sapiens as 'inherently' and irredeemably selfish too.

8/3/17 (RW) Resonance and recognition
Matthias, beautifully put. Let me give you an example that still sticks in my mind (many years later). In the 1970's I was teaching at 'Trafs', a high school in Cape Town, and a young girl came into my classroom with a message for me.

One of the boys near the front passed a rather provocative remark about her, at which she looked down her nose at him and replied: "Your behaviour is atrocious". The boy had not expected this, and, lost for words (he was nothing like as articulate as she was), replied "And your language is 'jungle oats'" at which the whole class collapsed, laughing.

The point is simply that he did not have the same range of signs at his disposal to counter what the young girl had said, so he grabbed what he could - a 'complex sounding' term, which happened to be 'jungle oats'. It worked, the whole class - and the young girl - 'recognised' it as an appropriate (if ironic, and technically incorrect) term. Instant recognition and instant laughter. But ... I suspect that the term never passed into the lexicon of school English (or slang), and if it did, it probably soon fell out of use.

However, if he had used something less absurd, or something derived from a similar use (or practice) he might (?) have coined a new term, or a new derivative use of an established term. What you describe so well is "the moment when the new usage resonates with the listener is probably exactly when he/she 'recognizes' the meant referent" - and this is true despite the fact that neither of the people involved articulated the referent and, for the rest of the class, this didn't matter. Perhaps several of the people in the classroom discussed the incident afterwards, and tried to find the best term for what was meant. Who knows?

In one sense this new use of 'jungle oats' is clearly linked and grounded in the shared situation, but the boy could have used a new term that he and his friends had coined (previously), for what they thought her behaviour signified - in general - beyond the boundaries of this particular situation. That too would, of course, have been grounded in synaesthesia and shared experience.

But ... there is no reason why a new referent (signified) cannot be dreamt up / imagined by a person way 'beyond' his or her individual or shared experience, and 'inserted' into the classroom interaction. However, for that referent (signified) to enter the English language at that point, they would need to use a signifier (a marker), which would have to be derived from one (or more) of their experiences - although not necessarily from their 'intellectual' or even 'social' experience; perhaps just from their experience of sounds in the surrounding environment.

In short, to kick off the semiotic process you can grab a signifier from anywhere; it doesn't matter. What matters is whether the 'joke works' (in the classroom) - i.e. whether the signified resonates with your audience, and whether the long-term 'life' of the composite sign is sustainable or not. (Using 'jungle oats' for this type of patronising behaviour was probably not sustainable).

[Terminology: I tend to use de Saussurian semiotics, in which the sign is the relationship between the signified and signifier (any type of marker). I realise that this differs from Peircian semiotics which uses 'referent', etc, but to much the same effect].

26/02/17 (MM) Sign of a use to resonate
You said in "Taxonomy of Resonant Knowledges", section "-1.":
"all that you need to kick off the semiotic process is for a sign of a use to resonate, to make mutual sense, between two people,...".

Is this restricted to using new term usages for concepts that can "be gleaned from overt behavior in observable circumstances", or does it also apply to using new signs for a NEW concept that can NOT be gleaned from a shared situation? I think your idea of resonance covers the latter case, too, i.e., it works also for non-concrete ideas. Of course the new signs/terms would have to be carefully 'depicted' by vivid paraphrasing, or be derived from vivid other terms by way of metaphor (Lakoff) and synaesthesia, until the resonance occurs in the listener. The moment when the new usage resonates with the listener is probably exactly when he/she 'recognizes' the meant referent. Even though they cannot yet recognize the sign as a lexicalized term/word (know the word), they recognize the concept described. The emergent term, then, would be, so to speak, the frozen resonance - or, as Marx would probably say, the congealed resonance? For learning, this would have the important implication that concepts can only be learned if the frozen resonance is re-enacted / revitalized by the learner on their own, by recognizing the concept on their own?

20/2/17 (RW) Constructive resonance - Swedish style

From the Guardian ...

In response to Trump's bizarre speech: "Look what happened in Sweden last night":

"Sweden’s foreign minister, Margot Wallstrom, tweeted a link observing that “post-truth” was named word of the year by Oxford Dictionaries in 2016.

"She also tweeted an excerpt of a speech she gave in parliament last week. “Both functioning democracy and constructive cooperation between states require us to speak with, and not about, each other, to honour agreements and to allow ideas to compete,” Wallstrom said. “They also require us to respect science, facts and the media, and to acknowledge each other’s wisdom.”

12/2/17 (RW) Teleology and messianism.
More musings from reading Horgan ... He has some pithy quotes from Steven Weinberg:

1. “The more the universe seems comprehensible, the more it seems pointless” (The most Zen formulation of the issue I have come across for a long time)

2. “The tragic dilemma of particle physicists: they are damned if they find a final theory, and damned if they don’t”

Sounds to me like physicists are going through the same existential crisis that Sartre went through in the 30's, and the postmodernism-deniers went through in the 70's (right through to the nineties, for some of them), i.e. the death of teleology. In postmodernist (and semiotic) terms, the thesis is simply that not only is there no 'ultimate signifier', but that signs and meaning are endlessly circular: there is no beginning, and no end.

So it's not only 'turtles all the way down' as the answer to the child's problem of who holds up the person holding up the earth, but it's also 'signs all the way round' as an answer to the problem of where meaning begins or ends (or "where is it going to?" - if you must).

In short, it's great to try to make sense of our little corner of the universe, but the idea that we could ever make sense of 'everything' is beyond crazy; it's deluded. Only an omnipresent God, in all universes, could attempt that, and none of us qualify.

But unless we can get the majority of H. Sapiens to agree that teleology is a non-sequitur (to put it bluntly), they/we will always be chasing after the next God / leader / messiah. And we currently seem doomed to live in a new age of messianism, the oldest form of fundamentalism in the book, and probably the oldest profession in the world too, actually.

The good news is that there are so many great moments to enjoy in the meantime, or as I wrote some time back:

Watching a sunrise
is a hazardous business
you only get one chance
per sunrise.

8/2/17 (RW) The alt-mafia / post-fascists.
It was once said that the mafia, like any self-respecting transnational corporation, never discloses its accounts – or taxes. The alt-mafia are now in power in both the aptly named 'white' house and the Kremlin. It could turn out to be a 'happy little marriage', or a new big power resonance, instead of rivalry - quite a weird development.

This arises (rather circuitously) from further reading of Horgan's End of Science (see below) …
Horgan is (still) pretty much absorbed in his own profession, and doesn't confront the bigger issues:

1. For me, ‘science' has a number of functions:

1.1 Finding and exploring the big questions. This is surely still good stuff, and it should and will continue. But who, and what, gets funded, and by how much?

1.2 More important than the question of whether we have reached the "end of science" is the question of defining science in terms of ‘practical epistemology’ - quite simply, science is a way of describing, circulating and sharing new, precise, and useful ways of interacting with the world. This includes new ways of thinking about it, and new ways of acting on it / in it.

1.3 Occam's razor must be applied: i.e. start with the idea that science is just a way of "describing, circulating and sharing new, precise and useful ways of interacting with the world". This covers both 'hard and soft' (or 'inanimate and animate') phenomena, and predictable and complex-adaptive events. So it's a useful 'epistemology of everything' - to coin a phrase.

That done, you can then differentiate the predictable from the ordered-yet-unpredictable stuff (both within physics, as well as within the arts-and-sciences).

2. There are a number of issues that should be pushed up the global and social agenda:

2.1 Horgan touches on the politics of 'big science', but pulls his punches (and his investigations). The politics and ethics of scientific funding, patronage, and corruption (in hard and soft 'sciences': Kenya, AIDS, vaccines, drug trials, drug costs ... add to taste) is actually far more important.

2.2 The monetisation of science is broader than his remit, but if he is interested in the big picture, he should be engaging with it.

2.2.1 Prestige or, more to the point, vanity science (for institutions, as well as for many right-wing academics, I suspect) needs to be deconstructed for what it is. Calling it 'ironic science' (as Horgan does) is far too kind.

2.2.2 Absorbing the discourse of science (i.e. inquiry and understanding) into the discourse of off-shore wealth accumulation (aka 'outsourcing') on the current global range and scale marks a fundamental shift in society that is the ‘new epidemic’.

It turns professional decisions - on science, pharmaceuticals, education, health, transport, security, etc. - into the playthings of the global kleptocracy / oligarchs. Clinicians in hospitals, for instance, are subordinated to monetised management by 'bed-managers' (in the UK - and elsewhere?) and international consultancy firms: McKinsey, for example, recently decided for the NHS that hernia and carpal tunnel operations would no longer routinely be paid for by the NHS, unless there was some surplus funding sloshing around somewhere that could be found and applied for (while the patients' conditions grew worse, often resulting in more extensive, painful and costly medical interventions). (A stunning example of outsourcing and off-shoring monetisation, on the advice of financial 'experts', which could also be seen as a kind of 'back-door' state capture - a meme which is all the rage in post-Mandela South Africa, following closely in Putin's footsteps.)

2.2.3 And … in the process, this emasculates (or just junks) territorial-based democracy, which becomes the plaything of would-be celebrities pretending to be 'nationalist' politicians (see the definition of a politician: someone who always desired to be a celebrity, but was not good-looking (or talented) enough).

3. And ... the much more mundane issue of 'doing' science (as opposed to the heroics of 'discovery') still fails society, when, for instance, well-educated people in the UK say that it's bad to take a whole course of antibiotics, as everybody knows that taking too many antibiotics is bad for the development of drug resistance. And in South Africa (and other countries too, I am sure) the whole AIDS epidemic was mishandled by a decidedly unscientific approach to prevention, cure, transmission, etc. For example, the erstwhile head of the S.A. AIDS campaign, Jacob (now President) Zuma, told the court where he was accused of rape that it was not only OK for him to have sex with his friend's daughter (you have to look after your guests), but also quite safe for him to have unsafe sex with her (he knew she had AIDS), because he took a shower afterwards. The judge said he was, literally, speechless.

4. The urgent need to define and operationalise a new cooperative commons (for which science, in some of its best moments, was a model) is written off in the noise generated by 'hard-nosed' economists and alt-mafia populists.

5. The upshot of this is the need to step back from the texts, the tactics, the tweets, the news 'events' of the day, to deconstruct what is actually 'going on' behind the media froth, in an analysis of the 'discourses' of power (after Foucault, Derrida, etc) - e.g. the behind-the-throne machinations of Steve Bannon and the alt-mafia, aimed at 'bringing down the State', and undermining years of progress in building the basics of a cooperative commons.

At least the fascists were committed to making some aspects of the commons work – even if it was just making sure the trains ran on time. The American alt-mafia are running public office as a zero-responsibility, zero-remuneration (they have already accumulated and stashed their billions) job, for a zero-commitment society (via zero-hours ‘employment’), keeping their money beyond the reach of the State and paying close to zero taxes (see Donald Trump's companies in the state of Delaware, which is deemed 'off-shore' for tax purposes). The global kleptocracy has taken over. (And eat your heart out, Friedrich Nietzsche).

6. The alt-mafia/state-shrinking-and-capture regime could also be seen as a macabre form of abstraction - a shift away from national democracy, perversely carried out in the name of nationalist populism, in layers and layers of irony/deceit. To wit, the new resonances:

"Imagine a world in which the old left-right divide and the east-west conflict of the cold war era were practically irrelevant. The conflict of ... would [then] be between traditionalists and pluralists, between internationalists and nativists, between autocracy and liberalism. This is Dugin’s world. In it, Trump and Putin, for all their differences, would be on the same side" (see above link).

(Just a thought - if the USA, Russia and China are all converging, maybe Fukuyama was right after all, and it is, finally, the end of history, just not the way he thought of it, and a new global elite of war-lords - as opposed to internationalists - is born. In hindsight, was it ever any different? )

2/2/17 (RW) Control or Responsibility
I have been reading Horgan’s book, The End of Science - starting with the chapter on Chaoplexity.

He does get around to speaking to many of the key suspects, in person, but ends up delivering not much more than a 'clever' journalistic account of why all "chaoplexologists" are deluded.

It's just too easy to set up the Santa Fe group on Complexity as a straw man/institute, and then shoot them down in flames. I, like many others, have never subscribed to the Santa Fe take on complexity. It suffers from 'physics envy' and a yearning for predictability in social science, human behaviour, and biology, which is never going to happen.

It also suffers from the AI syndrome, i.e. the delusion that computers will (surely, one day) be able to mimic the human mind, and exceed its ability to communicate (i.e. above and beyond ‘compute'). There are a number of problems with this:

1. The whole point about complexity and emergence is that there are many events - all of biology, zoology and human sociology for starters - that are to varying degrees ‘unpredictable but ordered’.

The idea that we will one day be able to predict the unpredictable is, of course, a contradiction in terms. It's 'physics envy' in another guise - e.g. as recently articulated in Fukuyama's book The End of History - the deluded triumphalism of neoliberalism, some years back. (And Horgan's title seems to mimic Fukuyama's hubristic pretensions.)

2. Many of these events are also what Soros (writing on economics) calls ‘reflexive’ - which is more than ‘clever feedback loops’, as it includes subjects (self-aware agents) maintaining their own identities, cultures and capabilities/power.

3. The 'more' is, crucially: intentionality, community, and shared responsibility. In game-theory terms I guess it is, at its best, that ghastly Americanism, a 'win-win' situation. In semiotic terms, it's the double-edged challenge that all signs are arbitrary and conventional - i.e. the opportunity to be creative with signs is always accompanied by the responsibility to account for the practices of the community of users in question. And it gets to be more and more of a challenge the more 'open' the community is - whether that is scientists developing potential frankenfoods/biotics/nano-technology, or alt-wrong trolls promoting fascism.

4. The ordered part is made possible by hindsight, and by the courage to let the event 'run' with large degrees of freedom, and see what happens. (This requires an approach to responsible management which is quite different from 'control' - see Snowden & Boone 2007).

5. What happens is history (and, more often than we would like, in the form: ‘shit happens’), and we live within it, as in: "we live our lives forwards, and make sense of them backwards".

6. Identity, in turn, is reflexive, certainly in socio-semiotics, with roots going right back to autocatalysis in RNA / DNA (which is not the same thing as the ‘selfish gene’ argument).

7. History, identity, and reflexivity have to be added to intentionality, community, and shared responsibility if we want to understand unpredictable yet ordered events. Subjects come into it too, via identity (see 2, above). As does reconciliation (see the South African Truth and Reconciliation Commission, in Country of My Skull).

8. Communication has to be built on all the above (see 7) if it is to be anything more than transferring information and issuing instructions - one way - even with the addition of a few clever feedback loops (the Fanagalo paradox, for those who are familiar with the Southern African language of instruction).

To paraphrase Marx, the point is not to control the world, but to build a shared global community, with shared responsibilities.

A Universal Declaration of Human Responsibilities would be a good start.

17/01/16 (RW)
Sensory resonance and learning
In reply to an email and a very interesting blog post by Matthias Melcher, which responds to work by Kieran Egan on kinds of understanding and the tools that develop them, the following:

Having read some of Egan’s stuff (via the links) …

1. I am interested, but a little sceptical of his idea of cognitive development as a re-enactment of social evolution. It sounds too neat and linear for my liking, despite Egan’s qualifiers.

2. I am drawn back to Montessori practice, which is somatically based, but is also free-flowing (and incorporates regression), in classroom design. It frames learning within enactive perception (which Gibson later formalised as affordances). And it is based on the design and construction of enactive order (which incorporates systems of difference).

The principles of design for Montessori materials can be applied across many fields, but the superordinate principle is that education and the curriculum are NOT determined by the teachers (at preschool level in particular) but by ‘following the needs of the child’ - which requires detailed observation. Design is created as a response to the child’s need to explore, and a need to order their world as they progress through a series of ‘sensitive periods’ for different exploratory needs/desires, starting with the kinaesthetic, the 'invisible' sense.

3. These sensitive periods provide rich affordances for the child to engage, and to create something - their ’selves’ perhaps - but not as individuals. What they create could equally be called ‘order’, or routines, or organisation, or familiarity, or even comfort (within their own skin?) - it’s what is created as a result of interacting with the environment and creating micro-worlds of order - in the world and in them(‘selves’). It’s a way of inserting / inscribing themselves into the world, not of knowing 'about' the world. An engaged knowledge, in other words, not an 'academic' knowledge (in the - slightly - pejorative sense of 'academic').

For example, recently a friend brought her 2 year old daughter to stay for a few days. The mother wanted to know what ‘Montessori’ activities she could do with the child at home. The child, right at that moment, was taking toy animals out of a toy box, and putting them back in again. A visiting child (aged 8 or so) sat next to her, and eagerly used the opportunity to tell her the name of each one, which the child repeated - and was quite eager to learn - although that was clearly not the focus of what she was doing. At this point the mother wanted to leave, to catch a train back home.

I pointed out to her that the child was ordering the box, the toys, what can come out and go back, what a box for toys can be used for, and the difference between animals and their names. But most important of all was that she had put all this together into a ‘task’ with sub-tasks, and a cyclical structure, with an opening point and a closing point, and some idea of when the task would be ‘finished’. I am sure this is recognisable as a good scaffold for programming: creating and combining sub-programmes/routines, creating a ‘run’ protocol, deciding when it is finished, testing it, and completing the task.

3.1 This is in other words a sophisticated way of ordering the micro-ecology of toys and toy boxes, and extending and developing the capability of the emergent ‘self’ as someone who can execute this task / programme. Above all, I said to the mother, what you must never do is interrupt the task (the equivalent of interrupting a computer programmer in the middle of writing and testing a new programme, to see whether, and if so how, it ‘runs’). And repetition is crucial to test whether the programme runs properly, under different conditions.
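Purely as a playful illustration of that programming scaffold (the names and structure here are invented for the sketch, and no claim is made that this is what the child is 'really' doing), the toy-box cycle might look like:

```python
# A toy 'task' with sub-tasks, a 'run' protocol, a completion
# test, and repetition - echoing the child's toy-box cycle.

def take_out(box, animals):
    """Sub-task: empty the box, one toy at a time."""
    while box:
        animals.append(box.pop())

def put_back(box, animals):
    """Sub-task: return every toy to the box."""
    while animals:
        box.append(animals.pop())

def run(box):
    """The 'run' protocol: one full open-and-close cycle."""
    animals = []
    take_out(box, animals)
    put_back(box, animals)
    return len(box)  # 'finished' when everything is back in the box

toys = ["lion", "zebra", "elephant"]
for _ in range(3):          # repetition tests the programme on re-runs
    assert run(toys) == 3   # the cycle completes every time
```

The point of the sketch is only the structure: sub-routines combined into a cycle with a defined opening, closing, and completion test, re-run to check that it still 'works'.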

The task for the learning materials designer (mother, teacher, etc) is to recognise this designing / ordering process, respect it, and apply this knowledge to observation and analysis of other ‘play’ activities that the child engages in (which should actually be seen as self-motivated design and development ‘work’ - not play at all).

3.2 This is in many ways pre-linguistic, but as a programme design and test activity, it is simultaneously post-linguistic, or even in a sense a-linguistic. The most important thing is for the child to develop - and for the mother/teacher to recognise, respect and enable - this task design-and-definition, which is a lot more sophisticated than ‘somatic’ or ‘binary’ skills and competencies, and in which systems of difference play an important, but subsidiary, part.

The process of designing exploratory tasks, and enabling children to explore them in a Montessori environment is applied across all senses, and across literacy and numeracy. Literacy is taught as an active, somatic, task - through writing first, in a very tactile way (using sandpaper letters) which is followed by reading, later on. Current touch- and voice-based media provide wonderful design opportunities to develop digital Montessori materials, although the basic sensory materials would not be replaced by them. There is lots more to explore, and perhaps a whole section on sensory resonance and learning will have to be written (sometime) and added to this wiki.

And ...
I do think that bibliometrics is taking on a life of its own, which has little or nothing to do with academic or intellectual value - it's just another sad example of convenience data, if not of malgorithms.

23/11/16 (RW)
Epigenetics, memory, evolution, and resonance/alignment
Since the debate between the ideas of Lamarck and Darwin kicked off, there has been scepticism about the resonance between experience, consciousness, etc. and genetics - particularly influences on the genotype (rather than just the phenotype). This is perhaps an extended sense of the term 'resonance', which leans towards alignment (which I guess is a weak form of influence, or even a very weak form of causation).

Here is a great panel discussion on all these issues:

Deepak Chopra chairs the discussion, but digresses into broad speculative issues a little too much at times. He is excellent when he speaks as a specialist endocrinologist and researcher. The others, including the rap artist, are superb.

The discussion makes the point (amongst other things) that sustained changes in your diet get locked in to ‘automatic’ / semi-permanent behaviours / 'gut memory', in the epigenetic material around the genes, which starts to unpack Chopra's idea that the brain/consciousness interfaces with switches at the epigenetic level, and therefore also switches of gene expression. This offers a glimpse into potential mechanisms for Lamarckian influences from the environment (both the external environment and the internal one [i.e. consciousness, and the sub-conscious]), to influence changes to the phenotype (i.e. gene expression), which could in turn change predispositions for adaptation in the genotype.

This raises lots of very interesting questions. Not too many answers, but lots of interesting micro-studies on epigenetic markers, that start to provide the building blocks for identifying and thinking about specific micro-biological (and social) issues in the Darwin (Dawkins) v. Lamarck debates. Dawkins gets discarded early, and Darwin emerges as a far more nuanced thinker than his self-proclaimed 'protege'.

And there is a telling (horrifying) example of the experience of women at the 9/11 event causing detectable changes in the saliva of their offspring, which makes me think of the clinically troubling effect of the Rwandan genocide on the next generation of children, who had not even been born at the time the genocide happened. The crucial question of whether these changes are passed from one generation to the next generation culturally, or genetically, (or both) can now start to be explored through examining changes in specific epigenetic markers.

7/11/16 (RW)
Writing in/for the public
I am interested to know if the format of this 'resonance' exercise, i.e. ‘writing’ in a web of interconnected texts - both internal to the wiki and linking through to many external ones too, ‘works’ for the reader.

It works for me. One colleague asked if I was writing for pleasure [yes], or whether I want to use this as a resource, to later write a ‘sequential text’ type of ‘published paper’ too - at some stage - to which the answer is yes, maybe. I suspect that one (or more) publications might emerge from the process (from me or from others, why not?). But I am not sure if the sequential published paper/s will be as good (= useful to readers) as the current ‘web’ or hypertext. Perhaps we'll have to wait and see.

In another sense, this is just a matter of writing in [a] public [space], which might or might not be interesting to others - at the moment many people seem to be reading the additions to the wiki, on a daily basis - partly because there are lots of ‘members’ of the wiki, who all get updates on anything new, and partly because I often flag up additions to the wiki in my twitter account.

Writing in public then has a relationship to writing ‘for the public’ (i.e. writing in private, and then releasing a finished product). This is a bit different, as there is no reason why it should ever have a finishing point, or product. We will have to wait and see how that turns out, and see whether writing can usefully remain unfinished (or conversely, maybe we should ask: in what way can writing usefully remain finished?).

Whether anyone else will pick up on this way of writing (and run with it in their own, distinctive, way) is moot. But I am not doing this for others, rather for myself, really, so any wider use of the format would be on the one hand incidental, and on the other hand a [creative commons, with derivatives] bonus.

31/10/16 (RW)
Knowledge as becoming
Stephen Downes on knowledge: "rather than being a process of acquiring something, as commonly depicted, learning is in fact a process of becoming something"

And so, ... [WIP ...]

30/10/16 (RW)
Ironic Resonance?
Matthias, yes. And 'fit-ness' is dynamic, just as 'resonance' is. See the Rhino, here, who, by becoming 'lost' in the landscape, 'fits' in, to such an extent that the serendipitous camouflage works 99.9%. There is no genetic transfer / translation mechanism, unfortunately, so no phenotype-genotype reshuffling of the genetic code, and 'evolution'. But I love it as an example of fitness and ironic resonance - the resonance of getting totally lost, which in semiotic terms is a wonderful addition to the playful richness of 'resonance'. (This could also end up being a great children's book: The Rhino who got lost).

This also links up with your ideas on 'kennen / 'connaître' / kennis (Afrikaans), and 'ken' (Scots), and Stephen Downes' idea that knowledge is 'becoming', not 'storing' (see below). Or it could be seen as 'fitting' your self / neuronal networks around / into a particular response to an event in your environment - which might be much the same as his idea, that you refer to above, that "there's probably only one algorithm ... ". And ... see also my basic definition of knowledge as "the capacity for effective action" (which I got from St Onge, some time back .... [add link]) - which must surely be both a multi-synaesthetic, and an 'embodied mind' capacity, no?

Phew, food for thought ...
[WIP ... and ... chase up details in the links to downes, above]

29/10/16.1 (MM)
Your links from knowledge to Assimilation, "4. Ecological Resonance", or "6. Complex-adaptive systems", remind me of Downes's notion of recognition and of his writing: "I've also come to think there's probably one algorithm underlying perception, evolution, thought and consciousness." ( ). Can we say that evolution 'recognizes' what is fit, when it 'resonates' with its environment?

29/10/16.2 MM to RW ..
... it is now the fourth time that you write something in your wiki that I was just thinking of. First it was bouba/kiki, then derivation, then ecological adaptation (see my entry in your questions page). And now it is the idea of "being comfortable in your own skin" with some cognitive content.

For me, this is closely related with McGilchrist's idea of knowing in the sense of 'kennen'/'connaître', i.e., being FAMILIAR with something, or knowing it like a family member. I often thought that a main difference between math haters and math well-doers is that the latter have come to some familiarity with the abstract concepts, with the limited number of 'acquaintances' of concepts, such that the concepts can almost be seen as their 'cognitive furniture' in their mental living room.

Whereas math haters often are really blocking off all the pertinent ideas because they feel alien and hostile (and due to the derivation effect, even thinking of the homework already causes physiological threat responses).

More generally, I have long suspected that the question of learning styles and preferences is much less a matter of personality or ability or topic structure than of how we like the order of things best in our heads, e.g. more neatly hierarchical or more meshed and connected, and more serially lined up or all at a glance. This is, IMHO, a major determinant of preferences for modalities and presentation styles. While the styles are still busted as a myth, the picture may become very different when the emotional factor is included: on a single day last month, I encountered two findings that only now reveal the motivation implications: and

On an even more general layer, this well-feeling may also be mapped to what Stephen calls the Boltzmann mechanism of neural reconfiguration: trending towards a state of settlement and equilibrium.
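Very loosely, 'trending towards a state of settlement and equilibrium' can be given computational shape as a network settling into a lower-energy state. The sketch below is a Hopfield-style toy, not a claim about the mechanism Stephen actually describes; the weights and starting state are invented for illustration:

```python
import random

# A tiny network of three +/-1 units with symmetric weights.
# Each update flips a unit towards agreement with its neighbours,
# so the energy never increases: the network 'settles'.

W = [[0, 1, -1],
     [1, 0, 1],
     [-1, 1, 0]]   # symmetric weights, zero diagonal

def energy(s):
    """Hopfield energy: lower = more 'settled'."""
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(3) for j in range(3))

def settle(s, steps=20):
    """Asynchronous updates towards a local equilibrium."""
    random.seed(0)  # deterministic for the example
    for _ in range(steps):
        i = random.randrange(3)
        field = sum(W[i][j] * s[j] for j in range(3))
        s[i] = 1 if field >= 0 else -1
    return s

state = [1, -1, 1]
e0 = energy(state)     # starts at a high-energy (unsettled) state
settle(state)
assert energy(state) <= e0   # energy can only fall or stay put
```

The design point is the guarantee: with symmetric weights and one-unit-at-a-time updates, each flip can only lower (or preserve) the energy, which is one concrete way of reading 'trending towards settlement'.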

... and a reply: RW to MM ...
Food for thought … The idea of familiar / friendly concepts versus alien/-ating concepts works for maths. It also works for reading - some people find books and solid text alienating, and some find books friendly (and something you can ‘snuggle up with on a rainy day’). See the link to “the shock of the text” in the synaesthesia paper (on ResearchGate):

"… Quinn’s research into young people on the margins of education [37] describes learners who floundered in the sensorially and synesthetically impoverished abstract setting of text-based schooling, which assumes that the transition from rich, embodied, cross-modal experience to written text is natural, comfortable and self-evident for all students.

  • "Quinn’s learners ... thrived as soon as they were put (back) into rich synesthetic settings---in their case, in a farm-based school. They seemed to have been yearning for the “reinstatement of all their other (16 or so) senses” [38]. They were potentially quite capable of engaging with written text, given the right guidance and time, but they experienced the “cold turkey” approach of schooling and the “shock of the text” as intolerable and disorienting."

The point here is not that some learners (always) find text disorientating, but that text (like all other modes?) has to be treated as a transition, or as a phase in a range of transitions and abstractions (which is one of the reasons why I am so obsessed with ‘unpacking’ the dynamics of different kinds of abstraction).

That being the case, it is (highly?) likely that just about all people will find text comfortable at some stage/context and uncomfortable at others. The question is - at what point is a particular mode un/comfortable for a particular learner? Text is not a ‘magical’ one-size-fits-all solution - just because it happened (at one particular stage in the history of media technology) to be the most cost-effective (i.e. the cheapest) mode of production to produce education for the masses - some of whom found text totally alienating, particularly if they were thrown into it too soon (in which case they were just seen as 'collateral damage’, on the margins of the great ‘emancipatory’ project of modernisation).

This puts learning modes [style is a related but distracting issue, because as your argument (and references) point out, it’s clouded by expectations and feel good factors] - it puts learning modes into socio-historical contexts, but also into the contexts of personal history and affordances. And … it treats all of these factors as personal, dynamic and changing, and even recursive (they can change ‘ back and forth’ over time).

So one-mode-fits-all is not appropriate - neither for all people, nor for all times/stages in learning. It was the core strategy (masquerading as a value) of ‘modernisation’, which saw mass production (of single solutions) as the answer to the modes of production of education (and everything else) for an ‘equitable' society - only everybody had to be equal in exactly the same, standardised way, in the same standardised, one-directional sequence (aka: 'the march of progress’).

I need to know more about the "neural reconfiguration: trending towards a state of settlement and equilibrium” - any links?

... and a reply MM to RW:
If you scroll down to ‘Association’ in the attached word document, you’ll find all four mechanisms.
The Boltzmann kind is one that I often feel I experience directly, for example when I feel the urge to wrap up some topic or to reread a sent email, to somehow ‘get it out’ of my head. Downes may have explained it even better in other presentations, but in most links I found with his full-text search the mention is very terse, except perhaps here or here.

19/10/16 (RW)

Assimilation seems to be something different from resonance, yet perhaps it is, actually, resonance, but just resonance refused. (This kind of ambiguous resonance might help us formulate a fuller description of resonance as a set of dynamic event/s, rather than a static 'thing').

Several commentators - on the chaos of Brexit, Trump, and the rise of populism more widely in Europe, not to mention the current conflict between President Zuma and Finance Minister Gordhan, and the ironic 'assimilation' between the class politics and practices of the National Party under Apartheid, and the ANC under Zuma, in South Africa - have said that the world seems to be going topsy-turvy.

Perhaps these are all just slips of the cosmic hand, captured so well in the poet Adam Small's comment on the plight of the so-called 'coloured' people under Apartheid: "die Here het gaskommel, en die dice het verkeerd geval vi’ ons, daai’s maar al" (tr: The Lord shook the dice, and the dice fell skew for us - that's all there is to it).

Maybe we should grasp this kind of cosmic crap game as a post-post-modern trope in place of the more traditional 'revolutions' of history - closing or opening up spaces for people to move/not move into, under popular - and counter-popular - banners/viral movements. This would be a different 'riff' on resonance, namely crap-game resonance (depending on where you stand on a particular issue).
Solomon Tshekisho Plaatje

It makes me think of some of the ‘half-spaces’ - the failed absences (in semiotic terms) of history - e.g. the early twentieth century and Sol Plaatje's attempted ‘assimilation’ into South African politics - he edited two newspapers, and was a founder member of the ANC, amongst many other achievements.

.. and the Jewish ‘assimilation' in Weimar Germany between the Two World Slaughters (aka WWI and II) - both attempts at assimilation got squashed by Fascism.

These are in contrast to Gandhi's counter-intuitive (from the West's point of view) political approach, and attire, in his negotiations with the British Empire. Was Gandhi non-assimilationist and non-confrontationalist, or did his non-violence just dissonate, and undercut, the violence of Empire?

Gandhi in England 1931

30/9/16 (RW)
Post-truth: I feel therefore I am?

Truth or feelings?

Meredith Kercher

Amanda Knox

The Directors

The term “post-truth” has been cropping up regularly of late, and I have been irritated by it more than anything else; I failed to see what on earth it could mean. Today I read about the new Netflix documentary The making of a she-devil, about the Meredith Kercher/Amanda Knox saga, and it’s all starting to make sense.

McGinn (one of the directors of the film, see right) says the trial/s of Amanda Knox “was one of the first great examples of [the] post-truth age we now live in, where people respond to something primarily by how it makes them feel".

'Post-truth’, then, is when you temporarily (or permanently?) leave your brain at the door, and engage with the world through your feelings (and your Twitter feed). That’s weird, and even rather frightening, but it’s also interesting.

It’s important, because it explains how people increasingly engage (and ‘enrage’) with the world through the fog of social media, and as a result fracture into warring ‘tribes’, bizarre narratives (e.g. the Birther [add link] narrative about Obama), and pop-up diaspora [add link]. (Eat your heart out Lyotard – he who proclaimed the end of the Grand Narrative). As Blackhurst (the other director of the documentary) says, “we realised this story was about the changing landscape of journalism itself. In 2007, we saw the beginning of Facebook and the rise of digital media, [and] the takeover of the 24-hour news cycle. That gave rise to clickbait, of stories being driven by headlines rather than reporting in the traditional sense of the word.” (Which is a good example of convenience data, see here)

However, there’s more to it, as social evolution has never been linear or straightforward. To begin with, the previous eras – triumphant twentieth century modernism in particular (and even the sometimes over-intellectualised post-modernism) – were over-cognitive, and just replaced class (in the UK at least) with erudition and high rationality, and a decidedly written, 'straight', linear culture (even in radio).

Television (and more imaginative forms of radio) started to change that, into the 1960s and onwards, but the legacy of the overwhelming authority of the written text continues to structure many aspects of society, as it still does in most universities.

The internet, social media, e-learning, and open collaborative learning (from the development of Linux to small scale blogs and wikis, arXiv & researchgate, and the brief bloom in really open MOOCs), brought about new communities, networks, groups, and instant ‘followings’ in which ideas and feelings happily rubbed shoulders, and flowed freely (at little or no cost). Serious academic, intellectual and professional networks developed a healthy mix of ideas, feelings, formal and informal aspects of people’s ideas and everyday lives, which was a welcome (and even inspiring) change for the better.

However … once the genie was let out of the bottle, parts of the internet became ‘the world’s greatest sewer’. The next swing of the pendulum between feelings/rage versus ideas/collaboration (or enragement versus engagement) is long overdue. Part of the problem is that total ‘openness’, like total censorship, can be self-defeating, and leads neither to sustainable adaptation nor to useful social evolution.

Openness can facilitate innovation and learning on the ‘edge of chaos’. But without any constraints, it often tips into noise, and kills off communication and any chances of collaboration. (See Paul Cilliers)

So ... responding to something “primarily by how it makes you feel” is perfectly legitimate, as long as you avoid the delusional comfort of a community of ‘nice warm (or hot and angry) feelings’, and you don’t leave your brain at the door.

The absurdity of the term post-truth, in fact, says it all.

24/9/16 (RW)
Sniffing out Difference

Eva Wiseman writes, of women (not men?): "All our long lives we've trained ourselves to smell out difference, and attack it, however sneakily", and "As a reaction to the cult of perfectionism ... there's a huge liberation in realising that everybody struggles, all the time. But by classifying and dividing our peers into good and bad [mums] ... we're doing ourselves a disservice. It's fine to be bad, but it's not worse to be good."

The issue is that wrestling with identity as a woman / mum is an ongoing (and not very satisfactory) struggle between identity and difference, and where you fit in between 'us' and 'them'. The dialectic doesn't end, and Deleuze can't settle the matter that easily - unless he just wants to be provocative, which has (had) its uses (see below).

22/9/16 (RW)

Identity and difference (machines)

In response to a discussion with Matthias, some thoughts on abstractions, identities, and difference machines, and the question he raised about Deleuze, namely that:

"Traditionally, difference is seen as derivative from identity .. on the contrary, Deleuze claims that all identities are effects of difference.”

Difference machines …

I tend to take a ‘sociogenesis’ approach - in which the fundamental question is: How can (human) society have been generated - from primates - during a long process of evolution, and how can language have been generated, which functions beyond the levels of articulation of zoo-semiotics - i.e. the way other animals, (including primates) use gesture, sound, body language, etc, to differentiate and implement ‘uses’/functions - within their animal communities?

In part this is an extensive ‘thought experiment’ but in part it is based on empirical (and even experimental) comparative studies of human and primate growth - as individuals, and also within their communities - separately and in human/primate mixes (there are several studies of primates raised alongside human babies of the same age, in human families).

So …

My basic view is that ‘discourse’ (in Foucault’s sense: i.e. the practices of organising texts, bodies and things - human, animate and inanimate (i.e. non-biological, material) - within a [discourse] community) comes first. In other words, the way primates (for instance) organise hierarchy, privileges, power, affection, etc. always comes first, and the ‘finer resolution’ differences that they make - gestures, sounds, etc. - come second: these sign systems are a way for the ‘discourses' - of community, power, affection, solidarity (etc.) - to ‘reach down' into these finer layers of differences, to make more (and more detailed), and in the end even more ‘abstract’, differences.

Humans just happen to have a much finer set of differences (language - spoken, then written, then mathematics, then computer code) into which these social discourses can reach, to articulate finer (and richer, and [sometimes] more ‘open’, more ambiguous) distinctions.

So I would agree that the way humans (and primates) create, articulate, and police their identities is by creating differences in their practice and their communities. They don’t first create identities in some ‘inner space’ and then articulate them in the differences of practice - the two are interlinked, and identity emerges from the practice of interaction with the material and social worlds.

Which means that rather than thinking about identity and difference as opposed, and somehow ‘separate’, fields of activity, I would rather ask the question in quite a different way, and say that identity is a practice of social difference but at the same time a process of social micro-cohesion and (generally) broader exclusion - i.e. a material and a semiotic practice of making physical, communicative and social differences - between an 'us' and a 'them' (or more than one 'them').

Of course it is also possible for (humans, via language) to develop and articulate their identities as ‘separate’, or ‘abstracted’ from their practices, and to defend these abstract identities to the death - even in the light of material evidence that they are no longer viable.

And at a more abstract level, it is, again, also possible for humans to construct and create identities which are inclusive (e.g. the extensive history of the practice/campaign of/for 'human rights') for all humans - but then you just end up in the same old paradox - there is no identity without difference (which is just a variation on the theme: there is no meaning without difference). In this case the difference shifts from being different from other humans, and communities, to being different from other animals, and species - and it then raises the question: do primates, dolphins, etc (add to taste ...) have rights too?

Perhaps the whole 'rights' discourse is flawed to begin with?

Gandhi offers an alternative approach to Human Rights: when approached by the UN committee drafting the 'Universal' declaration of human rights, Gandhi said that it was a very bad idea, because humans have universal obligations, not rights, and it's only when they start fulfilling those obligations that we can start talking about rights at all. (And extending our obligations to other members (and parts) of our eco-system is much easier to think through than trying to reach agreement on which rights 'we' should 'allow' dolphins (etc.) to have).

The principle of ahimsa is a good place to start: i.e. the idea that all life has a spark of the divine in it, and must be respected accordingly - which informed Gandhi's practice - as a karma yogi.

So ... all semiotics is a question of difference machines (and varying levels of resolution, richness, ambiguity and suturing - or 'closing down' - meanings and use), and all identity is derived from/within the practice of material, and semiotic, and social differences. Whether identity is then abstracted (and reified - in the Marxian sense) and given a ‘life' and a status of its own, is a subsequent question - about these ‘practices of abstraction’; abstractions which are used to frame, energise, limit, and even oppose 'other' identities and practices.

18/9/16 JM
This relates to the text on your Abstractions page

I wasn't sure about where you would prefer edits, but since this page has an invitation I thought it best to put it here. On reading the Abstractions page I was reminded of Halliday's work on nominalisation and wondered whether it is relevant. I leave it for you to decide.

6/9/16 JM
The second part of McGilchrist's book The Master and his Emissary is devoted to showing how there has been a succession of shifts of balance (in the Western world) between the left and right hemispheres over the past 2000 years. McGilchrist takes us through the cultural shifts that occur in the Ancient World, the Renaissance and the Reformation, the Enlightenment, Romanticism and the Industrial Revolution, and the Modern and Post-Modern Worlds. On the course he told us that civilization has risen and fallen three times, with each fall being associated with a lack of balance between left and right hemispheres and left hemisphere dominance. He sees us as being on the verge of another dark age and wonders whether the left hemisphere is now so dominant as to make the situation irretrievable. But he says he is a 'hopeful pessimist'.

You have written "For as you strip out context, you inevitably strip out responsibility too." I think this is true, and relates to the keynote that Josie Fraser gave this morning to the ALT-C 2016 conference, where she was talking about 'trolls' online and our responsibilities. It seems to me that the internet has stripped out embodied context, making it easy not to take responsibility. According to McGilchrist there is not enough embodied experience in our lives, and we have made/are making our bodies into machines.

I think McGilchrist would agree that a 'hurry up' is needed and he says the starting point is the education of children. He has a couple of new books and a documentary coming out soon - so hopefully he will say more about it then.

5/9/16 RW
Divided brains, divided societies
Had a look at one of McGilchrist's videos.

It's very striking that the Left/Right brain divide coincides exactly with the complicated / complex divide (or linear / complex systems divide), and also explains why the 'left brain' is so strong in providing 'arguments' to prove that it is the only game in town - which is why the 'left-brain' world cannot countenance the 'ordered yet unpredictable' world of complex-adaptive systems (literally, of course, is the joke!).

How can it be that we are the product of (complex-adaptive) evolution, yet we can happily deny that complexity exists?

29/8/16 RW
Discipline and the left-brain (functions)
Jenny Mackness (28th August 2016) wrote a fascinating account of a seminar of Iain McGilchrist's.

This is my response ...

Jenny, I share his, and your (?) misgivings about a left-brain-thinking world.

But it's where we are. We have 'progressed' through an age of 'reason' (or 'rationality', more accurately) to a point where we have opened the Pandora's box of scientific (abstract, context-less) tools (see Susan Blackmore on our 'pandoran civilisation').

These tools have been stripped of context, and anyone can use them for anything they please. That's great for sharing, but it's a disaster for responsibility. For as you strip out context, you inevitably strip out responsibility too. So, given that we have let these tools, these currencies (atomic power, genomics, nano-engineering, etc) out of the box, into the hands of God knows who, we have to create a human context to contain 'them'.

But it is no use trying to contain the tools, or your hemisphere of choice - the tools have already taken on a life of their own, and can't (like Sauron's ring) be cast into a fire and destroyed. "Them" means us.

There is no point in being pessimistic or depressed about where we are - we have let these tools loose, once and (unfortunately) for all - all ways and all people, and, for McGilchrist's line of thought, all time, too. It's up to us to decide what 'us' is, and to focus our energies there. We may not have much (human) time left, so there is a bit of 'hurry up' involved, too.

26/8/16 RW
Practical Epistemology ?

There are many ways to describe abstraction. Matthias Melcher points out that describing abstraction as a process of ‘stripping away’ (subjectivity and context) might be a bit too strong (see his blog post on Recognising, below). And Stephen Downes, in responding to Matthias, and addressing some related issues, questions the very idea of cognitive states, and endorses eliminative materialism instead, writing that “there are no representations, mental contents, etc., and that cognitive processes are not linguistic, logical or computational processes. See eliminative materialism”.

I am not convinced that searching for brain / hemispherical states (as in 'right brain / left brain' theories), or trying to prove that there are no 'mental contents', is a useful way to proceed. I find Matthias's interpretation of McGilchrist's views very helpful, viz. that what is important is to distinguish between right and left brain area functions as functions rather than hemispherical states. (And this supports the broadly accepted idea of the substantial plasticity of the brain, too.)

Instead, I am interested in pursuing a practical epistemology, an epistemology of use, based on Barthes' (and Wittgenstein's) view that signs and meaning (and therefore knowledge) are crucially about use, i.e. about the way we create, maintain, and employ signs in our everyday (material) lives. Whether such uses of signs correspond to areas (and even states?) of the brain is another matter, although, of course, it can be useful to explore the neurology of 'sensory' and 'motor' functions of areas of the brain, or, even better, J. J. Gibson's more synthetic notion of perceptual action and affordances, as well as Ramachandran's views on the evolution of particular parts of the brain and their role in modality-based abstraction (see abstraction #1 below).

I prefer to see signs quite simply as tools - albeit sophisticated tools and even reflective tools, but tools nevertheless, which link back (in evolution and in sociogenesis) to tool-making and tool-using - i.e. right back through primate- and zoo-semiotics.

On this basis, we can make some progress if we simply look at the use of signs in science (for instance) and describe in some detail how it is that ‘technical’ scientific terms arise, in practice - some terms are completely new terms, some are borrowed from everyday language and repurposed in their ‘scientific’ sense, or more particularly, for their scientific use. And this process is a rigorous process of stripping out the everyday richness and ambiguity of signs, which is specific to a multitude of overlapping individual, psychological, familial, tribal, cultural, historical (etc) contexts, and leaving only those uses (or meanings) that are ‘scientific’, i.e. signs which are open to use by anyone, anywhere, any time, in the same way (this can also be seen as the ideal thread in science, towards 'universal' laws).

And these scientific signs can then be combined, or used in concert with each other, in algorithms, texts, programmes, theories, etc, with similar exemption from any reference to non-scientific criteria, or to particular people, places or times. They are, simply, scientific (and technical) currency, which is based on its 'internal consistency', or what we might call internal resonance, or an additional layer of cumulative resonance, which gives science its immense (and sometimes frightening) power.

Such signs and texts are, literally, abstracted from subjects, agents, institutions, cultures, and eras, etc, and they are paradoxically constructed very narrowly to perform (or help us to perform) only very precise functions, but in the broadest possible range of contexts (subject, of course, to the radical contingency, and the falsifiability criteria, of all scientific knowledge), and the possibility of being combined in some way with the broadest possible range of other 'scientific' signs, algorithms and texts.

Currency and Athenian science .... (WIP)

24/8/16 RW
Abstraction and Subtraction?

In response to Matthias's posting on his blog, on

Recognising (and neurobollocks)

Some initial responses ...

1. Abstraction - Stephen's idea of 'subtraction' has some resonance with the semiotics of knowledge, and particularly with 'meta-semiotic' knowledge (see the article on Meta-semiotics and practical epistemology, on ResearchGate, at: ...), which defines science as 'subject- and context-stripped' propositions and texts.

That looks like a case of 'subtraction', but it's more and less than that. It's based on a more general process of patterns of use (rather than of 'recognition' per se) - so the starting point is fundamentally semiotic, i.e. Barthes's idea that "every use becomes [or can become] a sign of itself". The process of 'abstraction' is then, fundamentally, a process of stripping away some uses to create new uses, rather than stripping away perceptions. ('Nested boxes' of uses within uses or 'disciplines' might work here too, no? This is a kind of 'pragmatics of abstraction', no?)

And to make matters more complicated, this resonates with a notion of 'perceptive action' or affordance, a la Gibson - i.e. the idea that the distinction between 'perception'/'recognition' on the one hand, and 'use' or 'action' on the other hand is at least not very helpful, and at most just a (negative) legacy-distraction from pre-ecological (or 'primitive' - in the literal sense of the term) psychology.

To get back to the 'stripping' - [add: McGilchrist's example of sculpture ...] - what happens, for instance, in the first uses of the word 'motion' in physics is that physicists (Newton, in particular) appropriated a word in everyday use ('motion') and then stripped and pared it down to the scientific term 'motion', which no longer depends on, nor is it linked, networked, associated, or contaminated (etc) by, a particular subject or context (person, agent, institution, culture). In the process, it becomes 'pure [intellectual] currency' - it has been stripped of subjectivity and context so thoroughly that anyone (or any agent) anywhere can use it at any time. This is (of course) an extrapolation of the ideal notion of monetary currency, which would (if it existed) similarly have 'universal' exchange value. (The problem with money, as we all know, is that exchange value seldom, if ever, translates to universal use value - whatever that would be. We would at least have to make a distinction between 'use-ability' and 'use value': the former in principle context-free, and the latter context-dependent.)

So ... 'abstraction' in this particular sense - (there are several others, see the posting on 11/8/16, below) - is paradoxically based on narrowing down a particular (scientific, meta-semiotic) use of the word 'motion', but at the same time broadening out the context of use/rs to the universe of all possible agents (or actants, in Latour's sense). And on top of that is another codicil (or meta-paradox?), i.e. the falsifiability, or contingency of all scientific knowledge - or the idea that scientific knowledge is only good for as long as it doesn't get refuted.

And to make matters even worse (or better, if you think it's more interesting) this process of 'universalised' scientific knowledge (e.g. the genome of anthrax) removes, or at the very least 'uncouples' this kind of knowledge from responsibility. So ... is this kind of abstraction a process of creating (potentially) 'rogue' abstractions, and what kind of a mixed metaphor is that?

Which leaves me to think that 'abstraction' itself is a useful example of how abstractions (plural) are actually a 'resonant family' of tropes of stripping out, or stripping away, or shifting above, shifting across (add to taste ...) ... relative to: modes, tropes, subjects, contexts, eras, 'paradigms' (add to taste here too ... ).

And, Matthias ... it might be interesting to explore what role/s the Master &/or Emissary might have in creating, implementing, and challenging some of these 'abstractions' and responsibilities. (I hope to get to some of those issues in comment #2 later, much later ...)

11/8/16 RW
How does Abstraction relate to Knowledge?
Resonant knowledge/s covers a large range of types of knowledge, which are based on a range of levels and types of abstraction. So, it might be useful at some point to unpack the range and types of abstraction, here ...

We start with a working definition of knowledge as the capacity for effective action (add link ...), and we also start with the foundations of semiotics: the idea that "every use becomes a sign of itself" (Barthes) (add link), or perhaps more specifically: 'every use can become a sign of itself'. So both definitions are grounded in pragmatics, although they very quickly develop in other directions too - epistemological, ontological, evolutionary psychology, etc, all of which are part of the broader field of socio-genesis.

In a very real sense, as signs become more complicated, and more abstract, they 'lift off' from their material, sensory, and pragmatic origins. (This is similar to Marx's analysis of the way exchange value is 'lifted off' from use value) (add link), and it also links to Rossi-Landi's definition of 'work' as 'appetite held in check', (add link) or as other writers might say, 'deferred desire', placing time (and history) at the foundations of semiotics.

So, working (or playing) with these notions, we can identify about nine different types of abstraction, although they overlap to some extent.

Types of abstraction

1. Modal (i.e. sensory modalities: auditory, kinesthetic, visual, etc), to cross-modal (or patterns of patterns across sensory modes), to a-modal or modality-free (see Ramachandran's work and the examples in the Synaesthesia paper) (add links ... )

2. Interchangeable, token, or transactional signs: signs which have 'no' intrinsic meaning of their own. For example, 'I' and 'you' which are most commonly used as 'tokens' are 'passed' around conversations from one person to another, and signify different people, depending not on use per se, but on who is using that token at a particular time and place.

3. Signs which are, inherently, more abstract. Signs of number, for example. (This is perhaps a subset of the next type, 4)

4. Signs which are more complicated, as they become texts. These progress from gestures and words (roughly 'indexical' signs) to sentences, to paragraphs, to articles and books, etc, as their use moves from 'indication' to 'predication' - for example, the shift from an instruction to my dog to 'go outside' to a statement to my child that 'I would like you to go outside'.

As signs progress from indication to predication, to full blown texts, they gain the potential to be used across contexts - used in a different time, place, language, and era, although they do not always 'translate' that well. These sets-of-signs or texts are cross-contextual, in a similar way to the idea of cross-modal abstraction (see type 1). There is an additional layer to the analysis of this particular modality, which is Blackmore's gene / meme / teme analysis of our 'Pandoran' (and 'viral') society (add links). So in a variety of ways, signs, texts, and memes/temes take on a 'life of their own', independent of their original production, just as RNA / DNA / genes do in biology.

5. Context-free
Signs like 'time' and 'motion' are used (and thus have meaning) in everyday language, as well as in physics. In physics they have been highly formalised, resulting in much narrower meanings (and uses), which are stripped of subjectivity and of context. Here the subjectivity that is at the heart of 'predication', personal pronouns, and identity is set aside in the interests of a very different type of semiosis. This is deliberate, and methodological - not contingent or serendipitous (as is the case with cross-contextual signs and texts; see type 4, above).

The result is a set of signs that is inherently abstract - its pragmatics is context-free (which is very paradoxical), as these signs and texts are constructed so that they can be used by anyone, anywhere, in any language, in any era - which makes them much more powerful, as well as potentially much more dangerous (e.g. the publication of the genome for anthrax). They are also, ironically, inherently falsifiable, which is the foundation of their (contingent) 'truth'. They can be seen as the set of meta-semiotic signs and texts (add links ...), a set which to various degrees share the essential characteristic of being context-free, i.e. the set comprising science, mathematics, bureaucracy, money, and representative democracy.

6. Complex-adaptive systems
Crucially, these systems are ordered yet unpredictable, although they do require (negative) constraints. (add detail ... )

7. Opaque
These signs are too complicated to interrogate - they operate at a scale (size, number, resolution, speed) which is useful - they can run complicated systems and perform very complicated actions - but it is impossible to interrogate their workings, as they work too fast for the human brain / interface. The four-colour map problem is a case in point, as are current versions of machine 'language' interaction with humans.
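As a purely illustrative sketch (this is not the actual Appel-Haken proof, which machine-checked nearly 2,000 configurations; the toy map and function names below are my own invention), here is a minimal four-colourer for a tiny map. Even here, the answer is computed by search rather than reasoned out; at realistic scales, its 'workings' are exactly the kind of thing we cannot interrogate.

```python
# Illustrative only: a brute-force four-colouring of a toy 'map'.
# The real four-colour theorem was proved with massive machine
# case-checking - opaque to direct human inspection.

def four_colour(adjacency):
    """Assign one of 4 colours to each region so that no two
    adjacent regions share a colour, by backtracking search."""
    regions = list(adjacency)
    colours = {}

    def ok(region, colour):
        # A colour is usable if no already-coloured neighbour has it.
        return all(colours.get(nb) != colour for nb in adjacency[region])

    def solve(i):
        if i == len(regions):
            return True
        region = regions[i]
        for colour in range(4):
            if ok(region, colour):
                colours[region] = colour
                if solve(i + 1):
                    return True
                del colours[region]  # backtrack
        return False

    return colours if solve(0) else None

# A hypothetical map: regions and the regions they border.
MAP = {
    "A": ["B", "C", "D"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["A", "C"],
}

if __name__ == "__main__":
    print(four_colour(MAP))
```

The point is not the code itself, but that the 'sign' it produces (a valid colouring, or a proof built from millions of such checks) is usable without being humanly interrogable.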

8. Viral / collaborative / cooperative
Within the overall description of knowledge as the capacity for effective action, and meaning as use (See Wittgenstein) the description of knowledge/s is somewhat dependent on structure (e.g. types of abstraction), and somewhat dependent on user-group, or discourse community (add links - Text and Discourse, Foucault, etc). Discourse communities can be highly informal (and random, even including avatars, nowadays) right through to the highly formal. The more informal they are, the more they can be viral, and based on emotional rather than cognitive criteria. On the other hand, uses too can be viral (see Blackmore's examples of memes and - in their more technically embedded form - temes (add link), and therefore more scientific.

9. Addictions
There are a number of addictions that would appear to be yet another category of abstractions. See here ...

The point about addictions is that they draw your attention and energy away from the messy materiality, and interconnectedness, and complexity, of daily life. Abstractions can be good and bad news, as they provide stronger, more deft, higher resolution, more exchangeable (etc) semiotic machines, systems of difference, and algorithms and code - for synthesis into further abstractions and programmes - all of which, in plain language, are simply tools for description and use.

So we can probably describe addictions as i) a type of abstraction (type #9, provisionally), and ii) as mal-formed abstractions - perhaps for the simple reason that their 'use' eats itself up, and they are not (like other abstractions which provide - or aspire to provide - universal currency, and even universal recombinant currency) sustainable. In the world of abstraction types, addictions deliver only malnutrition, no?

And, as unsustainable forms - of malnutrition - they contravene Monbiot's (and many others') dictum that it is "by living lightly, we enrich our lives".

6/8/16 RW
Note: Here, as elsewhere in the Resonant Knowledge hypertext, 'chaos' will be used in the everyday sense to mean 'an unregulated mess' (and not in the technical sense of 'chaos theory'). On the other hand, 'complexity' will be used in the technical sense (i.e. as in 'CAST' - see below), and not in the everyday sense (i.e. to just mean 'complicated'). And ... apologies for the confusion, as it is the case that 'chaos theory' and 'complexity theory' are very similar, if not interchangeable. Where possible, I will use the term 'complex-adaptive' instead of just 'complex', which might help a bit.

Open or complex systems, or both?
Having worked quite happily, for some years, on open learning and open learning systems, it strikes me that it is high time to move on. What we now need (to make sense of theoretical and practical semiosis) is a distinction between chaos (in the everyday sense) on the one hand and, on the other hand, self-organising systems - i.e. recognisable systems, which are, however, flexible and adaptive, and open to ambiguity - or, in more technical terms, they are complex adaptive systems, in the sense of CAST: complex adaptive systems theory, which means that they are, crucially, ordered yet unpredictable - which, is most usefully unpacked by people like the late Paul Cilliers.

And see here for some fascinating examples of self-organising, complex-adaptive systems in practice (on the lab bench, but still ... )

So, for instance ...
Why is Wikipedia a complex adaptive system, while Twitter is not?

This question, like so many others, has no clear or absolute answer. For starters, Wikipedia might have started as a potentially chaotic system, which only became a complex-adaptive system when they sorted out their protocols for editors (and managing editors), after which it changed from (somewhat) 'happy chaos' into a self-regulating, and therefore complex adaptive, system.

In effect, what they did was to put procedures in place to effectively manage (and sanction) trolls - in some cases within minutes (or less).

Twitter, on the other hand, is still trying to develop and implement anti-trolling protocols, so it remains chaotic, and still merits Tim Garton Ash's complaint that it is 'history's greatest sewer'. What Garton Ash does not necessarily see, though, is that Twitter is, potentially, a (paradigm case of a) complex adaptive system, and that the question (as Cilliers points out in some detail) is just a matter of managing, and constantly tweaking or adapting, the degrees of freedom that are tolerated.

Cilliers points out that at zero degrees of freedom, a system is neither complex nor adaptive, and certainly not 'open' in any useful sense of the word. On the other hand, with unlimited degrees of freedom, it will sooner or later turn into chaos. So to set up a complex-adaptive system, it needs to find the balance between the two - definitely with substantial degrees of freedom, and at times close to the edge of chaos, but always with some constraints.

It should, at least, have:
  • a large number of interacting agents, communicating frequently,
  • within reasonable degrees of freedom,
  • with, preferably, a small number of simple rules.
The effect of the large numbers is both that the system can be self-regulating and adaptive (if protocols for monitoring and tweaking are in place), and that no single agent can know what is going on in the whole system: it is ordered yet unpredictable. As chaos theory tells us, if you want to know what is going to happen, you have to put your hands behind your back, and run it, and see ...
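These conditions - many interacting agents, limited degrees of freedom, a small number of simple rules - can be sketched with a toy simulation. The example below is purely illustrative (it is not from Cilliers or McGilchrist): Wolfram's 'Rule 30' elementary cellular automaton, in which many simple agents (cells) each interact only with their neighbours under one simple rule, yet the global pattern is ordered yet unpredictable - you really do have to run it and see.

```python
# A toy 'ordered yet unpredictable' system: Wolfram's Rule 30,
# an elementary cellular automaton. Many cells (agents) interact
# locally under one simple rule, within fixed constraints, yet the
# global pattern resists prediction.

def rule30_step(cells):
    """Apply Rule 30 to a row of 0/1 cells (edges wrap around):
    new cell = left XOR (centre OR right)."""
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    ]

def run(width=31, steps=15):
    """Start from a single 'seed' agent and let the system evolve."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = rule30_step(row)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

The rule is as simple as rules get, and every agent obeys it; still, the only way to know the state at step 100 is to compute steps 1 through 99 - "put your hands behind your back, and run it, and see".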

So ... this line of questioning seems to be leading to the idea that completely 'open' systems are potentially hazardous, unless they are also complex-adaptive, and hence self-regulating (though still 'managed' in the sense of constant light-touch monitoring and tweaking). In which case calling a system, or a programme of learning, or a social network 'open' is no longer adequate, and could even be quite misleading.

Examples of CAST in:

MEDIATE: Synchronicity or resonance?
The case of Mr Purple in MEDIATE is an interesting one. He is a boy on the autism spectrum, who engages with the MEDIATE interactive 'room' (with his body) and finds a way to successfully manipulate the playful 3D environment so that it produces the colour purple, and does so repeatedly. The MEDIATE interactive room involves no text, and the people who use it are given no instructions. When the boy is later asked why he focused in on a colour (instead of a shape, or sound, or movement) - and why the colour purple - he said he just really liked the colour. When asked what he would like to do with the colour, he said he would like his room to be purple, so it was duly painted purple. From that day onwards he slept through the night and stopped sleepwalking, for the first time in several years.

So, what's going on here? This could be a case of synchronicity, but it probably makes more sense to talk of this as resonance (and resonant knowledge) - albeit intuitive knowledge, not 'cognitive' knowledge. In other words, his sensibility and the colour resonated in some way, and he felt good, or happy, in it.

This looks like an interesting case of intuitive resonant knowledge, which we will provisionally use as an exemplar, an indicator, of this particular mode or trope of resonant knowledge.

So we might start off our taxonomy of resonant knowledges by listing this first one:

Taxonomy of Resonant Knowledges
1. Intuitive (see Mr Purple, in MEDIATE) (WIP)
2. Synchronized / context-free / context-independent?

Communities or Tribes, or both?
Meaning is always situated within cultures, sub-cultures, or the disparate post-modern 'tribes' that McLuhan correctly predicted, in 1969, we would be living in, within our global village. So what are these cultures, and what is the nature of these tribes? In broad terms: are they ideological, epistemological, familial, ontological, or (also) meta-semiotic? (i.e. framed and motivated by political belief, by the criteria of formal knowledge, by 'family-like' relationships, by ways of being, &/or by their abstraction and decontextualisation, respectively?)

It must surely depend on the different types of tribes: for instance, if it is the 'tribe' of MEDIATE-users, it would be ontological.

Work in progress ...

Why do we need 'complexity'?

Complex-adaptive systems: complex-adaptive v. linear systems, positive and negative constraints.

What has resolution got to do with it? ... at macro and micro/nano scale, in large numbers, and at high speeds (see: the four-colour map problem; computer/human 'language' interaction and creating questions that can't be asked; the effect of nano-materials as bactericides; bacteria as gene-code incubators (along with viruses)).