This is a page to explore questions about resonance. This can be as broad as your imagination allows - so see what you can come up with.

If you can, please add entries blog-style - i.e. add to the top, and write the date and your name / initials (I will use RW; JM = Jenny Mackness, MM = Matthias Melcher, RD = Richard Devon), so ...

5/6/18 Commoditising and Sublimating Anger
Political governance, in the era of 'democracy', is all about commoditising and sublimating anger. The trick is to convert resonance between particular points of view into the resolution of representation, and declare a workable truce for 4, 5, or so years.

But early millennial democracy is fractured and angry. Very angry - on the right, and (with Occupy, Five Star in Italy, etc.) on the left. There could be many possible threads to this problem, such as: i) the failure to sublimate, commoditise and transform anger through the number-crunching mills of representative democracy; ii) the failure to conserve the status quo ante (a largely innocuous see-saw between the binary choices of two political parties of the establishment); iii) WIP

5/6/18 Which way up is post/modernism?
Post-modernism has always been deliciously confused and ironic. The following Wikipedia entry confirms all that, in spades ...

  • "Postmodernity (post-modernity or the postmodern condition) is the economic or cultural state or condition of society which is said to exist after modernity. Some schools of thought hold that modernity ended in the late 20th century – in the 1980s or early 1990s – and that it was replaced by postmodernity, while others would extend modernity to cover the developments denoted by postmodernity, while some believe that modernity ended after World War II. The idea of the post-modern condition is sometimes characterised as a culture stripped of its capacity to function in any linear or autonomous state as opposed to the progressive mindstate of Modernism.[1] [Emphasis added].
  • Postmodernity can mean a personal response to a postmodern society, the conditions in a society which make it postmodern or the state of being that is associated with a postmodern society as well a historical epoch. In most contexts it should be distinguished from postmodernism, the adoption of postmodern philosophies or traits in art, literature, culture and society. In fact, today, historical perspectives on the developments of postmodern art (postmodernism) and postmodern society (postmodernity) can be best described as two umbrella terms for processes engaged in an ongoing dialectical relationship, the result of which is the evolving world in which we now live".

To wit ...
Modernism was about experimentation, and breaking with the strictures of the past, and for many of the artists of the early 20th C (and beyond), their work moved deliberately into multiple and fractured perspectives. So it is just so ironic that modernism is described as a 'progressive mindstate' confined to linearity (aka progress). And the value judgement in the description of the post-modern condition (?) as "a culture stripped of its capacity to function in any linear or autonomous state ..." is revealing and self-ironic. Wow. And the final section is, at best, confused.

3/6/18 The Revenge against Post-Modernism
There are many people (Jordan Peterson, for example) who still, 40 or 50 years on, resent post-modernism, and want to get their revenge. But modernism is dead (see elsewhere), so their only possible response is a reductionist version of connectivism, in a desperate (and even, clinically, hysterical) attempt to displace the emergence of a constructive ecological / complex-adaptive networks approach with an anything-goes approach, in which every failed actor (even POTUS) can become a ham performance artist, particularly on a perverted version of Twitter, where they don't even have to complete their sentences. tbc ...

8/5/18 Resonances Trump Style
New vocabularies are required for new systems of difference. We need to use concepts like channelling to describe Trump channelling the memes, the social algorithms: of the NRA last week on gun controls, and of Netanyahu, this week, on nuclear controls. Paradoxically, the boundaries (a rather flattering term, but let's run with it for a while) of Trump's channelling share a disdain for the externalities of feral capitalism and feral science. These discourses / memes / algorithms all overlap, and all hunker down into their own closed (sutured) discourse - a domain in which knowledge is bounded by their own algorithms, and in which a discussion of a broader ethics is not possible.

Ironically, channelling is often done uncritically, thus reducing and narrowing down, rather than increasing, agency - contrary to appearances.

6/5/18 Resonance, narratives and histories
If semiotics is resonant, which it is, it follows several paths - at the one extreme, anecdotal narratives, and at the other extreme, a-historical formalisms. Of course, a-historical is not even a paradox, it's an oxymoron. But that doesn't stop some things (like 'objective science', not to mention 'second amendments to the Constitution', bureaucracy, and open financial markets - aka 'free' markets) from being placed behind a political and cultural fire-wall, and de-historicised, de-contextualised, de-subjectified - for a while. That's until the Anthropocene penny drops, and the myth of 'externalities' is declared dead (even if not yet buried).

5/5/18 Avatars and Algorithms
The distinction is subtle but crucial. We have always had algorithms - from the first signs, to aphorisms, to syllogisms, to canonised texts. They have just become more complex, more autonomous, and more dynamically auto-referential - currently incarnated as autonomous algorithms (AA) (not artificial intelligence, which is just smoke up ...). On the other hand, we also have avatars, which on the one hand are not new, but which may sit on and cross the borderline with AA. However, avatars can be proxies - just proxies, standing in, openly, for some institution or person (and this applies whether they are 'machines', i.e. particularly soft[ware] machines, or not), or for more sequential modes, like academic theses.

The ethical borderline is the point where avatars cross into off-limits AA - i.e. secret, auto-referential code - the dark domain of prostituted geeks, who have sold their algorithmic skills for a sack of bitcoins (even if, like some recent employees of Cambridge Analytica, they protest their abhorrence after the fact, and even after their failure to re-sell their souls to other prospective clients).

19/4/18 Resonance and Resolution

If ... knowledge is the capacity for effective action / use of resources, and
if ... resources (linguistic, social, material, theoretical, algorithmic) are created by the day-to-day build-up of meaning - from the resonances and micro-resolutions of what we use - should we start constructing a typology, and an ontology, not of things, but of the processes of how resonances form, and come to a resolution - or not?

The process of resonance / dissonance can lead to resolution or dissolution - coming together, and to agreement, on the one hand, and parting ways, and coming to blows - weaponised in various ways - on the other. Alongside that is a parallel track, in which both resolution and dissolution can result in silos - echo chambers - with a defined 'inside' and 'outside' domain - resulting in a domain of the Other - externalities and collateral / tolerable consequences / damage. This can apply, rather surprisingly, both to science and to fake news. To wit, the beguiling universalism and objectivity of scientific method and epistemology is no guarantee of an ethical life/style. Once we have resolved what we mean, we still have to decide who gets to use it, and for what purposes. And there are no objective scientific procedures to resolve that dilemma. It's n/a.

Writing as a contribution to cumulative resonance and resolution or to exploration?
As I was thinking about how to write this into a linear (-ish) 'article', two things struck me.

First, it's a mug's game - why should there be one point of entry? Why on earth? It needs to be written with several points of entry - that's the whole point of writing this up as a wiki/blog. So I will be going hybrid for a while, to see if that works, and I will be writing several 'threads' of thought in parallel. In short, I will use what looks like a traditional sequence and structure of embedding (1, 1.1, 1.1.1, 1.1.2, etc.) - but then I will use that again, and again, for different points of entry, starting at point 1 each time. Hopefully I will still get to points 2, 3, etc. - time will tell.

Second, there are several ways to 'set up' the drama of systems of difference - competing and conflicting ideas, concepts, etc. One that has just struck me is the vector that potentially runs through from use to resonance to resolution. Some systems of difference start off as thoughts and actions - loosely defined affordances - on the edge of chaos. Through their use, they gain resonance/s with others, who also try them out and find them useful, till they get established (for a while). Then interesting things start to happen: branches and cul-de-sacs appear (see Aortic Dissection as one of the most shocking physical metaphors of this - in human biology - that I have recently discovered), as well as junctions, mergers and acquisitions, asset stripping, etc.

The results, the resolutions, of these resonances and dissonances - of a range of contesting differences - often end up in a veritable zoo - a typology of chaos, or a chaotic typology.

For instance, the resolution may narrow down, become highly formalised, almost artificially stabilised, stripped of context and subjectivity, and yet, paradoxically, increase exponentially in currency - anyone can use it, anywhere. Chemical equations [avatars? algorithms?] at the heart of aerobic life (as we have come to know and treasure it), like, for instance, the use of carbon dioxide and water to produce the carbon rings of glucose, plus oxygen, are predictable and often even reversible. Two-way predictability. Remarkable.
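For reference, the glucose example can be written out in its standard balanced form (a textbook equation, added here as a worked illustration):

6CO2 + 6H2O → C6H12O6 + 6O2

Run forwards (driven by light energy) this is photosynthesis; run in reverse, it is aerobic respiration - the two-way predictability noted above.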

Then there are our contemporary resolutions (AA - autonomous algorithms) which, on the one hand, are built upon science and logic, and the endeavour to create universal, and universally available, uses / affordances; but along the way, they may loop into a weird dialectic that takes them so far out of common use and common currency that they become impenetrable to all but an ever-shrinking number of key/code holders. The 'inversion' of scientific logic?

The underlying process that produces this kind of resolution - of what started out as rather chaotic, informal resonance - is science. It resolves, slowly and painstakingly, into a use, an affordance, of a stable-state, zero-context (social or individual) super-currency and a universal affordance, which can be used by anyone, anywhere.

At the other end of the spectrum, there are other instances of resonance that resolve quite differently. Trump-ism, probably as a fundamentalist nationalism, is a stunning example of not even edge-of-chaos emergence, but just plain chaos (even though he described himself as a 'very stable genius' - a bit of a paradox there, no?), which has instant, viral (and fast-decaying) currency, in a particularly large/outsider [another paradox] constituency, which is highly contextual.

Question: (WIP) Is it possible for either/both science and fundamentalism to claim to be playing 'the long game'? And for both of them to be mistaken? [weird thought, that] ...

8/4/18 24/7 Food as a concoction of externalities
The NOVA food classification identifies the degree to which foods are nutritious - at one end of the spectrum - or just mimic nutrition (faked foods?) - at the other end. Faked foods contribute to the environment of externalities, or living on junk (as in dangerous drugs), which has become the norm in the weaponized 'food' market many people (particularly the poor) are obliged to live in. The drug war / food war rules, OK? (not).

"A singular feature of NOVA is its identification of ultra-processed food and drink products. These are not modified foods, but formulations mostly of cheap industrial sources of dietary energy and nutrients plus additives, using a series of processes (hence ‘ultra-processed’). All together, they are energy-dense, high in unhealthy types of fat, refined starches, free sugars and salt, and poor sources of protein, dietary fibre and micronutrients. Ultra-processed products are made to be hyper-palatable and attractive, with long shelf-life, and able to be consumed anywhere, any time. Their formulation, presentation and marketing often promote overconsumption... "

So ... we have entered the global junk wars of the Anthropocene era, and the 6th mass extinction / 3rd world slaughter - if Don Trumpo doesn't get us there first.

8/4/18 24/7 The new Tribalism
The point about Globalisation and the Nation State is that neither matches the new fractured, diasporic tribes/families that people actually live in. The 1% (or the 0.1%) live in the globe, and drop in to a range of places for facetime and other pleasures. This has been the case for hundreds of years, in some form or another.

For most people political agency (aka 'voting') is confined to a few minutes every 5 or so years, in a nation state that has little or no relationship to the micro/global tribes they actually inhabit. Most people are just itinerant visitors to nation states - touching base with them from time to time (see the distinction between //digital visitors// and //digital residents//).

8/4/18 Surveillance Capitalism
The naked truth is out. The US and China are the bad-twin mirror versions of the same nightmare - surveillance capitalism - which has hoovered up 2,000,000,000 unsuspecting social media users, and is on track to add another billion Chinese too. Facebook Zucks (sorry, can't resist it).

9/2/18 Graphics, analysis, and health policy
In response to an interesting debate on health policy in Canada (on food, diabetes, salt, sugar, etc) ... , the following:

The Canadian Dietary Guidelines debate is very interesting, for a number of reasons:

1. The challenge to make it evidence based, and presumably to create an agreed evidence base which is untainted by lobbyists, is welcome. And there's a good presentation on the low/high fat debacle.

2. Citing evidence that low salt is not optimal? Maybe so. I’d like to see more argument and more sophisticated epistemological modelling - see below. (And I am still surprised at how far I, personally, have been able to reduce my salt intake, permanently, with only good effects).

3. The same goes for fructose. At an anecdotal level my partner and I, (and large numbers of other people who are ‘juicing’) have achieved immediate and long term health benefits from what is essentially a high-fructose diet (or part of the diet). This includes substantial weight loss, although that’s a byproduct of the exercise, really. And besides, the only analysis I have seen on HFCS (high-fructose corn syrup) in the diet indicates that it actually has a much higher concentration of glucose along with the fructose, so I’ll take a rain check on that for now. I don’t see the accompanying higher glucose being factored out of the evidence in the studies I have looked at.

4. However, the main part of the problem with the challenge (see 1) is that discrete univariate analysis seems to be the basis of the argument in these documents, as well the presentation of the argument divided up into single-issue, discrete, ‘problem areas’.

5. On top of that, even multivariate analyses need to be put into more dynamic models (these may or may not have been used here, but it doesn't look like it to me). For instance, reducing diabetes and obesity to a pancreas/insulin problem is far too simplistic. For starters, insulin is only relevant during the period when you are eating, and immediately afterwards. The liver, thyroid, pituitary, and other tissues, are all in some way responsible for blood-glucose homeostasis at all other times (which is of course the majority of your day). Moreover, the adrenal cortex also has a role in glucose management, not in response to eating, but to stress, and the fight-or-flight response that stress triggers.

This does of course have a secondary level influence on the process, as stress triggers the release of glucose, which triggers insulin production, which triggers hunger, which goes back into the hunger/eating/insulin/fat-storage-hypoglycemia cycle, and to insulin resistance. So high stress lifestyles would appear to have a key role in all this, as well as whether you enjoy (and thrive on / are addicted to the ‘adrenalin rush’ of modern urban life (or not), and even whether high-stress life-styles become part of a required cultural commitment in certain types (and levels) of employment.

This makes univariate intervention (or separate consideration and ‘rule-making’ / guidance, and policy presentation formats) for frameworks for each of the factors, discretely and separately, a bit ridiculous.

Systems thinking (and soft-systems analysis) - and I’m sure many other feedback-loop graphic mapping tools (e.g. these) - should surely be able to be used to convey how an intervention changes the balance in the blood-glucose-homeostasis system, no? Particularly when we must (by now) be able to create an app or two to demonstrate all this interactively.
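As a very rough thought-experiment, the kind of feedback loop described in the entries above (stress releases glucose, glucose triggers insulin, insulin clears glucose) can be prototyped in a few lines. Everything here - variable names, coefficients, the update rule - is an illustrative assumption, not physiology; the point is only that even a toy model makes the multi-factor balance visible in a way a univariate table cannot:

```python
# Toy discrete-time sketch of a glucose/stress/insulin feedback loop.
# All coefficients are illustrative assumptions, not physiological data.

def simulate(steps=24, stress=0.0):
    glucose = 5.0          # arbitrary starting level
    history = []
    for _ in range(steps):
        glucose += stress * 0.5                   # stress releases glucose
        insulin = 0.4 * max(glucose - 5.0, 0.0)   # insulin responds to excess glucose
        glucose -= insulin * 0.8                  # insulin clears glucose (to storage)
        glucose += 0.2                            # baseline intake / liver release
        history.append(round(glucose, 2))
    return history

calm = simulate(stress=0.0)
stressed = simulate(stress=1.0)
# The stressed run settles at a visibly higher glucose level than the calm
# one - a crude illustration of why single-variable interventions can mislead.
```

Swap in an interactive slider for the stress parameter and you have the germ of the 'app or two' suggested above.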

[Aside: interactive apps of this sort could, functionally, be a much better format to create (and contest) analyses of quite a few other epidemiological problems, no?]

So … interesting? ja-nee, or more to the point, yes-but ... A small step for Canada / mankind so far; the interactive multi-factorial graphic modelling app has yet to be delivered (with creative commons licensing please).

And ...
This makes me think that a number of issues that face us (as Anthropocenes) should be faced in the same way, i.e. by putting aside our anachronistic, sequential, text-based arguments and analyses, and, instead, putting our efforts (and focusing our debates) on/into developing apps that are interactive graphic models of networks of co-factors (including animate and inanimate actors) that affect the issues that we face, such as addiction to sugars, obesity, and adrenaline-junkie cultures (not to mention flooding, droughts, the jet stream, ozone, etc.). And while we are about it, how about a class action suit or two against corporations who might (?) knowingly have been promoting 'low-fat' / high-sugar / hyper-processed foods as more 'healthy', for years, in the same way that the tobacco industry promoted nicotine-junkie life-styles?

To return to the basic premise of this project ... if knowledge is the capacity for effective action, and if (many of) the big problems that face us are dynamic, emergent, and multi-factorial, maybe we should seek knowledge, and resonance, in a more appropriate format. And maybe the kind of systems-thinking apps envisaged above are what we need to do that.

The 'logic' of complex-adaptive emergent systems is different from the logical neatness of predictable engineering systems, and the public understanding and commitment to appropriate solutions (which include the public as active, reflexive participants) will not be achieved with more non-interactive, graphic-free texts.

29/1/18 Epistemology v. Ontology (knowing v. being)
  • Note: There are clearly two senses of the word 'ontology' being used in this wiki: the noun, as in a network of differences, to delineate a new field of inquiry and action; and the verb, as in a way of being (as opposed to a way of knowing, or an epistemology), explored on this page. The two do, of course, overlap somewhat; we can use the former to achieve the latter ... (There is also the more formal IT sense of ontology as a scaffold, so perhaps we need to sort all that out ... )

There was a time when metaphysical ontology (life in the hereafter, ever-after, eternal, etc.) dominated society. Epistemology (in the mode of the commodification of information and knowledge) made big inroads on faith, ever since the Greeks, with a big push during the renaissance (and even the reformation), and into modernism. Shooting ourselves in our own feet and stumbling into the Anthropocene era changed all that, as Latour points out (see Gaia). We are now dragging ourselves, as it were, kicking and screaming, back to ontology once again, but a thoroughly secular ontology this time round. A kind of last chance saloon, and so far, it's not pretty.

That's not to say that the epistemology of the sciences and mathematics doesn't matter any more. They remain the best epistemological tools H. Sapiens (aka the 'tool making animal') ever had. It's just that the direct ontological consequences of earlier epistemological developments have displaced and relegated the natural sciences to a useful, if hazardous, and rather feral, technology. Some of the social sciences, on the other hand, are still stuck in physics envy (the yearning for predictability and control) - and the media and the public have yet to even fully embrace post-modernism, and the ontological implications that that entails.

Unfortunately we don't have long to do this, in fact we don't have any time at all to move into the ontological era of the Anthropocene. If we don't, we will just end up as a footnote (a foot-fossil deposit?) in geomorphology. As the ghost of a latter-day J. F. Kennedy might have quipped: "we are all ontologists now" (or bust!)

And that calls for a different kind of resonance. If knowledge is the capacity for effective action, we need a working consensus on our new ontology, ASAP. (Neither Davos nor the UN holds out much hope.) Until then, all sorts of chancers and bruisers (e.g. Presidents of the big three powers, and several random billionaires) are muscling in on the game. On one thing they (and even poor old Michael Gove) are, unfortunately, right - the game has well and truly changed, and it's all to play for.

So it's time to add a new ontological cycle (#4), to add to the history of knowledge process cycles in the diagrams here. Watch this space ...

16/1/18 Physics envy and Gaia
Latour, in The politics of nature (2004) started to assemble a comprehensive 'actor network' including the natural, biological and social sciences. He has now taken that a large step further in his recent book, Facing Gaia to flesh out the redistribution of agency across the human and 'natural' worlds. He comprehensively shows why we are now 'all post-modern' (or pre-modern), i.e. why the domain of 'externalities' is now an empty set, and why even physics, or perhaps especially physics, can no longer operate under the delusion of predictability and objectivity which is stripped of context, subjects, and history. He does this with some help from a close reading of Lovelock's work (in particular). We are now all living within Gaia, and making our unmistakable and unfortunate 'marks' on geomorphology, within our very own Anthropocene era.

The interesting question is where the starting point for this era will finally be pinned down: 1945 (the atomic era)? - or some other mindless 'inscription' of our hubris in geological time: e.g. at the start of global warming, the 6th Great Extinction, the hole in the Ozone layer, etc.

The point is that however much the theoretical method of scientific knowledge (and physics par excellence) can continue to be based on 'objectivity', there are now always externalities. The history of science (Latour's chosen discipline) now embodies the practices and theories of science (not vice versa), and is no longer an external or add-on commentary. We can no longer make any claims for objective practice (with a straight face), only for an objectivist (even if potentially hazardous) method. The horrors of Rachel Carson's Silent Spring have finally taken hold across human 'civilization'. First God died, and now physics envy has (finally) died. It's now up to us ...

13/12/17 Structures, agency and governance
Jenny (see here .... ) wrote: "Am I right in thinking that what you are seeking is universal resonances which would help us answer the question "who decides what acceptable knowledge is, who decides what acceptable action is, and where and how the acceptability (of knowledge and action), is decided?" ...

and ... Peter Preston put the issue of global governance succinctly in the Guardian, 18.12.17:
"... as Mark Malloch Brown glumly observes, 'there is almost no sensible debate about how to manage a runaway world… when the global level has become largely a laissez-faire space that allows those who operate in it to choose which, if any, national jurisdiction they will subject themselves to on what issue'".

There are three paradoxes to resolve here: i) How do we understand, and manage, Knorr-Cetina's micro-global structures? ... the ones that often originate in Foucault's micro-capillaries of power, but can, and often do, spiral out globally? ii) In a fractured, multi-diasporic world, how do we locate / site / validate governance at a global level? All we seem to have at the moment are fractured, insular, pop-up diasporic silos of desire, with increasingly dysfunctional 'national states', in an increasingly complex, open global environment, which is anything but 'social'. iii) How do we deal with exponential increases in agency, which only rarely lead to sustainable, consensual governance structures fit for purpose?

The mobility of global capital/markets and its autonomous and anonymous self-'governance', 'off-shore', however, has never been better.

11/12/17 Whose Risks?
Many if not most doctors are fond of telling patients about risks: "your risk of having a ... [fill in the blank: e.g. heart attack, recurrence of cancer] will be doubled unless you take ... [fill in the blank: e.g. statins, hormone treatment]". What they should be telling you is that the absolute risk (to you) increases from, say, 2% to 4% if you don't accept the next layer of medical intervention - which is indeed double, but it still leaves you with a 96% chance of not being affected in any way. Not bad odds.
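To make the arithmetic concrete, here is the relative/absolute distinction in a few lines of Python (the 2% and 4% figures are the illustrative ones used above):

```python
# Relative vs absolute risk, using the illustrative figures above.

baseline_risk = 0.02   # absolute risk if you accept the intervention
declined_risk = 0.04   # absolute risk if you decline it

relative_increase = declined_risk / baseline_risk   # "your risk doubles"
absolute_increase = declined_risk - baseline_risk   # just 2 percentage points
chance_unaffected = 1 - declined_risk               # still a 96% chance of no event
```

Both statements are true; 'doubled' simply sounds much worse than 'two percentage points'.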

So what is going on here? Are the doctors dishonest? No. Their exclusive focus on relative risks is just embedded (or even lost) in what passes for current NHS discourse, i.e. chase the big-picture stats, and guide ('nudge') individual patients into them - patients whose role is transformed into that of data providers for the big data that is coming to rule over us (sounds familiar?). In effect, the doctor's practice resonates with NHS discourse, which resonates with current financial management (i.e. provide an ever-diminishing NHS), which resonates with austerity and big data / 'off'-shore / off-grid tax-haven oligarchies, which resonate with neo-conservatism, which resonates with obscene concentrations of wealth.

By which time everyone has forgotten about the patient, what (their) absolute risks actually resonate with, or the kinds of choices that patients might actually want to make about their own lives and life-styles. Agency v. Data is the new game in town; capitalism is just the chorus line.

11/12/17 Fake News and Propaganda
How does fake news morph into propaganda? By convincing people that giving them more 'choices' gives them more agency; that more Nationalism provides more equity, and that ....

5/12/17 Swarms as Zoo-semiotic Resonances

WIP ...

17/11/17 Uses of Resonances: logical / predictive / analytical / assertive
A mathematical proof has logical resonance. It is primarily internally consistent, within the parameters of its own logic and assumptions. A predictive algorithm has external (and internal) validity and resonance; it must be constantly tested against empirical evidence, and the accuracy and adequacy of its descriptive tools for describing and capturing data, and it must also be consistent with the logical assumptions of its own system. A retrospective algorithm (there must be a better term for this, no?) is primarily analytical, and describes complex-adaptive / emergent events post-hoc, using descriptive and logical tools that often overlap with the tools of predictive resonance. It is therefore also based on external and internal validity, but also takes reflexive thought and identity into account. An assertive 'proof' on the other hand is argued by assertion alone, and depends largely on internal resonance within a self-defined and often self-contained community, and the authority of, and loyalty to, the person, agency, or community making (&/or 'liking'/endorsing) an assertion; (this might or might not borrow tools from the other three modes of resonance, rather selectively, but they are not crucial to its success). Formal truth values are not relevant to assertive proofs.

As every use (can) become a sign of itself, so too every resonance (whether logical, predictive, analytical or assertive) tends to reinforce its validity, within one or more communities of users. We can discriminate between different types of resonance / reinforcement: emotional, logical, cultural, loyalty/patronage, predictive, analytical, etc, on the basis of the extent to which formal, external, protocols, and different types of communities are used to validate them (or not).

In the networked society there has, however, been a shift (particularly in open [social] media) to combine these different types of resonance, which can strengthen virtual communities. This is not necessarily a bad thing, except when combine becomes conflate &/or confuse, and when this is done deliberately and deceptively to try to establish validity more broadly than is warranted (logically and figuratively).

With the development of each of our new media, it's often a long learning curve before society gets to grips with such conflation: see the extensive mis/use of radio, film, and later television by extremist propagandists (and others) in the 20th and 21st centuries, and the current mis/use of targeted attack-ads/nudges/click-bait/data analytics on social media, by a range of State and proxy-State agencies - within and across nation states.

12/11/17 Autonomous Systems and Experts
We all use autonomous systems and networks, from syllogisms to complex computer programmes. The problem is that some of them can be controlled and reset only by experts. This problem was articulated clearly by the Free Software argument: if you cannot sell a bicycle and prohibit the owner from fixing it when it goes wrong, why can you sell software without giving people access to the source code, so that they can fix it if it goes wrong (as the Free Software and Open Source communities do)?

So, there is some sense in the slogan that we should not trust experts - willy-nilly. ... WIP ...

22/10/17 (RW) Sleeping Beauty and the Trolls
The US legislators seem to have woken up at last to the issue of social media trolls. But the basic issues about governing open platforms have been around since the early days of Open Software (see elsewhere in this wiki). Why have legislators taken 25 years (or more) to wake up, and has it penetrated the Westminster bubble yet? Besides, it's an inherent issue in the study of complex-adaptive (open) systems, an issue that was flagged up in the research literature even earlier.

And, even worse, is it only thanks (!) to electing a troll to the White House that we have woken up to this? Would anyone have been doing anything about feral capitalism /Russian adventurism without him? What a thought.

16/10/17 (RW) Paradigms, Zen, and the Ultimate Resonator
Nancy White recently quoted Donella Meadows on paradigms:
" 'There is yet one leverage point that is even higher than changing a paradigm. That is to keep oneself unattached in the arena of paradigms, to stay flexible, to realize that NO paradigm is “true,” that every one, including the one that sweetly shapes your own worldview, is a tremendously limited understanding of an immense and amazing universe that is far beyond human comprehension. It is to “get”, at a gut level, the paradigm that there are paradigms, and to see that that, itself, is a paradigm, and to regard that whole realization as devastatingly funny [emphasis added]. It is to let go into Not Knowing, into what the Buddhists call enlightenment.' "

Paul Kalanithi, neurosurgeon, in When Breath becomes Air, unfortunately seems to be pretty much stuck in precisely that quagmire:
"If you believe that science provides no basis for God, then you are almost obligated to conclude that science provides no basis for meaning, and therefore, life doesn't have any. In other words, existential claims have no weight: all knowledge is scientific knowledge. Yet the paradox is that scientific methodology is the product of human hands, and thus cannot reach some permanent truth [emphasis added]".

Kalanithi was a highly talented writer, surgeon and scientist, but I'm afraid he fails Zen 101. In short, as 'Emma' (Kalanithi's oncologist) might have said: you have to take responsibility for your own values, once you walk outside the church or laboratory of your choice. Value is as value does, or ... every value becomes a sign of its uses.

2/11/17 (RW) Resonance is ... part 3: Counting cancer drug efficacy
A recent study in the BMJ concludes that the European Medicines Agency is not doing what it says on the tin:

  • Most new oncology drugs authorised by the EMA in 2009-13 came onto the market without clear evidence that they improved the quality or quantity of patients’ lives
  • After market entry, cancer drugs rarely show benefits on overall survival or quality of life in randomised trials
  • When survival gains over available treatment alternatives are shown, they are not always clinically meaningful.

It seems that the rhetoric on how well we are progressing with new drugs for cancer treatments is parting ways with reality. Which raises the question of whether the key resonance is with financial interests, or with evidence-based science.

6/10/17 (RW) Resonance is ... part 2: Counting is as counting does ...
Some people in the medical research community, and the media, seem to have recently woken up to the difference between 'survival rates' and 'quality of life', and to the question of whether gains were 'often marginal' even when survival rates were improved. In addition, there is the question of why drugs approved, in effect, for substantial metastatic cancers (cancers which have already spread) have not shifted the emphasis of their metrics away from survival rates to quality-of-life measures. Nor has 'clinically meaningful' replaced 'statistically significant' as the key measure.

What you count (i.e. the precise systems of differences), is fundamental to quality data, research, and public health.

"Conclusions This systematic evaluation of oncology approvals by the EMA in 2009-13 shows that most drugs entered the market without evidence of benefit on survival or quality of life. At a minimum of 3.3 years after market entry, there was still no conclusive evidence that these drugs either extended or improved life for most cancer indications. When there were survival gains over existing treatment options or placebo, they were often marginal". (Listen to an interview with Naci, here: Davis, Naci, et al, 4.10.17).

4/10/17 (RW) Resonance is as Resonance Does
Public health would seem to be a no-brainer, as would primary health-care trusts (recently killed off in the UK), and epidemiology.

However, there seems to be some dissonance afoot ...
For example: A patient on warfarin was recently told that her blood was 'too thin', which she attributes to taking raw ginger (in hot water) for her chest cold. But this does not seem to resonate with the medical profession, whose advice is to keep off the ginger (which has no known side-effects) rather than to keep on with the ginger, and reduce the warfarin (which has a long list of serious side effects).

This little no-brainer algorithm would seem to be applicable to several other 'critical health' diseases. It looks like there are several treatments which could be inserted in place of 'warfarin' (e.g. statins), and several zero-side-effect foods (kale, spinach, etc) that could be inserted for 'ginger' - the algorithm runs just as well.

Which raises several issues:
1. How empirical is the practice of medicine?
2. If 'do no harm' is the bottom line in medicine, why does it not apply here?
3. If medicine has to be 'financially efficient' (also a no-brainer), why does no one join the dots between the above, and make the effort to encourage (rather than ban) patients to combine high-tech/high-risk medicine with low-tech (even low-human-intervention), low-price foods?
4. Who/WHO is counting what?
5. There seems to be a discrepancy between 'survival rates' and 'wellbeing'/quality-of-life metrics. Surely these should be taken out of the closet, and included in public debates too?

16/09/17 (RW) Abstraction and Alienation.
George Monbiot raises some crucial problems that arise from our feral monetization of anything that moves.

Monbiot writes: "There are two inherent problems with the pricing of the living world and its destruction. The first is that it depends on attaching a financial value to items – such as human life, species and ecosystems – that cannot be redeemed for money [or even what happens when these ecologies become totally subsumed by the interests of optimising profits - see Sandel's examples in this link]. The second is that it seeks to quantify events and processes that cannot be reliably predicted".

This can be restated as a pivotal difference between alienation and abstraction. Resonances occur across systems which are the product of both alienation and abstraction, and the two often appear to be synonyms. There's the rub.

Although it is true that abstraction is often useful (that slippery little meme at the basis of semiotics), and it gives us a host of powerful (meta-semiotic, a-modal) tools (e.g. language, mathematics, finance, bureaucracy, etc), all the quantitative ones are based on the complete alienation of these signs from their contexts. That alienation is the root of their utility and currency, as well as the root of the problem: quantitative systems, particularly in the form of market values, can become cut off from the social and ecological use values of the events that provide the primary data for them.

This is clearly articulated by Monbiot:
" ... there’s a flaw. Hurricanes do not respond to market signals. The plastic fibres in our oceans, food and drinking water do not respond to market signals. Nor does the collapse of insect populations, or coral reefs, or the extirpation of orangutans from Borneo. The unregulated market is as powerless in the face of these forces as the people in Florida who resolved to fight Hurricane Irma by shooting it. It is the wrong tool, the wrong approach, the wrong system".

The issue is not that these alienated / qualitatively different systems cannot (ever) be made to 'talk' to each other, but that the internal validity and internal efficiency of markets, and the interests of market traders and investors, work so much better (for them) when they are unregulated. What are glibly called 'externalities' get in the way of the interests of efficiencies of quantitative systems. Getting non-quantitative, complex-adaptive agents and ecologies to interface with these abstract, alienated systems sometimes feels like trying to find a way to shoot at hurricanes with guns.

06/09/17 (RW/MM) Autonomy: a key concept in semiotics, social media and 'the social':
- inching towards a new ontology

The following is from recent correspondence between RW (in black, with pink for emphasis) and MM (in blue/s):

>> knowledge cannot be inspected/ viewed before purchasing?
... Not sure I fully understand this. Tell me more …

I simply meant that you have to buy a (copyrighted) book or pay a
degree program before you see the knowledge contained in it, and
this means that the exchange value of knowledge is probably
somehow distorted by this intransparency. I mentioned this because
I think it is the big problem underlying the whole OER discussion, or
print news decline, or that knowledge is often ‘bundled’ with something
else, such as ads, see also

My misunderstanding on the upper levels (iterative, Barthes)
was that I missed the 'meta' characteristic of the derivative
signs, i.e. that the derivative 'token' is of a different 'type'
(meta language rather than object language), while I was thinking
of more similar things.

Sure, part of what I am trying to explore are the points where derivatives develop/metamorphose, precisely, into signs of a different type. This is central to exploring processes (and levels) of abstraction - both as semiotic and epistemological issues, and as issues of political power, control, solidarity, etc. Marx’s classic analysis of use value / exchange value, as well as his analysis of the underlying alienation (of signs, from the domain of physical uses - and users - to the domain of meta/object semiotics & uses) is a powerful example.

The creation of autonomy, within signs (and nested signs, algorithms, the discourses of science, etc) as well as within individuals, with individual identities, is fundamental to the nature of semiotics and social semiotics.

Syllogisms / algorithms and their derivative, nested, ‘programmes’ / scientific theses, etc form the basis for autonomy – which provides both individuals and signs / discourses with ‘currency’: the facility to be ‘passed around’, and/or to ‘run around’ across contexts, and to emerge into new forms and new levels of abstraction, with or without the constraints of a common social framework. (This is an extrapolation of Ferruccio Rossi-Landi’s work in Linguistics and Economics, which I have described in various publications as the virtue (and problem) that, as signs and sign uses develop, they tend to ‘take on a life of their own’.)

Signs/algorithms, and ‘individuals’, are inherently viral and autonomous, and are, in principle, capable of being ‘free’ agents – i.e. free of contextual constraints – to develop innovative, emergent behaviour, for good or evil. That’s one of the central, if not THE central paradoxes of the social.

It is no wonder that radically ‘open’ social media provide a wealth of both. And it demonstrates, painfully, the inherent problem of emergent systems without suitable constraints. The fact that social media have thrown up a new generation of ‘oligarchs’ (financial and political) should come as no surprise – nor that they are, ironically & inherently, anti-social.

In a different vein (and following up on Latour’s thought experiment on Einstein’s theory of relativity, on ‘framing’), I described this as the issue of how well sited, sighted, and cited you are (i.e. your position, your ability to see [and engage through other senses], and the extent to which you act as a node in a field of light (or deep) resonances, including the ‘deep' ones such as formalized knowledge. Each step-change in abstraction puts you in a different position on all three counts - your ‘site’ offers you access to a larger range of events (and types of data); your ‘sight’ is thereby enhanced, and you can see, observe, monitor, and analyse more; and as a result, the degree to which you are ‘cited’: i.e. referenced as an attractor in fields of resonance - as an important node of information (and knowledge) - grows too.

Like, in your dog's example, perhaps
the reaching for the leash. When the dog takes the sign of
putting on the leash as 'representative' for going to walk,
the reaching for the leash would be, in turn, a representation
of the anticipated putting on, and hence a 'nested' sign for the
walk, with the accumulated meaning of all included signs.

I love the addition of ‘nested’ signs – with routes within, as well as sign-posts of/to other signs in a chain. (What then would we call the linking function between strings of nested signs?)

Regarding the financial derivatives that you were addressing,
I think these are more an example of the similar 'type'
scenario, because a speculation on future obligations is, like
the primary one, still a speculation and a money value.

Maybe so. It might be more of a spectrum of transformations from one ’type’ to another, rather than a sudden change. Marx’s analysis of the use value / exchange value relationship is definitely a discontinuous change in type (but it is in principle bivalent, i.e. reversible). The question (as in the 2008 financial crisis) is: when does a derivative become so far removed from its origins that no-one knows (or cares) where it came from?

That step change is from an obligation to someone you know, or can identify by tracking back through the chain of transformations and derivatives, versus an obligation to a field of totally alienated, and opaque derivatives, which become merely intra-referential to themselves, with no signified in sight. The end result of the application of iterative layers of derivatives to monetary values is a field of ‘promissory notes’ (i.e. money) stripped of any real obligations – i.e. equity that cannot be liquidated. By any other name, these are massive Ponzi schemes, where the autonomy of the financial instruments has gone 'wild' and feral.

The financial derivatives/ iteratives are probably better suited
for illustrating the power law effects that the "lightness"
of the global/ scale-free networks enables.

Agreed in principle, but the origin (and originator) of the sign in light social networks is still identifiable [even if it is a bot], whilst that is not true for the ultra- [or multi-recursive] derivatives of the 2008 crisis, which in practice become ‘lost in derivation’ - no-one knew or cared where they came from anymore, they were pure ‘playthings’ - entirely abstract tokens, of speculation for its own sake.

But we must not forget that lightness has its own merits too … (up to a point). Social media - and keeping ‘in touch’ without ever having met someone (like you and me) is a historically unique affordance, which should not be taken lightly (excuse the pun).

(BTW I like your idea of the exchange value of knowledge and
its characteristics of a commodity. But what difference does it
make that, unlike other commodities, knowledge cannot be
inspected/ viewed before purchasing?)

Not sure I fully understand this. Tell me more … [see above]

For me, there are three modes of knowledge, and I have not yet created a satisfactory way to clearly and simply distinguish them. Perhaps this ‘resonance’ project will enable me to make some progress, no?

One mode is knowledge as a commodity - or a sign (or, following your ideas above, a ‘nest of signs’ / an ontology of signs). Knowledge is on the one hand an abstraction that is created so that it has frictionless currency (particularly through digital media like this email), and a set of (nested?) affordances that can be universally applied - anywhere, anytime, by anyone - so it has maximal use value, but also maximal danger, as anyone can, in principle, now make nuclear bombs: if you let knowledge loose, you lose control over it, as it becomes fully autonomous. Knowledge which has frictionless (and free) currency could be called autonomous knowledge, which emphasizes the value/threat of the dichotomy that this presents. [However, these types of knowledge can be ‘inspected before purchasing’ – although this might be restricted by copyrights or patents, if they are in force and enforced].

On the other hand there is knowledge in the form of strategic knowledge (which is close to, but not, perhaps, the same thing as ‘complex adaptive’ knowledge) - which I have defined elsewhere as the ‘fit’ between formalised knowledge (whether predictive or complex), context, and resources. This is certainly a form of complex-adaptive knowledge, and cannot be inspected beforehand. It is knowledge-in-use.

And lastly, complex knowledge (per se) has to be ‘run’ to fully appreciate how it works in context, as unlike commoditised [predictive] knowledge, its application is embedded in variable contexts.

On a lower level, I was tempted to conflate your
sociological notion of resonance with the 'recognizing' that is
so closely related: When person B feels A's post resonating, B
does also recognize his/her own thoughts from fragments of A's
thoughts. And partial elements leading to recognition of
the whole, are a fundamental mechanism of both Downes's and
McGilchrist's idea of knowledge generation. And when A reads
B's comments, s/he does not only become aware THAT his/her
thoughts resonate with B, but also WHICH fragments resonate,
and from several such reactions, A can refine their own
thinking a lot.

So, in terms of recognition, the "nice warm feeling" may have
been extremely important for our prehistoric ancestors, as it
may have encoded the primitive knowledge of whom to trust and
whom to escape?

Fair point. Recognition is a valid form of resonance in its own right, and is the most basic semiotic: recognised use is why (and how) it becomes "a sign of itself". This then splits (in ontological terms) into: i) consolidating particular ‘signs’ per se, (in zoo-semiotics and socio-semiotics) and ii), providing the basis for conscious thinking - i.e. reflecting on signs and their use, on their relation to other signs, and on nests and nested signs / algorithms / propositions / syllogisms, etc. This leads into the development of (autonomous) affordances, which in turn leads into identity - and micro/cultures.

However, this process may merely be within echo chambers, or as I might say, ‘circular resonances’ (an ongoing form of ‘circular arguments’), i.e. recursions in closed contexts. Or, conversely, it may be recursion in a broader cultural context, or even in an open context. Either way, the path of development from recognition to consolidating signs (meaning through use), to nested signs, affordances, identity, and micro-cultures is similar.

It is the intention that differs: either to close down change, or to be open to the emergence of new signs, practices, affordances, identities, and cultures. We might usefully rebrand what used to be called corruption and nepotism as incestuous resonance, often supported by, or tacitly condoned by, the institutions of the State – including most of the ‘civilized West’ / ‘developed countries’.

An interesting case in point of closing down change is Chinese technological development, which predated ‘Western’ technologies by many centuries over a wide range of fields of application [add link], but which seems to have ‘frozen’ particular (albeit very effective) practices - to the detriment of further technological change and progress. Having got the technology ‘right’ (in practice, as well as in the implicit ‘science’), these practices became the status quo, although they did not seem to have had a commoditised scientific basis, which could have opened them up for expansion. Orthodoxy (as opposed to autonomy and individualism) ruled, probably right until the time of the last Emperor and the Dowager Empress Tsu-hsi, in the late 19th / early 20th Century.

The only specific case I know of (in any detail) is the banning of three-masted ships, because of an internal power struggle in the Chinese court, which destroyed the navy’s immense superiority, and the Muslim Admiral’s power base, in the 15th Century (See for instance - which might be a bit loose on details, but it makes the point. It is one of the great ‘what ifs’ of naval - and global colonial - history).

I agree that from a sociological angle, "formalized"
knowledge including a "dose of rigor" is the most interesting.
In particular since it is seen by many of its holders as an asset
that sets them apart from the new 'have-nots' in their current
'war on the stupid'

Very interesting links, particularly the distinction between the ‘unjustly unjust’ (in feudalism and before) and the ‘justly unjust’ (‘democracy’ onwards), and the links to the evils (sic) of ‘meritocracy’ (!/file/philosophicalcritique.pdf).

The war on the stupid is a real issue. I think a useful way to approach it is to first of all locate the domains that we are talking about, which are on the one hand formalised knowledge (including both predictive and complex) and on the other hand, cultures. The content of formalised knowledge should be incontestable, as empirical accounts (descriptions, using agreed ’systems of difference’), and should be valid across contexts.

Its application in strategic knowledge is, however, more problematic, as it inevitably takes place within a particular context and culture of use, including group and individual affordances. The unconstrained application of ‘autonomous’ knowledge has now become a global issue - from energy generation and use to nuclear power to genetics to bactericidal nanotechnology, etc. We have no globally agreed culture or governance institutions [or global resonance framework] within which to govern it. Quite who is 'stupid' then becomes a more debatable issue.

So, we live in a global village of fractured cultural narratives and pop-up diaspora, in which cultures and micro-cultures contest each other, calling on, and developing, very different historical, cultural, spiritual, tribal, national [etc] narratives and affordances, either by means of selective historical accounts, or within pop-up diaspora, to self-justify their positions. Social media is the ultimate affordance for magnifying our ability to invent, increase, disseminate, contest and/or consolidate, our resonances: both differences [&/or] similarities.

On the other hand, I was fascinated by the idea of resonance
and recognition since it also applies to the fundamental
levels of knowledge and its generation. Therefore I like that
your Knowledge Generation Framework (Figure 6) centers around
the "Ante-formal Networks" and "Routine Practices". "Routine",
as mundane as it may sound, also includes the idea of a
'normal' state of affairs that is the backdrop when an expert
notices a salient deviation from this normal picture. And this
salience recognition, in turn, is what is so interesting about
our 'right brain' mode of attention, because it is already
present in birds and animals, to notice a predator appearing
before the 'normal' background.

Salience and norms … interesting. This is also present in babies, who apparently learn to screen out sound patterns that are NOT part of the sounds/tunes/nested signs of the language/s they are learning to mimic, long before they learn to speak them. It seems that our difficulty, as adults, in learning new languages is that we have to unlearn these early auditory salience filters. I spent some time experimenting with ‘meditation’ training (basically beta brain wave patterns: see the book, Superlearning), and its ability to open up, or allow people to bypass, these salience filters - to allow the right brain to access and to explore the sound patterns of new languages as new norms, rather than as salient deviations. It really did enhance initial foreign language learning, although I regret that I did not have time to take it much further than the initial stages.

21/8/17 (RW) Co-opting Resonance against the Other? Take #2

[Copied to Ontology]

Resonance is interesting at a range of levels, the first of which is when the action of some agent resonates with another one. The second level, where reflexivity kicks in, happens when the second agent becomes aware of that, and realises that their action 'resonates' (or makes sense) in a wider social context. Or, to put it another way, we could add to the definition of a sign ("every use [potentially] becomes a sign of itself") the codicil that "... every sign [potentially] resonates with other similar uses/signs/users".

The first level is not much more than what my philosophy tutor (Johann Degenaar) used to call "a nice warm feeling". The second level is the base line of the development of culture - or shared, self-aware, resonance. It would be easy to conflate this development of culture (or micro-culture) with the development of knowledge. But that's sloppy thinking.

Culture (as semiotics and as praxis) only becomes knowledge when it is shared within a framework which includes a rather substantial dose of rigour, scepticism and openness. Openness, in turn, is not just another level of nice warm feelings; rather, it is radical scepticism, and must satisfy the falsifiability test. (In short, if a statement cannot be falsified, it is faith at best, and hokum at worst).

This is the third level of resonance, where we find 'knowledge' per se, as a particularly formalised type of resonance.

And knowledge itself splits into two modes: it can be either 'predictive' or 'emergent'. In short, knowledge either predicts into the future, or it is complex-adaptive (emergent), in which case a rigorous logic of the development of events can be traced only retrospectively, in hindsight. What this means is that knowledge can be used to ensure 'resonance' into the future (watch the error bars, though!) ... or ... it can be used to confirm 'resonance' from the past.

WIP ... to be continued ...

17/8/17 (RW) Co-opting Resonance against the Other?
Matthias (MM, 16/8/17) asked a series of very interesting questions, which I have responded to, below (see the non-italics sections). Many questions remain work in progress, though ... Matthias Melcher wrote (see italics sections):

... due to summertime, I have finally come to revisit your wiki.

Now I have stalled at your foundational explanation saying that resonance is mutually reinforcing. If some person A writes something on the web and some person B says "this resonates with me", would you be interested in this effect only after the re-inforcing started, i.e. once B comments on A's blog and A realises that his own point was even more relevant than he had noticed before? Or is already that very process of B's becoming aware of the resonating, sufficiently interesting?

RW: I use ‘resonance’ both physically (e.g. 'resonance' in strings [or sound waves] on a musical instrument) and metaphorically. In both cases the action of the second ‘agent’ (whether a person, a string, or a sound wave) produces resonance (and reinforcement) instantly, to the extent that the two phenomena overlap - in broad terms, they overlap as ‘waves’, and produce a resulting wave which is stronger than either of the two earlier ones.
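The physical half of this analogy can be sketched numerically (a minimal illustration, not from the original text; the function names are invented for the sketch): two waves that overlap in phase superpose into a wave stronger than either one alone, while waves in anti-phase cancel each other out.

```python
import math

def wave(amplitude, phase, t):
    """Value at time t of a unit-frequency sine wave."""
    return amplitude * math.sin(2 * math.pi * t + phase)

def peak_of_sum(a1, p1, a2, p2, samples=10000):
    """Peak amplitude of the superposition of two waves, found by sampling
    one full period numerically."""
    return max(abs(wave(a1, p1, t / samples) + wave(a2, p2, t / samples))
               for t in range(samples))

# Two identical waves, perfectly 'resonant' (in phase): the result is
# stronger than either wave on its own.
in_phase = peak_of_sum(1.0, 0.0, 1.0, 0.0)          # ~2.0 (constructive)

# The same two waves, half a cycle apart: they cancel ('dissonance').
out_of_phase = peak_of_sum(1.0, 0.0, 1.0, math.pi)  # ~0.0 (destructive)
```

The metaphorical levels discussed below (affordances, identity) obviously go beyond this physical picture, but the sketch makes the baseline claim concrete: reinforcement depends on the degree of overlap.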

There is a further level of ‘resonance’ though, which is probably the issue that you are raising (very interestingly). That is, a level of resonance between the two people involved, which includes not only that their ‘uses’ (and therefore their practices) overlap and therefore ‘resonate’, but also that their affordances and their identities also ‘resonate’, change, and shift closer together, even if only at a micro level. [Add link to affordances, etc, in Ontology, and more details too ... ].

The simplest form of this, perhaps, is the way that people with different accents, while talking to each other, often tend to mirror, or even merge, into one another’s accents, and they begin to sound more and more similar as the conversation proceeds. And, of course, the social politics also comes into play; the higher-status gender, accent, class, caste, ethnicity, etc etc usually becomes the dominant accent in the conversation.

The Anatomy of Bullying.
So, resonance starts off as a physical (mono- or multi- or cross - modal) reinforcement, but in humans, (and I’m sure in zoo-semiotics as well) that quickly and inevitably (also) becomes an issue of affordances and identity, in which the socially dominant actor often shifts the resonance to that side of the conversation, by nudging the other party into a kind of resonance which becomes increasingly like compliance (with the socially dominant accent / mindset / etc), or even ‘incorporation-and-surrender’ - into the dominant social mode, and the dominant social actor's accent / affordances / identity.

So (#2), attempts to establish resonance, and the location of that resonance in the social domain is a crucial aspect of social power / community.

Social power, then, can be based on either dominance-and-compliance, or mutual sharing and joint enterprise. Political leaders are expected to do both, but to keep them distinct, and to exercise discretion in how they are kept in balance. Open societies promote and celebrate the latter; closed, fascist ones insist on the former. Knowledge, as the capacity of effective action, could be said to cover both instances.

E.g. asking what kind of multiple channels it needs to trigger it -- channels of different modalities like in synaesthesia, or not? And perhaps considering whether these channels itself can be seen as reinforcing each other?

Channels definitely do tend to reinforce each other (from the Bouba / Kiki example upwards - in complexity). That is the source of cross-modal and a-modal abstraction - from bio- and zoo- semiotics to language and mathematics. The key is to have a phenotypical anatomical expression (e.g. the angular gyrus in the brain), which provides the ‘neural infrastructure’ to facilitate that (see the synaesthesia paper for details).

The reason why I would like to separate A's subsequent reaction as a second step from the original recognition process, is that the iterative reaction mechanism is a very powerful concept in its own right that pervades a large part of your wiki: in particular when you describe the madness of all sorts of derivatives that get out of control. Or the fundamental statement that "every use becomes a sign of itself (Barthes)”.

The iterative reaction mechanism has two modes - within awareness and within conscious reflection. My dog and I build up a semiotic-ontology (sorry, I can't think of a better term for it right now) ... of sounds / words - that are signs - the dog's name, and various instructions, within a micro-ecology (an 'ontological field'?). These are certainly signs, as they primarily exist in the relational space (or micro-cultural space) that my dog and I share. They are also signs in that they have currency - they can (to some extent) be exchanged with other people, who can use the same signs to communicate with my dog, once I have told them how the sign 'works'. The dog in turn also builds up a repertoire of sounds and gestures he 'uses' in relationship to me, and which I have learnt to respond to, appropriately: he too contributes to our ontological field. And I can also teach other people how to respond to the dog's signs. (You can apply this thought experiment to relationships with pre-verbal babies with similar results).

So ... (#2.1), the dog and I build up a repertoire of signs that we use for communication, and we both build up a cumulative 'awareness' of how those relational signs (= correspondences between systems of difference) 'work'. And we are both aware of the semiotic affordances that the two of us share. Or, to put it another way, we become aware of the overlapping ontology that we share (which comprises, to start with, the two actors and a repertoire of signs, in our cross-species micro-culture).

At the next level, the iterative reaction mechanism also provides the means for developing a conscious microculture, which we can differentiate from an awareness-based microculture, on the basis of the meta-cognitive affordances of conscious reflection, which (to start with) is no more than the ability to communicate about our micro-cultures with others. And I can talk to other humans about this dog-human micro-culture, but the dog doesn't have the same affordances to do so.

So ... (#2.2) our affordances for talking about how we talk to each other define a completely new domain (and micro-ontology?) for developing communication consciously. This enables us to consolidate our semiotic practices (and affordances), but equally it allows us to change (or even discard) them in favour of new or derivative practices. We can of course do this merely by changing our practices, and displacing the signs we associate with particular uses with new signs, which requires only the mutual awareness of the actors involved, or ... we can do this through conscious exchange of meta-cognitive, 'conscious', exploration, development and consolidation of new signs, affordances and semiotic practices.

So ... (#3), ‘resonance’, like all other aspects of semiosis, is grounded in physical phenomena, but its uses can develop in very different ways, as very different types of actors (verbal, pre-verbal, non-verbal) put it to work differently in their social interactions with others.

It is also very possible (as we have seen in the ’styles’ of Goebbels, Trump, Mugabe, Zuma - and many others) to convince many people that conformity (sutured, enforced, resonance) has more benefits for ‘our’ group than open resonance (which is less certain, and even more risky). The problem is that this is premised on the myth that a divisive and adversarial politics can be sustained (for 1,000 years?). And sooner or later the ‘Other’ bites back, and then the whole edifice collapses, and social cohesion and order collapse with it.

The topic of these iterative reactions very much touches my current interest, because they certainly map to the 'left-brain' mode of isolating/ wrapping/ nesting. And the multipoint characteristic of the resonating in the first step is closely akin to the recognizing that seems to be fundamental to the creation of new knowledge.

Perhaps separating the two might also help to navigate more easily through the dense network of the wiki sections.

I am not sure this makes matters simpler, but your questions (Matthias) have raised my own thinking to a new level - albeit still a bit ‘rough’, thank you. Hopefully this can be reworked into something clearer and more ‘useful’!

[And ... a recent definition of fascism: a condition in which, as one's intentions deviate from reality, it is reality that has to give way. This provides a nice bridge between Goebbels and Trump].

30/5/17 (RW) The Myths of Greece and (Christian) Rome, and the Ultimate Resonator
The post-modernists had great fun deconstructing (and undressing) the ultimate signifiers - and teleologies - of language, politics, religion, etc. But the question was seldom, if ever, asked of how we ended up with an ultimate signifier, or ultimate resonator (the 'one ring to rule them all, and in the darkness bind them') in the first place.

Seems to me there was a disruption, a philosophical coup somewhere between Greece and Christian Rome. The Greek (and Roman) gods (the guys upstairs who decide your fate) were, if my scant knowledge of them is correct, an unruly bunch who slept with each other's wives, fought amongst each other, squabbled, and were often a bit unhinged. Somehow the Abrahamic faiths (from Judaism through to Christianity and Islam) won all the metaphysical state capture battles in the Mediterranean, and displaced these Gods with a 'benevolent', single God (give or take a Trinity here or there) and fired the rest. The resulting victorious God/s looked after (several factions of) 'us' and somehow always managed to be on 'our' side in wars.

The notion of a single metaphysical CEO, blessed with benevolence to his (sic) people, as the 'guy upstairs', seems to be the core of Roman-Christian fusion metaphysics, which displaced the notion that the 'guys upstairs' in Greek and Roman mythology were more like the leadership of a mafia family.

This is a serious problem, as this myth - that the guy upstairs in metaphysics is a unified, perfectly and ultimately resonant, and benevolent, single entity - functions as the role model, and conceptual model, for how we see his (sic) temporal counterparts - i.e. our democratically elected heads of state. In 2017 in particular, it might be a lot more honest if our God/s (and our rulers, by extension), were seen to be the squabbling, sharp-elbowed mafia families of Greek myths that they are, more often than not.

27/5/17 (RW)
(In reply to Richard Devon ... )
Agreed, a "class structure without work" is precisely the new (and disruptive) mode of production we are moving into. Many years ago in the UK (1977), I was struck by what I thought was a 'weird' (and very large) protest march on London - for "the right to work" - now I understand it. A future in which most people will "have no labour" would really shift power to a 'charity-based' society, with all the dependency and humiliation that goes with it. Nurses are already joining the queues at food banks here, and no matter how small the numbers, that's indecent.

27/5/17 (RD) (Copied from Researchgate comments)
Vivek Wadhwa argues that AI runs the world now and that the future will have no labour (jobs). I am cynical about the 'I' bit, as databases are full of errors, and I believe there is less 'A' than meets the eye, as it is owned and designed by people for the benefit of the few - a class structure sans work? Weird idea. I found blogging a useful learning device for my students, so tried it myself for a while. Not sure if you will have access to this, but it has a couple of short essays relevant to this discussion.

13/5/17 (RW) Computer says 'no'

In reply to an ongoing conversation about the limits (if any) of 'AI', I refer to a conversation with Herman Hauser (IT financier and innovator) in Portugal, who asked much the same question.

My reply was that the missing link in "AI" is subjectivity, inter-subjectivity (and self-awareness).

And the simplest way to think about it is to consider the one, crucial, aspect of abstraction that is, by definition, not possible to programme into AI: i.e. the token swapping that occurs when you say “I think x, what do you think?” and I reply: “I think x.1, does that make sense to you?”.

The crux of the matter is that all conversation is premised on the fact that we are passing around, and exchanging, the 'empty' signs “I” and “you”, which in a sense are 'meaning-less’ (because they are so unstable: their meaning reverses every time an exchange takes place). Conversations use these paradoxically empty signs to create the shared meaning that both of us have a personal stake in, and which we can only create through the mutual exchange of these empty signs - the tokens, and the mechanism, of constructing our inter-subjective commitment to a common, semiotic, meaning.
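This token swapping can be made concrete in a few lines. The sketch below is purely illustrative (the function and names are hypothetical, not from any library): it shows how the 'empty' signs “I” and “you” are re-bound to a referent on every conversational turn, so their meanings reverse with each exchange.

```python
# Illustrative sketch: the 'empty' signs "I" and "you" have no fixed
# referent - they are bound anew to whoever is speaking on each turn.

def resolve_indexicals(speaker, listener):
    """Bind the indexicals 'I' and 'you' for one conversational turn."""
    return {"I": speaker, "you": listener}

# Turn 1: RW speaks to MM; Turn 2: MM replies, and the bindings reverse.
turn1 = resolve_indexicals(speaker="RW", listener="MM")
turn2 = resolve_indexicals(speaker="MM", listener="RW")

assert turn1["I"] == turn2["you"]  # the referent of each sign has reversed
assert turn1["you"] == turn2["I"]
```

The signs themselves carry no stable content; only the alternation of turns gives them meaning, which is exactly what makes them 'empty' in the sense described above.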

This meaning is, paradoxically, not primarily abstract, in the sense of being context- and subject-free. It is meaningful precisely because it is grounded in a shared context and shared (and jointly created) inter-subjectivity and history/ies. (In a sense, 'subjectivity' is a non-sequitur, unless it is nested within the notion of inter-subjectivity).

That tiny point in time in which minds touch each other is denied to machines.

The question “what do you think?” cannot be asked of a computer, because the computer has no “I” to exchange. And simulating a ‘voice’ for a computer that would say “I” is a non-sequitur, which is perhaps why the David Walliams comedy sketch about bureaucracy always had the computer saying “computer says no”, rather than “I say no”. Which points to the crux of the problem, and of the absurd humour in Walliams's sketch, which is that if the computer ever did say “I said no” you really, really, couldn’t ask “And who the .... do you think you are?”

Of course this is nothing new. This is precisely the point of bureaucracy - whether computerised or not. Namely, to create a totally abstract set of rules and procedures, abstracted to such an extent that you can't ask ‘why’, because bureaucracy is precisely context- and subject-stripped; it has no “I”. It is constructed in such a way as to be immune from questioning, an embodiment of a kind of 'common sense' which has no individual subject. That is precisely the value, and the frustration, of bureaucracy: the 'subject' of bureaucracy, if it has one at all, is supposed to be the public / good - which already includes you and me, and it makes no sense to question yourself, does it? That would merely be an infinite recursion.

The 'threat' is not the machines, or AI, but rather the people promoting AI (and 'big' data / 'big' alpha males) as the solution to human evolution, but one leading to a "robot society which might keep us humans as pets [in which] the subjective recognition, done by diverse resonating individuals, is replaced by algorithms whose supervision is thought to be optimized by rationality (or by a dictator) to find the one true solution" (Matthias, by email).

This is just warmed over 'social engineering' [with algorithm-sauce added] from the 1970's, but it smells of 1984, and pseudo-religious teleology.

We are all trapped, and complicit, in the bureaucracies of the 'social contracts' that we (willingly?) vote for, again and again, and delegate blindly to our democratically elected 'leaders' until they call the next election. That's what we have to resolve. Bureaucracy and law are the foundations of equality, but that's a long stretch from justice.

13/5/17 (RW) Structure, Agency and Cross-talk
Resonance might usefully be split into different genres of resonance: those that reinforce, those that are cross-modal, those that are modality-free, and those that cross gross semiotic boundaries: i.e. across from text to embodiment, and back (Montessori learning, yoga).

Or to look at it another way, resonances that reduce possibilities and horizons, versus resonances that open up, and open out, what is possible and the domain and range of the possible.

Continued, here ...

8/5/17 (RW) Globalised alt-fascism and divide and rule

Two things have come to the surface, and they are inextricably linked:

1. Globalisation
Carole Cadwalladr's alt-mafia organisational chart, below, and her analysis:

[Image: Alt-Mafia's organisational chart 1.0]

... which shows how the alt-mafia don't have to "take back [globalised] control", they have already done so, and what they actually mean by their attack on 'globalisation' is very partial - they want to cherry-pick a fight against the global resistance to their developing hegemony (see Trump's recent change in the US regulatory rules, to permit data companies to harvest and sell all the data that comes their way - without permission), while preserving all the global alliances and channels they need to continue the "gutting of democracy" (Pope Francis, here ...).

And it seems that Facebook is now openly marketing their services to help in the gutting of democracy.

2. Divide and Rule
The best way to undermine any - let alone global - resistance to their developing hegemony is for them to 'divide and rule'. So, target the UN (it has already weakened itself), and break up or dilute all transnational alliances - starting with the TPP and NAFTA, and the EU. (Which is why Katie Hopkins and Farage tweeted revolting responses to Macron's victory in France, which strengthens the EU).

So, what's to be done?

On the one hand, this is all just another form of ‘big-data' crunching, albeit applied to vicious and secretive social-engineering / 'social-meme-modification' (cf genetic-modification). Is the alternative a different approach to data mining and aggregation? Or a different encrypted web (other than the existing ‘dark web’) which could re-create Berners-Lee’s global commons?

There are a bunch of ‘aggregators’ that have been developed specifically for online collaboration, like gRSShopper, but maybe there’s space for more innovation. In short, it looks like it's back to the Berners-Lee drawing board, to create a parallel (not-for-profit) commons / data aggregation practice - and preferably to fund it through donated resources in kind (i.e. time) from progressive corporations and universities (see the Linux model, which is sustainable across sectors).
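As a rough illustration of how small such an aggregation commons could start, here is a minimal sketch (hypothetical function names; this is not how gRSShopper itself is built) that parses RSS 2.0 feeds with only the Python standard library and merges items from several sources:

```python
# Minimal feed-aggregation sketch using only the standard library.
import xml.etree.ElementTree as ET

def parse_rss(xml_text, source):
    """Extract item titles and links from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [
        {
            "source": source,
            "title": item.findtext("title", default="(untitled)"),
            "link": item.findtext("link", default=""),
        }
        for item in root.iter("item")
    ]

def aggregate(feeds):
    """Merge parsed items from many (source_name, xml_text) pairs."""
    merged = []
    for source, xml_text in feeds:
        merged.extend(parse_rss(xml_text, source))
    return merged

# Example with an inline feed (a real aggregator would fetch over HTTP):
sample = (
    '<rss version="2.0"><channel>'
    '<item><title>Resonance</title><link>http://example.org/1</link></item>'
    '<item><title>Commons</title><link>http://example.org/2</link></item>'
    '</channel></rss>'
)
items = aggregate([("example-blog", sample)])
```

A not-for-profit version would add fetching, de-duplication, and public-domain publication of the merged results - which is where the real design work (and the Linux-style pooling of donated time) would come in.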

I posted a comment some years back that “convenience data is like convenience food - very tempting, but not necessarily good for you”. The alt-mafia have created their own ‘convenience-data-engines’, maybe it’s time for the libertarians (not liberals!) amongst us to do the same.

A little challenge: is it possible to automate the harvesting, reverse engineering, and modelling of the outputs and activities of the alt-mafia’s aggregation engines (see Cadwalladr's work on her twitter site)? Or is something more required? A few years ago I would have said the solution is more public sector regulation, but the UK, US, China, Russia (and the EU?) are all going to refuse, on the grounds of ‘security/terrorism’. So it might be best to just get on with it, aggregate and analyse the alt-mafia's aggregations, and make the results public domain, no?

13/3/17 (RW) Ecology, Resonance and Determinism
Horgan (see Teleology, etc, below) has a more nuanced approach to the Gould / Dawkins spat - he points out, usefully, that (Stephen Jay) Gould differs in two important ways from the reductionism of Dawkins: Gould insists that punctuated equilibrium, rather than continuous gradualism, is a better model of evolution, and that speciation occurs via changes at the level of the species, not via selfish genes.

This matters, because it removes the determinism and teleology from the overall approach to biology, and to evolution in particular. That matters, in turn, because it emphasises that life on earth, and our evolution as a species, is a matter of chance and sudden change, and is entirely contingent. It could have been otherwise, it could not have happened at all, and it could (still) happen elsewhere instead - possibly in quite different forms.

That, in turn, emphasises an ecological approach, i.e. that many factors, and species, and geological and cosmic events all interact to shape evolution (or de-volution). Which (finally) means that we are lucky to be here, and we should be careful to make good out of what we have, while we still have it.

This is a rather more humble (and scientific) approach than the heavy-handedly anthropomorphic and teleological approach which in an ironic way both Dawkins and the creationists believe in.

What this amounts to is that contingency, complex-adaptive systems, emergence, evolution (and de-volution) are the fundamentals of biology and social life. Not only does (deterministic, predictive) science not have all the answers, it never will have. Moreover, some of the answers science has already given us (from global surveillance to global [self] destruction, etc) are already in danger of creating terminal, feral, genes, memes and temes.

Apart from the fact that the building blocks of biology (viruses and bacteria) are unashamedly promiscuous (not 'selfish') with their genes, Dawkins's fundamentalist view not only maligns biology with 'selfishness' (a non-sequitur for non-conscious forms of life if ever there was one), but, by labelling all of biology as 'selfish', also re-anthropomorphises H. sapiens as 'inherently' and irredeemably selfish too.

8/3/17 (RW) Resonance and recognition
Matthias, beautifully put. Let me give you an example that still sticks in my mind (many years later). In the 1970's I was teaching at 'Trafs', a high school in Cape Town, and a young girl came into my classroom with a message for me.

One of the boys near the front passed a rather provocative remark about her, at which she looked down her nose at him and replied: "Your behaviour is atrocious". The boy had not expected this, and, lost for words (he was nothing like as articulate as she was), replied "And your language is 'jungle oats'" at which the whole class collapsed, laughing.

The point is simply that he did not have the same range of signs at his disposal to counter what the young girl had said, so he grabbed what he could - a 'complex sounding' term, which happened to be 'jungle oats'. It worked, the whole class - and the young girl - 'recognised' it as an appropriate (if ironic, and technically incorrect) term. Instant recognition and instant laughter. But ... I suspect that the term never passed into the lexicon of school English (or slang), and if it did, it probably soon fell out of use.

However, if he had used something less absurd, or something derived from a similar use (or practice) he might (?) have coined a new term, or a new derivative use of an established term. What you describe so well is "the moment when the new usage resonates with the listener is probably exactly when he/she 'recognizes' the meant referent" - and this is true despite the fact that neither of the people involved articulated the referent and, for the rest of the class, this didn't matter. Perhaps several of the people in the classroom discussed the incident afterwards, and tried to find the best term for what was meant. Who knows?

In one sense this new use of 'jungle oats' is clearly linked and grounded in the shared situation, but the boy could have used a new term that he and his friends had coined (previously), for what they thought her behaviour signified - in general - beyond the boundaries of this particular situation. That too would, of course, have been grounded in synaesthesia and shared experience.

But ... there is no reason why a new referent (signified) cannot be dreamt up / imagined by a person way 'beyond' his or her individual or shared experience, and 'inserted' into the classroom interaction. However, for that referent (signified) to enter the English language at that point, they would need to use a signifier (a marker), which would have to have been derived from one (or more) of their experiences - although not necessarily from their 'intellectual' or even 'social' experience, perhaps just from their experience of sounds in the surrounding environment.

In short, to kick off the semiotic process you can grab a signifier from anywhere; it doesn't matter. What matters is whether the 'joke works' (in the classroom) - i.e. whether the signified resonates with your audience, and whether the long-term 'life' of the composite sign is sustainable or not. (Using 'jungle oats' for this type of patronising behaviour was probably not sustainable).

[Terminology: I tend to use Saussurean semiotics, in which the sign is the relationship between the signified and the signifier (any type of marker). I realise that this differs from Peircean semiotics, which uses 'referent', etc, but to much the same effect].

26/02/17 (MM) Sign of a use to resonate
You said in "Taxonomy of Resonant Knowledges", section "-1.":
"all that you need to kick off the semiotic process is for a sign of a use to resonate, to make mutual sense, between two people,...".

Is this restricted to using new term usages for concepts that can "be gleaned from overt behavior in observable circumstances", or does it also apply to using new signs for a NEW concept that can NOT be gleaned from a shared situation? I think your idea of resonance covers the latter case, too, i.e., it works also for non-concrete ideas. Of course the new signs/terms would have to be carefully 'depicted' by vivid paraphrasing, or be derived from vivid other terms by way of metaphor (Lakoff) and synaesthesia, until the resonance occurs in the listener. The moment when the new usage resonates with the listener is probably exactly when he/she 'recognizes' the meant referent. Even though they cannot yet recognize the sign as a lexicalized term/word (know the word), they recognize the concept described. The emergent term, then, would be, so to speak, the frozen resonance - or, as Marx would probably say, the congealed resonance? For learning, this would have the important implication that concepts can only be learned if the frozen resonance is re-enacted/ revitalized by the learner on their own, by recognizing the concept on their own?

20/2/17 (RW) Constructive resonance - Swedish style

From the Guardian ...

In response to Trump's bizarre speech: "Look what happened in Sweden last night":

"Sweden’s foreign minister, Margot Wallstrom, tweeted a link observing that “post-truth” was named word of the year by Oxford Dictionaries in 2016.

"She also tweeted an excerpt of a speech she gave in parliament last week. “Both functioning democracy and constructive cooperation between states require us to speak with, and not about, each other, to honour agreements and to allow ideas to compete,” Wallstrom said. “They also require us to respect science, facts and the media, and to acknowledge each other’s wisdom.”

12/2/17 (RW) Teleology and messianism.
More musings from reading Horgan ... He has some pithy quotes from Steven Weinberg:

1. “The more the universe seems comprehensible, the more it seems pointless” (the most Zen formulation of the issue I have come across in a long time)

2. “The tragic dilemma of particle physicists: they are damned if they find a final theory, and damned if they don’t”

Sounds to me like physicists are going through the same existential crisis that Sartre went through in the 30s, and the postmodernism-deniers went through in the 70s (right through to the nineties, for some of them): i.e. the death of teleology. In postmodernist (and semiotic) terms, the thesis is simply that not only is there no ‘ultimate signifier’, but that signs and meaning are endlessly circular - there is no beginning, and no end.

So it's not only ‘turtles all the way down’ as the answer to the child’s problem of who holds up the person holding up the earth, but it's also ‘signs all the way round’ as an answer to the problem of where meaning begins or ends (or "where is it going to?" - if you must).

In short, it's great to try to make sense of our little corner of the universe, but the idea that we could ever make sense of ‘everything’ is beyond crazy - it's deluded. Only an omnipresent God, in all universes, could attempt that, and none of us qualify.

But unless we can get the majority of H. sapiens to agree that teleology is a non-sequitur (to put it bluntly), they/we will always be chasing after the next God / leader / messiah. And we currently seem doomed to live in a new age of messianism, the oldest form of fundamentalism in the book, and probably the oldest profession in the world too, actually.

The good news is that there are so many great moments to enjoy in the meantime, or as I wrote some time back:

Watching a sunrise
is a hazardous business
you only get one chance
per sunrise.

8/2/17 (RW) The alt-mafia / post-fascists.
It was once said that the mafia, like any self-respecting transnational corporation, never discloses its accounts – or taxes. The alt-mafia are now in power in both the aptly named 'white' house and the Kremlin. It could turn out to be a 'happy little marriage', or a new big power resonance, instead of rivalry - quite a weird development.

This arises (rather circuitously) from further reading of Horgan's The End of Science (see below) …
Horgan is (still) pretty much absorbed in his own profession, and doesn’t confront the bigger issues:

1. For me, ‘science' has a number of functions:

1.1 Finding and exploring the big questions. This is surely still good stuff, and it should and will continue. But who, and what, gets funded, and by how much?

1.2 More important than the question of whether we have reached the "end of science" is the question of defining science in terms of ‘practical epistemology’ - quite simply, science is a way of describing, circulating and sharing new, precise, and useful ways of interacting with the world. This includes new ways of thinking about it, and new ways of acting on it / in it.

1.3 Occam’s razor must be applied: i.e. start with the idea that science is just a way of "describing, circulating and sharing new, precise and useful ways of interacting with the world”. This covers both ‘hard and soft’ (or 'inanimate and animate') phenomena, and predictable and complex-adaptive events. So it's a useful ‘epistemology of everything’ - to coin a phrase.

That done, you can then differentiate the predictable from the ordered-yet-unpredictable stuff (both within physics, as well as within the arts-and-sciences).

2. There are a number of issues that should be pushed up the global and social agenda:

2.1 Horgan touches on the politics of ‘big science’, but pulls his punches (and his investigations). The politics and ethics of scientific funding, patronage, and corruption (hard and soft ‘sciences’: in Kenya, AIDS, vaccines, drug trials, drug costs ... add to taste) is actually far more important.

2.2 The monetisation of science is broader than his remit, but if he is interested in the big picture, he should be engaging with it.

2.2.1 Prestige or, more to the point, vanity science (for institutions, as well as for many right-wing academics, I suspect) needs to be deconstructed for what it is. Calling it 'ironic science' (as Horgan does) is far too kind.

2.2.2 Absorbing the discourse of science (i.e. inquiry and understanding) into the discourse of off-shore wealth accumulation (aka 'outsourcing') on the current global range and scale marks a fundamental shift in society that is the ‘new epidemic’.

It turns professional decisions - on science, pharmaceuticals, education, health, transport, security, etc - into the playthings of the global kleptocracy / oligarchs. Clinicians in hospitals, for instance, are subordinated to monetised management by ‘bed-managers’ (in the UK - and elsewhere?) and international consultancy firms. McKinsey, for example, recently decided for the NHS that hernia and carpal tunnel operations would no longer routinely be paid for by the NHS, unless there was some surplus funding sloshing around somewhere that could be found and applied for (while the patients' conditions grew worse, often resulting in more extensive, painful and costly medical interventions). (A stunning example of outsourcing and off-shoring monetisation, on the advice of financial 'experts', which could also be seen as a kind of 'back-door' State capture - a meme which is all the rage in post-Mandela South Africa, following closely in Putin's footsteps).

2.2.3 And … in the process, this emasculates (or just junks) territorial-based democracy, which becomes the plaything of would-be celebrities pretending to be 'nationalist' politicians (see: the definition of a politician: someone who always desired to be a celebrity, but was not good-looking (or talented) enough).

3. And ... the much more mundane issue of 'doing' science (as opposed to the heroics of 'discovery') still fails society when, for instance, well-educated people in the UK say that it's bad to take a whole course of antibiotics, because everybody knows that taking too many antibiotics is bad for the development of drug resistance. And in South Africa (and other countries too, I am sure) the whole AIDS epidemic was mishandled by a decidedly unscientific approach to prevention, cure, transmission, etc. For example, the erstwhile head of the S.A. AIDS campaign, Jacob (now President) Zuma, told the court where he was accused of rape that it was not only OK for him to have sex with his friend's daughter (you have to look after your guests), but also quite safe for him to have unsafe sex with her (he knew she had AIDS), because he took a shower afterwards. The judge said he was, literally, speechless.

4. The urgent need to define and operationalise a new cooperative commons (for which science, in some of its best moments, was a model) is written off in the noise generated by ‘hard-nosed’ economists and alt-mafia populists.

5. The upshot of this is the need to step back from the texts, the tactics, the tweets, the news ‘events’ of the day, to deconstruct what is actually ‘going on’ behind the media froth, in an analysis of the ‘discourses’ of power (after Foucault, Derrida, etc) - e.g. the behind-the-throne machinations of Steve Bannon and the alt-mafia, aimed at ‘bringing down the State’ and undermining years of progress in building the basics of a cooperative commons.

At least the fascists were committed to making some aspects of the commons work – even if it was just making sure the trains ran on time. The American alt-mafia are running public office as a zero-responsibility, zero-remuneration (they have already accumulated and stashed their billions) job, for a zero-commitment society (via zero-hours ‘employment’), keeping their money beyond the reach of the State and paying close to zero taxes (see Donald Trump's companies in the state of Delaware, which is deemed 'off-shore' for tax purposes). The global kleptocracy has taken over. (And eat your heart out, Friedrich Nietzsche).

6. The alt-mafia/state-shrinking-and-capture regime could also be seen as a macabre form of abstraction - a shift away from national democracy, perversely carried out in the name of nationalist populism, in layers and layers of irony/deceit. To wit, the new resonances:

"Imagine a world in which the old left-right divide and the east-west conflict of the cold war era were practically irrelevant. The conflict of ... would [then] be between traditionalists and pluralists, between internationalists and nativists, between autocracy and liberalism. This is Dugin’s world. In it, Trump and Putin, for all their differences, would be on the same side" (see above link).

(Just a thought - if the USA, Russia and China are all converging, maybe Fukuyama was right after all, and it is, finally, the end of history, just not the way he thought of it, and a new global elite of war-lords - as opposed to internationalists - is born. In hindsight, was it ever any different? )

2/2/17 (RW) Control or Responsibility
I have been reading Horgan’s book, The End of Science - starting with the chapter on Chaoplexity.

He does get around to speaking to many of the key suspects, in person, but ends up delivering not much more than a ‘clever’ journalistic account of why all “chaoplexologists" are deluded.

It's just too easy to set up the Santa Fe group on complexity as a straw man/institute, and then shoot them down in flames. I, like many others, have never subscribed to the Santa Fe take on complexity. It suffers from ‘physics envy’ and a yearning for predictability in social science, human behaviour and biology, which is never going to happen.

It also suffers from the AI syndrome, i.e. the delusion that computers will (surely, one day) be able to mimic the human mind, and exceed its ability to communicate (i.e. above and beyond ‘compute'). There are a number of problems with this:

1. The whole point about complexity and emergence is that there are many events - all of biology, zoology and human sociology for starters - that are to varying degrees ‘unpredictable but ordered’.

The idea that we will one day be able to predict the unpredictable is, of course, a contradiction in terms. It’s ‘physics envy’ in another guise - e.g. as recently articulated in Fukuyama’s book The End of History, the deluded triumphalism of neoliberalism, some years back. (And Horgan's title seems to mimic Fukuyama's hubristic pretensions).

2. Many of these events are also what Soros (writing on economics) calls ‘reflexive’ - which is more than ‘clever feedback loops’, as it includes subjects (self-aware agents) maintaining their own identities, cultures and capabilities/power.

3. The ‘more’ is, crucially: intentionality, community, and shared responsibility. In game theory terms I guess it is, at its best, that ghastly Americanism, a ‘win-win’ situation. In semiotic terms, it's the double-edged challenge that all signs are arbitrary and conventional - i.e. the opportunity to be creative with signs is always accompanied by the responsibility to account for the practices of the community of users in question. And it becomes more and more of a challenge the more 'open' the community is - whether that is scientists developing potential frankenfoods/biotics/nano-technology, or alt-wrong trolls promoting fascism.

4. The ordered part is made possible by hindsight, and by the courage to let the event ‘run’ with large degrees of freedom, and see what happens. (This requires an approach to responsible management which is quite different from 'control' - see Snowden & Boone, 2007).

5. What happens is history (and, more often than we would like, in the form: ‘shit happens’), and we live within it, as in: "we live our lives forwards, and make sense of them backwards".

6. Identity, in turn, is reflexive, certainly in socio-semiotics, with roots going right back to autocatalysis in RNA / DNA (which is not the same thing as the ‘selfish gene’ argument).

7. History, identity, and reflexivity have to be added to intentionality, community, and shared responsibility if we want to understand unpredictable yet ordered events. Subjects come into it too, via identity (see 2, above). As does reconciliation (see: the South African Truth and Reconciliation Commission, in Country of My Skull).

8. Communication has to be built on all the above (see 7), if it is to be anything more than transferring information and issuing instructions - one way - even with the addition of a few clever feedback loops (the Funagalore paradox, for those who are familiar with the Southern African language of instruction).

To paraphrase Marx, the point is not to control the world, but to build a shared global community, with shared responsibilities.

A Universal Declaration of Human Responsibilities would be a good start.

17/01/16 (RW)
Sensory resonance and learning
In reply to an email and a very interesting blog post by Matthias Melcher, which responds to work by Kieran Egan on kinds of understanding and the tools that develop them, the following:

Having read some of Egan’s stuff (via the links) …

1. I am interested, but a little sceptical of his idea of cognitive development as a re-enactment of social evolution. It sounds too neat and linear for my liking, despite Egan’s qualifiers.

2. I am drawn back to Montessori practice, which is somatically based, but is also free-flowing (and incorporates regression), in classroom design. It frames learning within enactive perception (which Gibson later formalised as affordances). And it is based on the design and construction of enactive order (which incorporates systems of difference).

The principles of design for Montessori materials can be applied across many fields, but the superordinate principle is that education and the curriculum are NOT determined by the teachers (at preschool level in particular) but by ‘following the needs of the child’ - which requires detailed observation. Design is created as a response to the child’s need to explore, and a need to order their world as they progress through a series of ‘sensitive periods’ for different exploratory needs/desires, starting with the kinaesthetic, the 'invisible' sense.

3. These sensitive periods provide rich affordances for the child to engage, and to create something - their ’selves’ perhaps - but not as individuals. What they create could equally be called ‘order’ or routines, or organisation or familiarity, or even comfort (within their own skin?) - it’s what is created as a result of interacting with the environment and creating micro-worlds of order - in the world and in their ’selves’. It's a way of inserting / inscribing themselves into the world, not knowing 'about' the world. An engaged knowledge, in other words, not an 'academic' knowledge (in the - slightly - pejorative sense of 'academic').

For example, recently a friend brought her 2 year old daughter to stay for a few days. The mother wanted to know what ‘Montessori’ activities she could do with the child at home. The child, right at that moment, was taking toy animals out of a toy box, and putting them back in again. A visiting child (aged 8 or so) sat next to her, and eagerly used the opportunity to tell her the name of each one, which the child repeated - and was quite eager to learn - although that was clearly not the focus of what she was doing. At this point the mother wanted to leave, to catch a train back home.

I pointed out to her that the child was ordering the box, the toys, what can come out and go back, what a box for toys can be used for, and the difference between animals and their names. But most important of all was that she had put all this together into a ‘task’ with sub-tasks, and a cyclical structure, with an opening point and a closing point, and some idea of when the task would be ‘finished’. I am sure this is recognisable as a good scaffold for programming: creating and combining sub-programmes/routines, creating a ‘run’ protocol, deciding when it is finished, testing it, and completing the task.

3.1 This is in other words a sophisticated way of ordering the micro-ecology of toys and toy boxes, and extending and developing the capability of the emergent ‘self’ as someone who can execute this task / programme. Above all, I said to the mother, what you must never do is interrupt the task (the equivalent of interrupting a computer programmer in the middle of writing and testing a new programme, to see whether, and if so how, it ‘runs’). And repetition is crucial to test whether the programme runs properly, under different conditions.
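
The toy-box episode can be sketched, playfully, as a small programme - sub-routines, a 'run' protocol, a test for completion, and repetition. This is only an illustration of the analogy above, not a claim about how children actually compute; all the names here are invented.

```python
# A playful sketch of the toy-box 'task' as a programme.
# All names are invented for illustration only.

def take_out(box):
    """Sub-routine: take the next toy out of the box."""
    return box.pop(0)

def name_toy(toy, names):
    """Sub-routine: repeat the toy's name (learned from the older child)."""
    return names.get(toy, "?")

def put_back(box, toy):
    """Sub-routine: put the toy back in the box."""
    box.append(toy)

def run_task(box, names):
    """The 'run' protocol: an opening point, a cycle of sub-tasks,
    and a closing point - the task is 'finished' when every toy has
    been taken out, named, and put back."""
    spoken = []
    for _ in range(len(box)):
        toy = take_out(box)
        spoken.append(name_toy(toy, names))
        put_back(box, toy)
    return spoken

toys = ["lion", "zebra", "hippo"]
names = {"lion": "lion", "zebra": "zebra", "hippo": "hippo"}

# Repetition - running the task again - is how the 'programme' is
# tested under (slightly) different conditions.
spoken = run_task(toys, names)
print(spoken)   # the names, in the order the toys came out
print(toys)     # the box is restored, ready for the next 'run'
```

The point of the sketch is the cyclical structure: each sub-task leaves the micro-world in a state from which the whole task can be run again - which is why interrupting it mid-cycle is so disruptive.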

The task for the learning materials designer (mother, teacher, etc) is to recognise this designing / ordering process, respect it, and apply this knowledge to observation and analysis of other ‘play’ activities that the child engages in (which should actually be seen as self-motivated design and development ‘work’ - not play at all).

3.2 This is in many ways pre-linguistic, but as a programme design and test activity, it is simultaneously post-linguistic, or even in a sense a-linguistic. The most important thing for the child to develop - and for the mother/teacher to recognise, respect and enable - is task design-and-definition, which is a lot more sophisticated than ‘somatic’ or ‘binary’ skills and competencies, and in which systems of difference play an important, but subsidiary, part.

The process of designing exploratory tasks, and enabling children to explore them in a Montessori environment is applied across all senses, and across literacy and numeracy. Literacy is taught as an active, somatic, task - through writing first, in a very tactile way (using sandpaper letters) which is followed by reading, later on. Current touch- and voice-based media provide wonderful design opportunities to develop digital Montessori materials, although the basic sensory materials would not be replaced by them. There is lots more to explore, and perhaps a whole section on sensory resonance and learning will have to be written (sometime) and added to this wiki.

And ...
I do think that bibliometrics is taking on a life of its own, which has little or nothing to do with academic or intellectual value - it's just another sad example of convenience data, if not of malgorithms.

23/11/16 (RW)
Epigenetics, memory, evolution, and resonance/alignment
Since the debate between the ideas of Lamarck and Darwin kicked off, there has been scepticism about the resonance between experience, consciousness, etc. and genetics - particularly influences on the genotype (rather than just the phenotype). This is perhaps an extended sense of the term 'resonance', which leans towards alignment (which I guess is a weak form of influence, or even a very weak form of causation).

Here is a great panel discussion on all these issues:

Deepak Chopra chairs the discussion, but digresses into broad speculative issues a little too much at times. He is excellent when he speaks as a specialist endocrinologist and researcher. The others, including the rap artist, are superb.

The discussion makes the point (amongst other things) that sustained changes in your diet get locked in to ‘automatic’ / semi-permanent behaviours / 'gut memory', in the epigenetic material around the genes, which starts to unpack Chopra's idea that the brain/consciousness interfaces with switches at the epigenetic level, and therefore also switches of gene expression. This offers a glimpse into potential mechanisms for Lamarckian influences from the environment (both the external environment and the internal one [i.e. consciousness, and the sub-conscious]), to influence changes to the phenotype (i.e. gene expression), which could in turn change predispositions for adaptation in the genotype.

This raises lots of very interesting questions. Not too many answers, but lots of interesting micro-studies on epigenetic markers, that start to provide the building blocks for identifying and thinking about specific micro-biological (and social) issues in the Darwin (Dawkins) v. Lamarck debates. Dawkins gets discarded early, and Darwin emerges as a far more nuanced thinker than his self-proclaimed 'protege'.

And there is a tellingly (horrifying) example of the experience of women at the 9/11 event causing detectable changes in the saliva of their offspring, which makes me think of the clinically troubling effect of the Rwandan genocide on the next generation of children, who had not even been born at the time the genocide happened. The crucial question of whether these changes are passed from one generation to the next generation culturally, or genetically, (or both) can now start to be explored through examining changes in specific epigenetic markers.

7/11/16 (RW)
Writing in/for the public
I am interested to know if the format of this 'resonance' exercise, i.e. ‘writing’ in a web of interconnected texts - both internal to the wiki and linking through to many external ones too, ‘works’ for the reader.

It works for me. One colleague asked if I was writing for pleasure [yes], or whether I want to use this as a resource, to later write a ‘sequential text’ type of ‘published paper’ too - at some stage - to which the answer is yes, maybe. I suspect that one (or more) publications might emerge from the process (from me or from others, why not?). But I am not sure if the sequential published paper/s will be as good (= useful to readers) as the current ‘web’ or hypertext. Perhaps we'll have to wait and see.

In another sense, this is just a matter of writing in [a] public [space], which might or might not be interesting to others - at the moment many people seem to be reading the additions to the wiki, on a daily basis - partly because there are lots of ‘members’ of the wiki, who all get updates on anything new, and partly because I often flag up additions to the wiki in my twitter account.

Writing in public then has a relationship to writing ‘for the public’ (i.e. writing in private, and then releasing a finished product). This is a bit different, as there is no reason why it should ever have a finishing point, or product. We will have to wait and see how that turns out, and see whether writing can usefully remain unfinished (or, conversely, maybe we should ask: in what way can writing usefully remain finished?)

Whether anyone else will pick up on this way of writing (and run with it in their own, distinctive, way) is moot. But I am not doing this for others, rather for myself, really, so any wider use of the format would be on the one hand incidental, and on the other hand a [creative commons, with derivatives] bonus.

31/10/16 (RW)
Knowledge as becoming
Stephen Downes on knowledge: "rather than being a process of acquiring something, as commonly depicted, learning is in fact a process of becoming something"

And so, ... [WIP ...]

30/10/16 (RW)
Ironic Resonance?
Matthias, yes. And 'fit-ness' is dynamic, just as 'resonance' is. See the Rhino, here, who, by becoming 'lost' in the landscape, 'fits' in, to such an extent that the serendipitous camouflage works 99.9%. There is no genetic transfer / translation mechanism, unfortunately, so no phenotype-genotype reshuffling of the genetic code, and 'evolution'. But I love it as an example of fitness and ironic resonance - the resonance of getting totally lost, which in semiotic terms is a wonderful addition to the playful richness of 'resonance'. (This could also end up being a great children's book: The Rhino who got lost).

This also links up with your ideas on 'kennen / 'connaître' / kennis (Afrikaans), and 'ken' (Scots), and Stephen Downes' idea that knowledge is 'becoming', not 'storing' (see below). Or it could be seen as 'fitting' your self / neuronal networks around / into a particular response to an event in your environment - which might be much the same as his idea, that you refer to above, that "there's probably only one algorithm ... ". And ... see also my basic definition of knowledge as "the capacity for effective action" (which I got from St Onge, some time back .... [add link]) - which must surely be both a multi-synaesthetic, and an 'embodied mind' capacity, no?

Phew, food for thought ...
[WIP ... and ... chase up details in the links to downes, above]

29/10/16.1 (MM)
Your links from knowledge to Assimilation, "4. Ecological Resonance", or "6. Complex-adaptive systems", remind me of Downes's notion of recognition and of his writing: "I've also come to think there's probably one algorithm underlying perception, evolution, thought and consciousness." ( ). Can we say that evolution 'recognizes' what is fit, when it 'resonates' with its environment?

29/10/16.2 MM to RW ..
... it is now the fourth time that you write something in your wiki that I was just thinking of. First it was bouba/kiki, then derivation, then ecological adaptation (see my entry in your questions page). And now it is the idea of "being comfortable in your own skin" with some cognitive content.

For me, this is closely related with McGilchrist's idea of knowing in the sense of 'kennen'/'connaître', i.e., being FAMILIAR with something, or knowing it like a family member. I often thought that a main difference between math haters and math well-doers is that the latter have come to some familiarity with the abstract concepts, with a limited number of concepts as 'acquaintances', such that the concepts can almost be seen as their 'cognitive furniture' in their mental living room.

Whereas math haters are often really blocking off all the pertinent ideas because they feel alien and hostile (and, due to the derivation effect, even thinking of the homework causes physiological threat responses).

More generally, I have long suspected that the question of learning styles and preferences is much less a matter of personality or ability or topic structure than of how we like the order of things best in our heads, e.g. more neatly hierarchical or more meshed and connected, more serially lined up or all at a glance. This is, IMHO, a major determinant of preferences for modalities and presentation styles. While the styles are still busted as a myth, the picture may become very different when the emotional factor is included: On a single day last month, I encountered two findings that only now reveal their motivational implications: and

On an even more general layer, this well-feeling may also be mapped to what Stephen calls the Boltzmann mechanism of neural reconfiguration: trending towards a state of settlement and equilibrium.
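
For what that 'trending towards settlement' might look like numerically, here is a toy sketch - my own construction, not Downes's (or Boltzmann's) formulation: a tiny Hopfield-style network, a deterministic cousin of the Boltzmann machine, whose energy can never increase under its update rule, so a perturbed state drifts back to a stable, 'settled' configuration.

```python
import numpy as np

# A toy illustration (a sketch, not Downes's formulation) of a network
# 'settling' towards equilibrium: a tiny Hopfield-style network whose
# energy never increases under its update rule, so the state drifts
# towards a stable configuration.

pattern = np.array([1, -1, 1, -1, 1])          # a 'remembered' configuration
W = np.outer(pattern, pattern).astype(float)   # Hebbian weights
np.fill_diagonal(W, 0.0)                       # no self-connections

def energy(state):
    """The 'discomfort' of a state; updates can only lower (or keep) it."""
    return -0.5 * state @ W @ state

state = np.array([1, 1, 1, -1, -1])            # a perturbed starting state
start_energy = energy(state)

for _ in range(2):                             # a couple of settling sweeps
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

# The state has settled back into the stored pattern, at lower energy -
# a minimal analogue of 'trending towards settlement and equilibrium'.
print(state, energy(state))
```

The 'urge to wrap up a topic' reads, in this picture, as the system sitting in a high-energy state until it finds the update that lets it drop into equilibrium.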

... and a reply: RW to MM ...
Food for thought … The idea of familiar / friendly concepts versus alien/-ating concepts works for maths. It also works for reading - some people find books and solid text alienating, and some find books friendly (and something you can ‘snuggle up with on a rainy day’). See the link to “the shock of the text” in the synaesthesia paper (in researchgate):

"… Quinn’s research into young people on the margins of education [37] describes learners who floundered in the sensorially and synesthetically impoverished abstract setting of text-based schooling, which assumes that the transition from rich, embodied, cross-modal experience to written text is natural, comfortable and self-evident for all students.

  • "Quinn’s learners ... thrived as soon as they were put (back) into rich synesthetic settings---in their case, in a farm-based school. They seemed to have been yearning for the “reinstatement of all their other (16 or so) senses” [38]. They were potentially quite capable of engaging with written text, given the right guidance and time, but they experienced the “cold turkey” approach of schooling and the “shock of the text” as intolerable and disorienting”.

The point here is not that some learners (always) find text disorientating, but that text (like all other modes?) has to be treated as a transition, or as a phase in a range of transitions and abstractions (which is one of the reasons why I am so obsessed with ‘unpacking’ the dynamics of different kinds of abstraction).

That being the case, it is (highly?) likely that just about all people will find text comfortable at some stage/context and uncomfortable at others. The question is - at what point is a particular mode un/comfortable for a particular learner? Text is not a ‘magical’ one-size-fits-all solution - just because it happened (at one particular stage in the history of media technology) to be the most cost-effective (i.e. the cheapest) mode of production to produce education for the masses - some of whom found text totally alienating, particularly if they were thrown into it too soon (in which case they were just seen as 'collateral damage’, on the margins of the great ‘emancipatory’ project of modernisation).

This puts learning modes [style is a related but distracting issue, because as your argument (and references) point out, it’s clouded by expectations and feel good factors] - it puts learning modes into socio-historical contexts, but also into the contexts of personal history and affordances. And … it treats all of these factors as personal, dynamic and changing, and even recursive (they can change ‘ back and forth’ over time).

So one-mode-fits-all is not appropriate - neither for all people, nor for all times/stages in learning. Its was the core strategy (masquerading as a value) of ‘modernisation’, which saw mass production (of single solutions) as the answer to the modes of production of education (and everything else) for an ‘equitable' society - only everybody had to be equal in exactly the same, standardised way, in the same standardised, one-directional sequence (aka: 'the march of progress’).

I need to know more about the "neural reconfiguration: trending towards a state of settlement and equilibrium” - any links?

... and a reply MM to RW:
If you scroll down to ‘Association’ in the attached word document, you’ll find all four mechanisms.
The Boltzmann kind is one that I often feel I experience directly, for example when I feel the urge to wrap up some topic, or to reread a sent email, to somehow ‘get it out’ of my head. Downes may have explained it even better in other presentations, but in most links I found with his full-text search the mention is very terse, except perhaps here or here.

19/10/16 (RW)

Assimilation seems to be something different from resonance, yet perhaps it is, actually, resonance, but just resonance refused. (This kind of ambiguous resonance might help us formulate a fuller description of resonance as a set of dynamic event/s, rather than a static 'thing').

Several commentators - on the chaos of Brexit, Trump, and the rise of populism more widely in Europe, not to mention the current conflict between President Zuma and Finance Minister Gordhan, and the ironic 'assimilation' between the class politics and practices of the National Party under Apartheid, and the ANC under Zuma, in South Africa - have said that the world seems to be going topsy-turvy.

Perhaps these are all just slips of the cosmic hand, captured so well in the poet Adam Small's comment on the plight of the so-called 'coloured' people under Apartheid: "die Here het gaskommel, en die dice het verkeerd geval vi’ ons, daai’s maar al" (tr: The Lord shook the dice, and the dice fell skew for us - that's all there is to it).

Maybe we should grasp this kind of cosmic crap game as a post-post-modern trope in place of the more traditional 'revolutions' of history - closing or opening up spaces for people to move/not move into, under popular - and counter-popular - banners/viral movements. This would be a different 'riff' on resonance, namely crap-game resonance (depending on where you stand on a particular issue).
Solomon Tshekisho Plaatje

It makes me think of some of the ‘half-spaces’ - the failed absences (in semiotic terms) of history - e.g. the early twentieth century and Sol Plaatje's attempted ‘assimilation’ into South African politics - he edited two newspapers, and was a founder member of the ANC, amongst many other achievements.

.. and the Jewish ‘assimilation' in Weimar Germany between the Two World Slaughters (aka WWI and II) - both attempts at assimilation got squashed by Fascism.

These are in contrast to Gandhi's counter-intuitive (from the West's point of view) political approach, and attire, in his negotiations with the British Empire. Was Gandhi non-assimilationist and non-confrontationalist, or did his non-violence just dissonate, and undercut, the violence of Empire?

Gandhi in England 1931

30/9/16 (RW)
Post-truth: I feel therefore I am?

Truth or feelings?

Meredith Kercher

Amanda Knox

The Directors

The term “post-truth” has been cropping up regularly of late, and I have been irritated by it more than anything else; I failed to see what on earth it could mean. Today I read about the new Netflix documentary The making of a she-devil, about the Meredith Kercher/Amanda Knox saga, and it’s all starting to make sense.

McGinn (one of the directors of the film, see right) says the trial/s of Amanda Knox “was one of the first great examples of [the] post-truth age we now live in, where people respond to something primarily by how it makes them feel".

'Post-truth’, then, is when you temporarily (or permanently?) leave your brain at the door, and engage with the world through your feelings (and your twitter feed). That’s weird, and even rather frightening, but it’s also interesting.

It’s important, because it explains how people increasingly engage (and ‘enrage’) with the world through the fog of social media, and as a result fracture into warring ‘tribes’, bizarre narratives (e.g. the Birther [add link] narrative about Obama), and pop-up diaspora [add link]. (Eat your heart out Lyotard – he who proclaimed the end of the Grand Narrative). As Blackhurst (the other director of the documentary) says, “we realised this story was about the changing landscape of journalism itself. In 2007, we saw the beginning of Facebook and the rise of digital media, [and] the takeover of the 24-hour news cycle. That gave rise to clickbait, of stories being driven by headlines rather than reporting in the traditional sense of the word.” (Which is a good example of convenience data, see here)

However, there’s more to it, as social evolution has never been linear or straightforward. To begin with, the previous eras – triumphant twentieth century modernism in particular (and even the sometimes over-intellectualised post-modernism) – were over-cognitive, and just replaced class (in the UK at least) with erudition and high rationality, and a decidedly written, 'straight', linear culture (even in radio).

Television (and more imaginative forms of radio) started to change that, into the 1960s and onwards, but the legacy of the overwhelming authority of the written text continues to structure many aspects of society, as it still does in most universities.

The internet, social media, e-learning, and open collaborative learning (from the development of Linux to small scale blogs and wikis, arXiv & researchgate, and the brief bloom in really open MOOCs), brought about new communities, networks, groups, and instant ‘followings’ in which ideas and feelings happily rubbed shoulders, and flowed freely (at little or no cost). Serious academic, intellectual and professional networks developed a healthy mix of ideas, feelings, formal and informal aspects of people’s ideas and everyday lives, which was a welcome (and even inspiring) change for the better.

However … once the genie was let out of the bottle, parts of the internet became ‘the world’s greatest sewer’. The next swing of the pendulum between feelings/rage versus ideas/collaboration (or enragement versus engagement) is long overdue. Part of the problem is that total ‘openness’, like total censorship, can be self-defeating, and neither leads to sustainable adaptation or useful social evolution.

Openness can facilitate innovation and learning on the ‘edge of chaos’. But without any constraints, it often tips into noise, and kills off communication and any chances of collaboration. (See Paul Cilliers)

So ... responding to something “primarily by how it makes you feel” is perfectly legitimate, as long as you avoid the delusional comfort of a community of ‘nice warm (or hot and angry) feelings’, and you don’t leave your brain at the door.

The absurdity of the term post-truth, in fact, says it all.

24/9/16 (RW)
Sniffing out Difference

Eva Wiseman writes, of women (not men?) "All our long lives we've trained ourselves to smell out difference, and attack it, however sneakily", and "As a reaction to the cult of perfectionism ... there's a huge liberation in realising that everybody struggles, all the time. But by classifying and dividing our peers into good and bad [mums] ... we're doing ourselves a disservice. It's fine to be bad, but it's not worse to be good."

The issue is that wrestling with identity as a woman / mum is an ongoing (and not very satisfactory) struggle between identity and difference, and where you fit in between 'us' and 'them'. The dialectic doesn't end, and Deleuze can't settle the matter that easily - unless he just wants to be provocative, which has (had) its uses (see below).

22/9/16 (RW)

Identity and difference (machines)

In response to a discussion with Matthias, some thoughts on abstractions, identities, and difference machines, and the question he raised about Deleuze, namely that:

"Traditionally, difference is seen as derivative from identity .. on the contrary, Deleuze claims that all identities.
are effects of difference.”

Difference machines …

I tend to take a ‘sociogenesis’ approach - in which the fundamental question is: How can (human) society have been generated - from primates - during a long process of evolution, and how can language have been generated, which functions beyond the levels of articulation of zoo-semiotics - i.e. the way other animals, (including primates) use gesture, sound, body language, etc, to differentiate and implement ‘uses’/functions - within their animal communities?

In part this is an extensive ‘thought experiment’ but in part it is based on empirical (and even experimental) comparative studies of human and primate growth - as individuals, and also within their communities - separately and in human/primate mixes (there are several studies of primates raised alongside human babies of the same age, in human families).

So …

My basic view is that ‘discourse’ (in Foucault’s sense, i.e. the practices of organising texts, bodies and things - human, animate and inanimate (i.e. non-biological, material) - within a [discourse] community) comes first. In other words, the way primates (for instance) organise hierarchy, privileges, power, affection, etc always comes first, and the ‘finer resolution’ differences that they make - gestures, sounds, etc - come second. These sign systems are a way for the ‘discourses' - of community, power, affection, solidarity, etc - to ‘reach down' into these finer layers of differences, to make more (and more detailed), and in the end even more ‘abstract’, differences.

Humans just happen to have a much finer set of differences (language - spoken, then written, then mathematics, then computer code) into which these social discourses can reach, to articulate finer (and richer, and [sometimes] more ‘open’, more ambiguous) distinctions.

So I would agree that the way humans (and primates) create, articulate, and police their identities is by creating differences in their practice and their communities. They don’t first create identities in some ‘inner space’ and then articulate them in the differences of practice - the two are interlinked, and identity emerges from the practice of interaction with the material and social worlds.

Which means that rather than thinking about identity and difference as opposed, and somehow ‘separate’ fields of activity, I would rather ask the question in quite a different way, and say that identity is a practice of social difference but at the same time a process of social micro-cohesion and (generally) broader exclusion - i.e. a material and a semiotic practice of making physical, communicative and social differences - between an 'us' and a 'them' (or more than one 'them').

Of course it is also possible for (humans, via language) to develop and articulate their identities as ‘separate’, or ‘abstracted’ from their practices, and to defend these abstract identities to the death - even in the light of material evidence that they are no longer viable.

And at a more abstract level, it is, again, also possible for humans to construct and create identities which are inclusive (e.g. the extensive history of the practice/campaign of/for 'human rights') for all humans - but then you just end up in the same old paradox - there is no identity without difference (which is just a variation on the theme: there is no meaning without difference). In this case the difference shifts from being different from other humans, and communities, to being different from other animals, and species - and it then raises the question: do primates, dolphins, etc (add to taste ...) have rights too?

Perhaps the whole 'rights' discourse is flawed to begin with?

Gandhi offers an alternative approach to Human Rights: when approached by the UN committee drafting the 'Universal' declaration of human rights, Gandhi said that it was a very bad idea, because humans have universal obligations, not rights, and it's only when they start fulfilling those obligations that we can start talking about rights, at all. (And extending our obligations to other members (and parts) of our eco-system is much easier to think through than to try to reach agreement on which rights 'we' should 'allow' dolphins (etc) to have).

The principle of ahimsa is a good place to start: i.e. the idea that all life has a spark of the divine in it, and must be respected accordingly - which informed Gandhi's practice - as a karma yogi.

So ... all semiotics is a question of difference machines (and varying levels of resolution, richness, ambiguity and suturing - or 'closing down' - meanings and use), and all identity is derived from/within the practice of material, and semiotic, and social differences. Whether identity is then abstracted (and reified - in the Marxian sense) and given a ‘life' and a status of its own, is a subsequent question - about these ‘practices of abstraction’; abstractions which are used to frame, energise, limit, and even oppose 'other' identities and practices.

18/9/16 JM
This relates to the text on your Abstractions page

I wasn't sure about where you would prefer edits, but since this page has an invitation I thought it best to put it here. On reading the Abstractions page I was reminded of Halliday's work on nominalisation and wondered whether it is relevant. I leave it for you to decide.

6/9/16 JM
The second part of McGilchrist's book The Master and his Emissary is devoted to showing how there has been a succession of shifts of balance (in the Western world) between the left and right hemispheres over the past 2000 years. McGilchrist takes us through the cultural shifts that occur in the Ancient World, the Renaissance and the Reformation, the Enlightenment, Romanticism and the Industrial Revolution, and the Modern and Post-Modern Worlds. On the course he told us that civilization has risen and fallen three times, with each fall being associated with a lack of balance between left and right hemispheres and left hemisphere dominance. He sees us as being on the verge of another dark age and wonders whether the left hemisphere is now so dominant as to make the situation irretrievable. But he says he is a 'hopeful pessimist'.

You have written "For as you strip out context, you inevitably strip out responsibility too." I think this is true, and relates to the keynote that Josie Fraser gave this morning to the ALT-C 2016 conference, where she was talking about 'trolls' online and our responsibilities. It seems to me that the internet has stripped out embodied context, making it easy not to take responsibility. According to McGilchrist there is not enough embodied experience in our lives, and we have made/are making our bodies into machines.

I think McGilchrist would agree that a 'hurry up' is needed and he says the starting point is the education of children. He has a couple of new books and a documentary coming out soon - so hopefully he will say more about it then.

5/8/16 RW
Divided brains, divided societies
Had a look at one of McGilchrist's videos.

It's very striking that the Left/Right brain divide coincides exactly with the complicated/complex divide (or the linear/complex systems divide). It also explains why the 'left brain' is so strong in providing 'arguments' to prove that it is the only game in town - which is why the 'left-brain' world cannot countenance the 'ordered yet unpredictable' world of complex-adaptive systems (literally, of course, is the joke!).

How can it be that we are the product of (complex-adaptive) evolution, yet we can happily deny that complexity exists?

29/8/16 RW
Discipline and the left-brain (functions)
Jenny Mackness (28th August 2016) wrote a fascinating account of a seminar of Iain McGilchrist's.

This is my response ...

Jenny, I share his, and your (?) misgivings about a left-brain-thinking world.

But it's where we are. We have 'progressed' through an age of 'reason' (or 'rationality', more accurately) to a point where we have opened the Pandora's box of scientific (abstract, context-less) tools (see Susan Blackmore on our 'pandoran civilisation').

These tools have been stripped of context, and anyone can use them for anything they please. That's great for sharing, but it's a disaster for responsibility. For as you strip out context, you inevitably strip out responsibility too. So, given that we have let these tools, these currencies (atomic power, genomics, nano-engineering, etc) out of the box, into the hands of God knows who, we have to create a human context to contain 'them'.

But it is no use trying to contain the tools, or your hemisphere of choice - the tools have already taken on a life of their own, and can't (like Sauron's ring) be cast into a fire and destroyed. "Them" means us.

There is no point in being pessimistic or depressed about where we are - we have let these tools loose, once and (unfortunately) for all - for all ways and all people, and, for McGilchrist's line of thought, for all time, too. It's up to us to decide what 'us' is, and to focus our energies there. We may not have much (human) time left, so there is a bit of 'hurry up' involved, too.

26/8/16 RW
Practical Epistemology ?

There are many ways to describe abstraction. Matthias Melcher points out that describing abstraction as a process of ‘stripping away’ (subjectivity and context) might be a bit too strong (see his blog post on Recognising, below). And Stephen Downes, in responding to Matthias, and addressing some related issues, questions the very idea of cognitive states, and endorses eliminative materialism instead, writing that “there are no representations, mental contents, etc., and that cognitive processes are not linguistic, logical or computational processes. See eliminative materialism”.

I am not convinced that searching for brain/hemispherical states (as in 'right brain / left brain' theories), or trying to prove that there are no 'mental contents', is a useful way to proceed. I find Matthias's interpretation of McGilchrist's views very helpful, viz. that what is important is to distinguish between right and left brain area functions as functions, rather than as hemispherical states. (And this supports the broadly accepted idea of the substantial plasticity of the brain, too).

Instead, I am interested in pursuing a practical epistemology, an epistemology of use, based on Barthes' (and Wittgenstein's) view that signs and meaning (and therefore knowledge) are crucially about use, i.e. about the way we create, maintain, and employ signs in our everyday (material) lives. Whether such uses of signs correspond to areas (and even states?) of the brain is another matter, although, of course, it can be useful to explore the neurology of 'sensory' and 'motor' functions of areas of the brain, or even better, J. J. Gibson's more synthetic notion of perceptual action and affordances, as well as Ramachandran's views on the evolution of particular parts of the brain and their role in modality-based abstraction (see abstraction #1 below).

I prefer to see signs quite simply as tools - albeit sophisticated tools and even reflective tools, but tools nevertheless, which link back (in evolution and in sociogenesis) to tool-making and tool-using - i.e. right back through primate- and zoo-semiotics.

On this basis, we can make some progress if we simply look at the use of signs in science (for instance) and describe in some detail how it is that ‘technical’ scientific terms arise, in practice - some terms are completely new terms, some are borrowed from everyday language and repurposed in their ‘scientific’ sense, or more particularly, for their scientific use. And this process is a rigorous process of stripping out the everyday richness and ambiguity of signs, which is specific to a multitude of overlapping individual, psychological, familial, tribal, cultural, historical (etc) contexts, and leaving only those uses (or meanings) that are ‘scientific’, i.e. signs which are open to use by anyone, anywhere, any time, in the same way (this can also be seen as the ideal thread in science, towards 'universal' laws).

And these scientific signs can then be combined, or used in concert with each other, in algorithms, texts, programmes, theories, etc, with similar exemption from any reference to non-scientific criteria, or to particular people, places or times. They are, simply, scientific (and technical) currency, which is based on its 'internal consistency', or what we might call internal resonance, or an additional layer of cumulative resonance, which gives science its immense (and sometimes frightening) power.

Such signs and texts are, literally, abstracted from subjects, agents, institutions, cultures, and eras, etc, and they are paradoxically constructed very narrowly to perform (or help us to perform) only very precise functions, but in the broadest possible range of contexts (subject, of course, to the radical contingency, and the falsifiability criteria, of all scientific knowledge), and the possibility of being combined in some way with the broadest possible range of other 'scientific' signs, algorithms and texts.

Currency and Athenian science .... (WIP)

24/8/16 RW
Abstraction and Subtraction?

In response to Matthias's posting on his blog, on

Recognising (and neurobollocks)

Some initial responses ...

1. Abstraction - Stephen's idea of 'subtraction' has some resonance with the semiotics of knowledge, and particularly with 'meta-semiotic' knowledge (see the article on Meta-semiotics and practical epistemology, on ResearchGate, at: ...), which defines science as 'subject- and context-stripped' propositions and texts.

That looks like a case of 'subtraction', but it's more and less than that. It's based on a more general process of patterns of use (rather than of 'recognition' per se) - so the starting point is fundamentally semiotic, i.e. Barthes's idea that "every use becomes [or can become] a sign of itself". The process of 'abstraction' is then, fundamentally, a process of stripping away some uses to create new uses, rather than stripping away perceptions. ('Nested boxes' of uses within uses or 'disciplines' might work here too, no? This is a kind of 'pragmatics of abstraction', no?)

And to make matters more complicated, this resonates with a notion of 'perceptive action' or affordance, a la Gibson - i.e. the idea that the distinction between 'perception'/'recognition' on the one hand, and 'use' or 'action' on the other hand is at least not very helpful, and at most just a (negative) legacy-distraction from pre-ecological (or 'primitive' - in the literal sense of the term) psychology.

To get back to the 'stripping' - [add: McGilchrist's example of sculpture ...] - what happens, for instance, in the first uses of the word 'motion' in physics is that physicists (Newton, in particular) appropriated a word in everyday use ('motion') and then stripped and pared it down to the scientific term 'motion', which no longer depends on, nor is linked, networked, associated, contaminated (etc) by, a particular subject or context (person, agent, institution, culture). In the process, it becomes 'pure [intellectual] currency' - it has been stripped of subjectivity and context so thoroughly that anyone (or any agent) anywhere can use it at any time. This is (of course) an extrapolation of the ideal notion of monetary currency, which would (if it existed) similarly have 'universal' exchange value. (The problem with money, as we all know, is that exchange value seldom, if ever, translates to universal use value - whatever that would be. We would at the least have to make a distinction between 'use-ability' and 'use value': the former in principle context-free, the latter context-dependent).

So ... 'abstraction' in this particular sense - (there are several others, see the posting on 11/8/16, below) - is paradoxically based on narrowing down a particular (scientific, meta-semiotic) use of the word 'motion', but at the same time broadening out the context of use/rs to the universe of all possible agents (or actants, in Latour's sense). And on top of that is another codicil (or meta-paradox?), i.e. the falsifiability, or contingency of all scientific knowledge - or the idea that scientific knowledge is only good for as long as it doesn't get refuted.

And to make matters even worse (or better, if you think it's more interesting) this process of 'universalised' scientific knowledge (e.g. the genome of anthrax) removes, or at the very least 'uncouples' this kind of knowledge from responsibility. So ... is this kind of abstraction a process of creating (potentially) 'rogue' abstractions, and what kind of a mixed metaphor is that?

Which leaves me to think that 'abstraction' itself is a useful example of how abstractions (plural) are actually a 'resonant family' of tropes of stripping out, or stripping away, or shifting above, shifting across (add to taste ...) ... relative to: modes, tropes, subjects, contexts, eras, 'paradigms' (add to taste here too ... ).

And, Matthias ... it might be interesting to explore what role/s the Master &/or Emissary might have in creating, implementing, and challenging some of these 'abstractions' and responsibilities. (I hope to get to some of those issues in comment #2 later, much later ...)

11/8/16 RW
How does Abstraction relate to Knowledge?
Resonant knowledge/s covers a large range of types of knowledge, which are based on a range of levels and types of abstraction. So, it might be useful at some point to unpack the range and types of abstraction, here ...

We start with a working definition of knowledge as the capacity for effective action (add link ...), and we also start with the foundations of semiotics: the idea that "every use becomes a sign of itself" (Barthes) (add link), or perhaps more specifically: 'every use can become a sign of itself'. So both definitions are grounded in pragmatics, although they very quickly develop in other directions too - epistemological, ontological, evolutionary psychology, etc, all of which are part of the broader field of socio-genesis.

In a very real sense, as signs become more complicated, and more abstract, they 'lift off' from their material, sensory, and pragmatic origins. (This is similar to Marx's analysis of the way exchange value is 'lifted off' from use value) (add link), and it also links to Rossi-Landi's definition of 'work' as 'appetite held in check', (add link) or as other writers might say, 'deferred desire', placing time (and history) at the foundations of semiotics.

So, working (or playing) with these notions, we can identify about nine different types of abstraction, although they overlap to some extent.

Types of abstraction

1. Modal (i.e. sensory modalities: auditory, kinesthetic, visual, etc), to cross-modal (or patterns of patterns across sensory modes), to a-modal or modality-free (see Ramachandran's work and the examples in the Synaesthesia paper) (add links ... )

2. Interchangeable, token, or transactional signs: signs which have 'no' intrinsic meaning of their own. For example, 'I' and 'you', which are most commonly used as 'tokens': they are 'passed' around conversations from one person to another, and signify different people, depending not on use per se, but on who is using the token at a particular time and place.
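A toy sketch (my own, not from the text) of how such token signs might work: the referent of 'I' or 'you' is fixed not by the sign itself, but by who is using it, and to whom, in a particular conversational turn.

```python
# Hypothetical sketch: 'I' and 'you' as transactional tokens.
# The same token is 'passed around' a conversation, and its referent
# depends on the speaker and addressee at that moment, not on the sign itself.

def resolve(token, speaker, addressee):
    """Return the referent of a token within one conversational turn."""
    referents = {"I": speaker, "you": addressee}
    return referents.get(token, token)  # non-token signs refer 'as themselves'

# The same token, used by different people, signifies different people:
print(resolve("I", speaker="Anna", addressee="Ben"))    # Anna
print(resolve("I", speaker="Ben", addressee="Anna"))    # Ben
print(resolve("you", speaker="Anna", addressee="Ben"))  # Ben
```

The point of the sketch is just that the lookup table is indexed by the occasion of use, not by the token - which is what makes these signs 'interchangeable'.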

3. Signs which are, inherently, more abstract. Signs of number, for example. (This is perhaps a subset of the next type, 4)

4. Signs which are more complicated, as they become texts. These progress from gestures and words (roughly 'indexical' signs) to sentences, to paragraphs, to articles and books, etc, as their use moves from 'indication' to 'predication' - for example, the shift from an instruction to my dog to 'go outside' to a statement to my child that 'I would like you to go outside'.

As signs progress from indication to predication, to full blown texts, they gain the potential to be used across contexts - used in a different time, place, language, and era, although they do not always 'translate' that well. These sets-of-signs or texts are cross-contextual, in a similar way to the idea of cross-modal abstraction (see type 1). There is an additional layer to the analysis of this particular modality, which is Blackmore's gene / meme / teme analysis of our 'Pandoran' (and 'viral') society (add links). So in a variety of ways, signs, texts, and memes/temes take on a 'life of their own', independent of their original production, just as RNA / DNA / genes do in biology.

5. Context-free
Signs like 'time' and 'motion' are used (and thus have meaning) in everyday language, as well as in physics. In physics they have been highly formalised, resulting in much narrower meanings (and uses), which are stripped of subjectivity and of context. Here the subjectivity that is at the heart of 'predication', personal pronouns, and identity is set aside in the interests of a very different type of semiosis. This is deliberate, and methodological - not contingent or serendipitous, as is the case with cross-contextual signs and texts (see 4, above).

The result is a set of signs that is inherently abstract - its pragmatics is context-free (which is very paradoxical), as these signs and texts are constructed so that they can be used by anyone, anywhere, in any language, in any era - which makes them much more powerful, as well as potentially much more dangerous (e.g. the publication of the genome for anthrax). They are also, ironically, inherently falsifiable, which is the foundation of their (contingent) 'truth'. They can be seen as the set of meta-semiotic signs and texts (add links ...), a set which to various degrees share the essential characteristic of being context-free, i.e. the set comprising science, mathematics, bureaucracy, money, and representative democracy.

6. Complex-adaptive systems
Crucially, these systems are ordered yet unpredictable, although they do require (negative) constraints. (add detail ... )

7. Opaque
These signs are too complicated to interrogate - they operate at a scale (size, number, resolution, speed) which is useful - they can run complicated systems and perform very complicated actions - but it is impossible to interrogate their workings, as they work too fast for the human brain/interface. The four-colour map problem is a case in point, as is the case of current versions of machine 'language' interaction with humans.

8. Viral / collaborative / cooperative
Within the overall description of knowledge as the capacity for effective action, and meaning as use (see Wittgenstein), the description of knowledge/s is somewhat dependent on structure (e.g. types of abstraction), and somewhat dependent on user-group, or discourse community (add links - Text and Discourse, Foucault, etc). Discourse communities can range from the highly informal (and random, even including avatars, nowadays) right through to the highly formal. The more informal they are, the more they can be viral, and based on emotional rather than cognitive criteria. On the other hand, uses too can be viral (see Blackmore's examples of memes and - in their more technically embedded form - temes (add link)), and therefore more scientific.

9. Addictions
There are a number of addictions that would appear to be yet another category of abstractions. See here ...

The point about addictions is that they draw your attention and energy away from the messy materiality, and interconnectedness, and complexity, of daily life. Abstractions can be good and bad news, as they provide stronger, more deft, higher resolution, more exchangeable (etc) semiotic machines, systems of difference, and algorithms and code - for synthesis into further abstractions and programmes - all of which, in plain language, are simply tools for description and use.

So we can probably describe addictions as i) a type of abstraction (type #9, provisionally), and ii) as mal-formed abstractions - perhaps for the simple reason that their 'use' eats itself up, and they are not (like other abstractions which provide - or aspire to provide - universal currency, and even universal recombinant currency) sustainable. In the world of abstraction types, addictions deliver only malnutrition, no?

And, as unsustainable forms - of malnutrition - they contravene Monbiot's (and many others') dictum that "by living lightly, we enrich our lives".

6/8/16 RW
Note: Here, as elsewhere in the Resonant Knowledge hypertext, 'chaos' will be used in the everyday sense to mean 'an unregulated mess' (and not in the technical sense of 'chaos theory'). On the other hand, 'complexity' will be used in the technical sense (i.e. as in 'CAST' - see below), and not in the everyday sense (i.e. to just mean 'complicated'). And ... apologies for the confusion, as it is the case that 'chaos theory' and 'complexity theory' are very similar, if not interchangeable. Where possible, I will use the term 'complex-adaptive' instead of just 'complex', which might help a bit.

Open or complex systems, or both?
Having worked quite happily, for some years, on open learning and open learning systems, it strikes me that it is high time to move on. What we now need (to make sense of theoretical and practical semiosis) is a distinction between chaos (in the everyday sense) on the one hand and, on the other hand, self-organising systems - i.e. recognisable systems which are, however, flexible and adaptive, and open to ambiguity - or, in more technical terms, complex adaptive systems, in the sense of CAST: complex adaptive systems theory. This means that they are, crucially, ordered yet unpredictable - which is most usefully unpacked by people like the late Paul Cilliers.

And see here for some fascinating examples of self-organising, complex-adaptive systems in practice (on the lab bench, but still ... )

So, for instance ...
Why is wikipedia a complex adaptive system, and twitter is not?

This question, like so many others, is not clear or absolute. For starters, wikipedia might have started as a potentially chaotic system, which only became a complex-adaptive system when they sorted out their protocols for editors (and managing editors), after which it changed from (somewhat) 'happy chaos' into a self-regulating, and therefore complex adaptive, system.

In effect, what they did was to put procedures in place to effectively manage (and sanction) trolls - in some cases within minutes (or less).

Twitter, on the other hand, is still trying to develop and implement anti-trolling protocols, so it remains chaotic, and still merits Tim Garton Ash's complaint that it is 'history's greatest sewer'. What Ash does not necessarily see, though, is that Twitter is, potentially, a (paradigm case of a) complex adaptive system, and that the question (as Cilliers points out in some detail) is just a matter of managing, and constantly tweaking or adapting, the degrees of freedom that are tolerated.

Cilliers points out that at zero degrees of freedom, a system is neither complex nor adaptive, and certainly not 'open' in any useful sense of the word. On the other hand, with unlimited degrees of freedom, it will sooner or later turn into chaos. So a complex-adaptive system needs to find a balance between the two - definitely with substantial degrees of freedom, and at times close to the edge of chaos, but always with some constraints.

It should, at least, have:
  • a large number of interacting agents, communicating frequently,
  • within reasonable degrees of freedom,
  • with, preferably, a small number of simple rules.
The effect of the large numbers is both that the system can be self-regulating and adaptive (if protocols for monitoring and tweaking are in place), and that no single agent can know what is going on in the whole system: it is ordered yet unpredictable. As chaos theory tells us, if you want to know what is going to happen, you have to put your hands behind your back, and run it, and see ...
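The 'run it and see' point can be made concrete with a minimal sketch (my own toy example, not from the text): an elementary cellular automaton - Wolfram's rule 30 - in which many cells interact under one very simple local rule, and the result is ordered yet unpredictable. Two initial states differing in a single cell rapidly diverge, so the only way to know the outcome is to run it.

```python
# Toy illustration (an assumption of this sketch, not the author's model):
# Wolfram's rule 30 - many interacting cells, one simple deterministic rule,
# yet behaviour that cannot be predicted without running it.

def step(cells, rule=30):
    """One update: each cell looks at its two neighbours (ring topology)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(cells, steps):
    for _ in range(steps):
        cells = step(cells)
    return cells

# Two initial states differing in a single cell ...
a = [0] * 41
a[20] = 1
b = list(a)
b[21] = 1

# ... diverge within a few steps, even though the rule is fixed:
diff = sum(x != y for x, y in zip(run(a, 20), run(b, 20)))
print(diff)  # a positive count of differing cells
```

The rule is about as simple as a 'rule' can get, and every agent only 'communicates' with its immediate neighbours - yet no single cell (or observer) can deduce the global pattern without running the whole system, which is the sense of 'ordered yet unpredictable' above.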

So ... this line of questioning seems to be leading to the idea that completely 'open' systems are potentially hazardous, unless they are also complex-adaptive, and hence self-regulating (though still 'managed', in the sense of constant light-touch monitoring and tweaking). In which case calling a system, or a programme of learning, or a social network 'open' is no longer adequate, and could even be quite misleading.

Examples of CAST in:

MEDIATE: Synchronicity or resonance?
The case of Mr Purple in MEDIATE is an interesting one. He is a boy on the autism spectrum, who engages with the MEDIATE interactive 'room' (with his body) and finds a way to successfully manipulate the playful 3D environment so that it produces the colour purple, and does so repeatedly. The MEDIATE interactive room involves no text, and the people who use it are given no instructions. When the boy is later asked why he focused in on a colour (instead of a shape, or sound, or movement), and why the colour purple, he said he just really liked the colour. When asked what he would like to do with the colour, he said he would like his room to be purple, so it was duly painted purple. From that day onwards he slept through the night and stopped sleepwalking, for the first time in several years.

So, what's going on here? This could be a case of synchronicity, but it probably makes more sense to talk of this as resonance (and resonant knowledge) - albeit intuitive knowledge, not 'cognitive' knowledge. In other words, his sensibility and the colour resonated in some way, and he felt good, or happy, in it.

This looks like an interesting case of intuitive resonant knowledge, which we will provisionally use as an exemplar, an indicator, of this particular mode or trope of resonant knowledge.

So we might start off our taxonomy of resonant knowledges by listing this first one:

Taxonomy of Resonant Knowledges
1. Intuitive (see Mr Purple, in MEDIATE) WIP
2. Synchronized / context-free / context-independent?

Communities or Tribes, or both?
Meaning is always situated within cultures, sub-cultures, or the disparate post-modern 'tribes' that McLuhan correctly predicted, in 1969, we would be living in, within our global village. So what are these cultures, and what is the nature of these tribes? In broad terms: are they ideological, epistemological, familial, ontological, or (also) meta-semiotic? (i.e. framed and motivated by political belief, by the criteria of formal knowledge, by 'family-like' relationships, by ways of being, &/or by their abstraction and decontextualisation, respectively?)

It must surely depend on the different types of tribes: for instance, if it is the 'tribe' of MEDIATE-users, it would be ontological.

Work in progress ...

Why do we need 'complexity'?

Complex-adaptive systems
Complex-adaptive v. linear systems; positive and negative constraints.

What has resolution got to do with it? ... at macro and micro/nano scale, in large numbers, and at high speeds (see: the four-colour map problem; computer/human 'language' interaction and creating questions that can't be asked; the effect of nano-materials as bactericides; bacteria as gene-code incubators (along with virus