Building on the tentative ideas on assessing complexity elsewhere, here are some ideas on assessing emergent learning ...

1. Best option: separate out emergence and assessment.
Simplest way to do this is to design an event for learning and emergence, and manage it on that basis. No assessment criteria in sight. Then ask the learner to determine not the assessment criteria, in the abstract, but rather the assessment community, for and in which the assessment artefact will have value. Then task the learner (or a group of learners) with finding and negotiating with that practice community (outside of the course, please), asking them what their assessment criteria would be for these emergent outcomes, and then see if you can get someone to assess the artefact against those criteria. It's simple, if time-consuming, but at least it's not narcissistic (as much academic assessment is - see option 2).

In elearning terms, DON'T let anyone assess postings or interaction. Rather, assess a critical reflection on postings/artefacts/portfolios, written by the learner, within a Wenger-ian framework like 'learning trajectory', which could include events and interactions within and outside of the 'course'.

The point about emergence is that it's about learning, which is not confined by the course, even though it does need to be confined within a field - one which might go beyond the course field into related disciplines. If you just want to assess compliance with the course, that is essentially an assessment of learner compliance (at best), or more probably an assessment of the teacher's ability to achieve learner compliance (which might or might not include learning, but that's a separate issue).

I don't know what an extensive set of Nested Narratives would yield, as we have only worked small scale so far. However, I strongly suspect the trend will be that there is substantial learning which is not assessed, and moreover that the proportion of un-assessed learning that is valued by the learner, and by the relevant practice community, will be even more substantial. And no, I don't think this is anything new; I think most of us have known it for a long time.

2. Default option (course manager's favourite)
That 'community' might default to the peers on the course, which I think is at best the 'least worst' option, and should be avoided if possible. It's based on the assumption that 100% of university graduates are going to be employed in universities, which is a bit off the mark, and on the assumption that people on the course have a deep understanding of the research criteria of the subject field (which is often what they are supposed to be learning, so that's a vicious circle).

3. Match emergent outcomes against existing outcomes
This tends to be limited by the range of existing outcomes available in the institution. Even worse, it's very difficult to match against existing outcomes in more than one course, and more difficult still to match against different levels (particularly undergraduate and postgraduate). So much assessment is locked into sequence, which defines and restricts the pace of learning - or rather, to be more honest about it, it's locked into the pace of educational provision: the delivery of commodities to the educational marketplace.

However, it is possible to overcome these limitations, and some of the best work-based learning succeeds in doing so, at least some of the time.

4. Negotiate new assessments against emergent outcomes
This effectively means negotiating not against outcomes, but against the curriculum itself. Professional bodies can't cope with this at all. Some portfolio assessment goes some of the way towards doing this.