September 12th, 2010 · No Comments
This is a "position paper" for the September 24th discussion meeting, "The Future of Interoperability Standards – Technical Approaches", organised by JISC CETIS in association with the ICOPER project.
This contribution follows up on my position statement to the January 2010 "Future of Interoperability Standards" meeting. One conclusion was that "we need a better understanding of the different phases of standardisation work and the different stakeholders and types of expertise involved".
This September meeting is about technical approaches to LET standardisation. I would suggest taking two steps away from current practice to reflect upon how to improve the legitimacy of LET standardisation. Through a number of ICALT papers Paul Hollins and I have developed a Process and Product Legitimacy model of LET standardisation (Table 1).
Table 1 Process and Product Legitimacy of LET standardisation (Hollins & Hoel 2010)
- All 'interests' considered and ideally represented
- Inscription of stakeholders' interests in the final standard
- Enactment status of the standard (is the standard implemented and used in services?)
- Balanced choice of Standard Setting Body
- Technical maturity of the standard
We have seen that this model needs to be extended, especially to capture more of the technical aspects of the standards we produce. As a writing exercise I have tried to come up with some constructs that could inform a framework on LET standardisation, comprising four ontologies (Figure 1).
Figure 1 Contextual dimensions of LET standardisation activity
The hope is that these ontologies could be used to develop a prescriptive model of LET standardisation that could improve the process and product of our activities. (For details on the construction of these ontologies, see the draft paper.)
Where the process "first meets" the product is when the New Work Item is developed. A brief textual analysis of the proposals presented to the CEN Workshop on Learning Technologies shows that technical issues related to how the standard should be developed are not discussed at this crucial time in the development process. The proposal is designed more to fit the funding schemes of the European Commission than the technical needs of the implementers. Any reference to a preferred design approach is highly superficial. If you want to foresee where the specification will be heading, you have to look outside the proposal text, e.g., at the standards that are referenced as building blocks for the new work.
To come up with an agreed design approach we need to make sure that this issue is addressed as early as possible in the standardisation process. Changing the New Work Item process might be too optimistic. However, ensuring that the scope discussion is taken seriously after the project is funded or accepted should not be impossible. To make sure that such a discussion gives the project a good direction, we should develop some best practice guidelines.
This draft paper is a first stab at developing such guidelines. The discussion is on a more abstract level (therefore two steps away from practice) than the recommendations in the position papers of Scott Wilson and Adam Cooper. For future versions of the paper I will see if their contributions could serve as "extensions" of my proposed framework.
The current version of the prescriptive framework consists of a number of stages, depicted in the following figures.
1st step – Establishing the project, its team and analysing the background
The first step is to establish the project and its members. This should be done in parallel with a background analysis to establish the historical context of the project, the characteristics of the application domain, and the basic views underlying the proposal, its scope and its potential solution.
Figure 2 First step of a LET standardisation process – analysis of history, application domain and basic assumptions.
2nd step – what are the competing frameworks / design approaches?
The next step is to "establish" a competing or contrasting activity system working towards the same outcome as the proposed project, see Figure 3.
Figure 3 Second step of a LET standardisation process – envisioning the project as part of competing activity systems
I’m not suggesting establishing parallel teams, but creating an awareness throughout the project of the ecosystem of activity systems working in the LET domain. The contrasting activity system could be a project working on similar tasks at a national level, in other countries, in other standards groups, or at other times in history. The main objective of this alternative system is to challenge the proposals of the development team at all times.
3rd step – Analysing design approaches through perspectives
Figure 4 describes the third step, discussing alternative design approaches through three perspectives: the systelogical perspective (why); the infological perspective (what); and the conceptual perspective (what does it mean).
Figure 4 Third step of a LET standardisation process – three different perspectives on the methods development
4th step – Concluding on design approach – model level and kind
The fourth and last step is to agree on what kind of model should be developed, and at what level (Figure 5).
Figure 5 Summary model of the LET standardisation prescriptive framework
The discussion on model level might take into consideration the discussion in Scott Wilson's position paper on the use of UML or other modelling techniques, or abstract models like the Singapore Framework. It could also include the principles proposed by Adam Cooper for the structuring of data-oriented interoperability specifications.
Tags: Mind the Gap
International Journal of IT Standards and Standardization Research will soon publish its issue 8(2), which is a special issue on LET standardisation. Editors are Jan M. Pawlowski, Paul A. Hollins and Tore Hoel.
LET standardisation – framing the activity space using the JITSR special issue
The simple framework of Wand and Weber (2002) (Figure 1) could be used to position standards, methods and contexts, while the special issue on LET standardisation of the International Journal of IT Standards and Standardization Research (to be published 2010) gives us an account of the current state of affairs.
Figure 1 Framework for Research on Conceptual Modelling, from Wand and Weber (2002)
The specifications and standards are the "conceptual modelling scripts" this community works towards. However, there is no consensus among the authors of the special issue papers on how to define these outputs. Cooper's definition is wide: "the word 'standards' is used (…) for virtually any multi-laterally agreed set of technical conventions" (Cooper 2010). Pawlowski & Kozlow reuse the 1990 IEEE definition of a standard as "a set of mandatory requirements employed and enforced to prescribe a uniform approach in a specific area" (Pawlowski & Kozlow 2010). This leads to the definition of a reference model as "a framework that can be used as a blueprint for system development" (ibid.). Between a "convention" and a "(mandatory) blueprint" there is space for a range of ontological and epistemological considerations. We would claim that Cooper's definition opens up more for the discursive and consensus-building aspects of standardisation, while the definition of a standard as a technical blueprint belongs more to an engineering tradition looking for a true representational model of the domain.
The choice of conceptual modelling grammar is seldom raised as an issue within the standards community. "Language skills" seem to go with the territory, and information scientists are equipped with a toolbox that makes it routine to turn out information and data models in universally understood notations. However, in the European discussion on concepts and standardisation in areas related to competence we have seen more focus on conceptual modelling as a means of communicating with the stakeholders and of better scoping the work in LET standardisation projects.
Conceptual models provide a vital underpinning for information models, helping ensure that the concepts represented in different information models are compatible, and that specifications built on those information models will actually help with interoperability and portability. (Grant & Rowin 2010)
The broader conceptual modelling method discussion is mostly focussed on the process aspects of standardisation, e.g., whether formal standards setting bodies are the appropriate means to come up with the specifications that the LET community needs (Wilson 2010).
Related to conceptual modelling context, Wand and Weber (2002) point to individual difference factors, task factors and social agenda factors. Reviewing the JITSR Special Issue in relation to these factors, we see that we are dealing with a domain whose boundaries are still under negotiation. Some authors seem to presuppose that they work with sub-domains that are well scoped and where more targeted approaches could apply (e.g., approaches that could be subject to automatic conformance testing) (Dahn & Zimmerman 2010; Najjar et al. 2010; Pawlowski & Kozlow 2010). Other authors envisage a domain that is emergent, complex and unruly (Cooper 2010), that is more suited to a pragmatic and community-driven approach (Wilson 2010), in which collaborative modelling building up common conceptual models is needed (Grant & Young 2010) to explore the new boundaries of learning technologies (Livingstone & Hollins 2010).
This short overview of the LET standardisation domain, based on recently published research, shows that there is no unified view on how to go about designing the building blocks we need to innovate learning technologies. This underlines the need for a continued discussion based on models and constructs that will help us improve both the process and product of this activity.
Wand and Weber's simple framework is a starting point for building a research agenda. The next step is to come up with an approach for building concepts to deal with the scripts and their development, the methods used and the contexts that frame the activity.
Tags: European Standardisation · Mind the Gap · Standardisering
Methodology for analysing and validating standards was a major issue during the Vienna General Assembly last week. The issue is on the agenda for a Flashmeeting in two weeks, where partners will elaborate on how we go from a conceptual domain model to Data Models, Service Models and Process Models.
For some time I have been a little concerned that we are making life too simple for ourselves when mapping the different layers of the ICOPER reference model to the relevant standards. We should also observe that there are different “families” of standards, and allow for a discussion on what family is the best match for the competency domain.
In Simon Grant's excellent blog posting on our last deliverable with the long title "Model for describing learning needs and learning opportunities taking context ontology modelling into account" there are some seeds for the discussion I call for.
Then it [the deliverable] goes on to suggest an information model for "Learning Outcome Definitions". This is a tricky one, as one cannot really avoid IMS RDCEO and IEEE RCD. As I've argued in the past, I don't think these are really substantially more helpful than just using Dublin Core, and in a way the ICOPER work here implicitly recognises this, in that even though they still doff a cap to those two specs, most of RDCEO is "profiled" away, and instead a "knowledge / skill / competence" category is added, to square with the concepts as described in the EQF.
When Simon refers to Dublin Core, it is the DC Abstract Model he has in mind, the model based on RDF and the semantic web. I wonder if the specifications we turn out as a result of the ICOPER project are "web architecture enabled" – or if our focus is still the repository view of walled "content spaces", be it OICS or others.
Tags: European Standardisation · iCoper
This post is my position paper for a CETIS meeting on the future of interoperability specifications in education.
How do we represent our ideas, positions, and, for that matter, our domain models or enterprise architectures? While the cool guys were still mostly talking to themselves, keeping the hangers-on at a certain distance, nobody questioned which representation framework to use. UML was the state of the art, and few asked whether these diagrams communicated well, or whether we needed a broader view of the domain before we embarked upon the information model. A standard consists of an information model and a binding. It should have a scope, and there should be some good use cases justifying the new work item. That's it!
Then two things happened. First, the interoperability standards in the LET domain failed miserably. Second, ICT developed more to the benefit of Learning, Education and Training than anybody could have dreamt of. All of a sudden, anybody (well, so we claim) can do almost anything with technology to support what they want in learning, e.g., finding information, expressing views from different perspectives, building communities, etc. Who asks for standards any more? Well, the end user shouldn't anyway, but then the ones that should ask for LET standards are not very enthusiastic either!
The technology has changed. However, the need to communicate about our understanding of the domain has not. Consequently, modelling and enterprise architecture have been put on the agenda. An emerging understanding of the need for good modelling tools and modelling frameworks resulted in a number of actions within the standards community. The modelling workshops at the last JISC-CETIS conferences were only one sign of a new interest in different, more or less formal, ways to keep the conversation going about how we understand and support our domain with ICT services. Even some standards bodies have been looking for new tools and representation techniques. ISO/IEC JTC1 SC36 decided last year to use Cmap Tools for conceptual modelling (and the committee secretariat announced they were going to host a Cmap server). We now see concept maps starting to be published in working group drafts for new standards. In CEN WS-LT, too, there is active use of conceptual modelling, with many of the maps hosted at the Cmap server of the European ICOPER project (www.icoper.org:8080).
Not surprisingly, the move towards new languages shakes up the old power structure and is met with countermeasures. We will look at some instances of this new interest in alternative representation frameworks and techniques, to see if we can better understand the actions taken, and also to be better able to recommend appropriate steps concerning the governance and support of the process.
The DC-ED case
During one week in December 2009 the Dublin Core Education community's process of building an "Application Profile Domain Model" was kick-started with a flurry of mails to the list server and a Flashmeeting to discuss the draft model. The draft was made in Microsoft Visio, an all-purpose drawing tool that does not restrict what models you can draw.
(The author of the map was nearly talked into using Cmap Tools and storing the map versions at the European ICOPER project Cmap server together with maps from a number of other related projects. However, a tweet informed us: "given up on Cmap to do new version of DC-Ed AP Domain Model. Back to Visio. Sorry @tore just couldn't get it to do what I wanted!")
In the Flashmeeting several of the participants said they were pleased with the opportunity to have a “walk through” of the model. At the end of the meeting one of the co-editors of the map wanted to have a clarification of what was being represented by the cloud and the purple boxes.
"So I think what I am asking, one of the things we need to clarify with this, is what is being represented where in terms of what is a class of entities and what is a property of the entity in that class."
The main author of the map apologised for not being better at using the Microsoft tool, being "useless at graphic modelling", with a promise to do better:
"So, yes, you're quite right, we need to clarify what are classes of things, and what are properties of things that are in that class. And we need to do it visually so things are very clear in different visual appearance, and have a key somewhere so it is clear."
This reply is followed up by a supportive remark from the co-moderator of the DC-Ed Community. Drawing on ethnomethodology and research on "talk at work", we see there is more action going on in this 5:30-minute conversation at the end of a virtual meeting than "what meets the eye". Skipping the tedious analysis of the turns, we may observe the following:
- There is a conspicuous uneasiness about the use of software tools and how to master them
- This false (?) modesty may be covering up what is really difficult: to draw diagrams with more formal representation techniques, e.g., UML
- The talk strongly reflects power or authority relations: the ones that master UML (and are not prey to vagueness) should be in control, and the ones that find the clouds and pink boxes OK should learn to draw with proper tools
- The "true representation" in this talk is seen from a top-down perspective (truth as in God knows what is correct). The bottom-up perspective (the model might communicate well with the stakeholders) was not represented in the talk, even if both the modeller and the participant raising the questions had been drawing the "vague model".
We have other case stories demonstrating that the choice of representational framework has the potential to shake up the peaceful struggle for new standards. In ISO/IEC JTC1 SC36 we have seen that moving away from spreadsheet tables to more figurative representation techniques may alter the discourse order dramatically. All of a sudden new experts have the floor, and the theme of the discussion is on a different level than presence types and linguistic indicators.
In the European work on competency modelling (in the ICOPER project and the European Learner Mobility project of CEN WS-LT) we have seen that the use of concept modelling has been instrumental in the process of negotiating a common understanding of the domain. However, we observe that this community is struggling to find ways to use the conceptual models to build consensus. Simon Grant writes:
So I proposed in the meeting what I have not actually proposed in a meeting before, that we schedule as many one-to-one conceptual encounters as are needed to facilitate that mutual growth of models at least towards the mutual understanding that could allow a meaningful composite to be assembled, if not a fully constituted isomorphism. I don't know if people will be bold enough to do this, but I'll keep on suggesting it in different forums until someone does, because I want to know if it is really an effective strategy.
We have also seen in the WG3 of ISO/IEC JTC1 SC36 (the working group that deals with competency) that there are challenges in understanding what a more conceptual model is, compared to the information models that the standardisation community has been used to.
- We need a better understanding of the different phases of standardisation work and the different stakeholders and types of expertise involved. There is a time for fuzzy models with weak formalism. And there is a time for UML diagrams. When we move from one phase to the other should be subject to open negotiations, bearing in mind that we need to communicate with different groups of stakeholders and experts in today's standardisation of LET.
- We should recognise that communication, community engagement and openness are key factors in the development of LET standards. Therefore sharing of models is of great importance. Open map stores like the one hosted by the ICOPER project might serve as hubs for co-ordinating the efforts of the different communities and organisations.
- We should acknowledge the affordances and trade-offs of the different modelling tools. A simple concept mapping tool like Cmap Tools invites the user to come up with concepts and to type the relations between concepts. The standards community should engage in a discussion on what set of types we should use to create the best models for our purpose. (Simon Grant has proposed to use "process", "material or social thing" and "information" as key types for high-level concept maps.)
- We should explore "best practices" on how to use models in consensus building activities.
- We should develop training opportunities for new experts who want to take part in LET standardisation, giving modelling techniques a prominent place in this activity. There are other ways to do standardisation than sitting in formal meetings in standards bodies.
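The "set of types" discussion in the list above can be made concrete by treating a concept map as typed nodes plus labelled links and checking it against an agreed type set. The following is a minimal sketch under stated assumptions: the map content, concept names and link labels are all invented for illustration, and the three types are the ones Simon Grant proposed.

```python
# Hypothetical sketch: a concept map as typed nodes and labelled links,
# checked against the three high-level types Simon Grant proposed.
# All map content below is invented for illustration.
ALLOWED_TYPES = {"process", "material or social thing", "information"}

concepts = {
    "Learner": "material or social thing",
    "Assessment": "process",
    "Learning Outcome Definition": "information",
}
links = [
    ("Learner", "takes part in", "Assessment"),
    ("Assessment", "refers to", "Learning Outcome Definition"),
]

def untyped_or_unknown(concepts):
    """Concepts whose type is missing or outside the agreed type set."""
    return [c for c, t in concepts.items() if t not in ALLOWED_TYPES]

def dangling_links(concepts, links):
    """Links that mention a concept not defined in the map."""
    return [l for l in links if l[0] not in concepts or l[2] not in concepts]

print(untyped_or_unknown(concepts), dangling_links(concepts, links))  # [] []
```

A check like this is of course no substitute for the negotiation of meaning, but it shows how even a lightweight, agreed type set gives a map something to conform to.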
The Umeå meeting of ISO/IEC JTC1 SC36 confirmed that the co-editors of Part 5 Educational are on the right track. It seems that the consensus to build the MLR framework on a semantic web / web architecture approach is more stable now, and that we have a thumbs up for continuing to explore how to describe the educational aspects of learning resources.
In this blog post I will point you to where to find the activities leading up to the next version of MLR Part 5, with a deadline of 18 December for the next Working Draft. I will also give some brief comments on the framework and principles for development, closing with some comments on the current model from WD3.
Work towards WD4 (to be turned into a First Committee Draft right after the Osaka meeting in March 2010)
We will have three online meetings, the first on Friday, October 16th. If you want to contribute, please do so by joining the FlashMeeting. A wiki is set up to support the work and provide background information.
In the first meeting we will concentrate on conceptual models, looking at what is happening in projects like ICOPER, European Learner Mobility project, etc.
Where do metadata live – the MLR approach
With the fiercest fights over MLR behind us, we are now pretty sure that MLR will support some sound principles for developing the new parts and application profiles that the communities need. First, we don't think that metadata are restricted to repositories of well-defined and conforming complex metadata records. We think metadata live on the web, in the cloud somewhere, and that they come in bits and pieces without some benevolent metadata schema kind enough to explain to us how all the elements are to be interpreted. In a future semantically enabled web of information and resources we will be able to make sense of the fragments of metadata we come across because they are identified in a way that allows us to search for more linked data.
In this approach the concept of interoperability is not so much on the record level; the emphasis is on the semantic level. We need to make sure that we are talking about the same thing, using the same concepts. And if we don't understand the detailed concept we have got, we should be able to track back to a higher-level concept that we recognise from the international standard at hand. This does not mean that we don't care about interoperability between repositories. But we think that the value of such interoperability lies within well-defined communities that agree upon developing their own application profiles. Interoperability on a global scale for learning resources is just a dream that will never come true. Between very different cultures we should be happy if we are able to reuse some of the concepts and definitions, to make the global conversation about learning, education and training a little easier to facilitate.
So, what are the sound principles of MLR? In my opinion it boils down to some very simple guidelines.
- Every resource (and data element specification) shall be identified by a URI.
- We should reuse data elements, especially the generic ones, using Domain and Range to make sure that elements "do their job" in the context in which they are used.
- All elements should be extensible to allow "graceful degradation", i.e., where refinements of an element are used, implementations should be able to identify and use their supertypes.
- The different parts of the standard need Application Profiles. This is where you restrict your model and define structures. The standard itself should be as simple as possible, capturing the elements needed to build applications for a varied group of users.
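The "graceful degradation" principle can be sketched in a few lines. This is a hypothetical illustration, not actual MLR elements: the URIs and the supertype chain are invented, and the point is only that an implementation knowing just the generic elements can walk up the chain until it finds one it can interpret.

```python
# Hypothetical sketch of "graceful degradation": every element is identified
# by a URI, and each refinement declares its supertype. The URIs below are
# made up for illustration; they are not taken from any MLR part.
SUPERTYPE = {
    "http://example.org/mlr/audienceAgeRange": "http://example.org/mlr/audience",
    "http://example.org/mlr/audience": "http://example.org/mlr/description",
}

def resolve(element_uri, known_elements):
    """Return the first element in the supertype chain the implementation knows."""
    uri = element_uri
    while uri is not None:
        if uri in known_elements:
            return uri
        uri = SUPERTYPE.get(uri)  # one step up the chain; None when it ends
    return None

# An implementation that only understands generic elements can still make
# (coarser) sense of the refined one:
known = {"http://example.org/mlr/audience", "http://example.org/mlr/description"}
print(resolve("http://example.org/mlr/audienceAgeRange", known))
```

In RDF terms this is what declaring refinements as sub-properties buys you: a consumer that ignores the refinement can still process the statement at the supertype level.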
Implications for development of the Educational part
When building the Educational Part of MLR we should continue our work on trying to come up with a high-level conceptual model of the educational aspects of the use of learning resources. First we should try to map out the high-level concepts, knowing that the finer granularity will be taken care of in the application profiles, given that we are able to create the necessary elements to connect to. Second, when we have a pretty stable conceptual map, we should put it to the test to see if it is possible to accommodate the different usage scenarios we have identified. (For instance, if we have only the two concepts of Context and Learning Outcome related to a learning resource, we will not be able to make a useful inference about the relation between one resource and two contexts and outcomes. To make that happen we would need some kind of usage or activity element, as Pete Johnston demonstrated in an earlier discussion in the MLR Part 5 online meeting series.)
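The inference problem in the parenthesis above can be made concrete with a toy example. All element and value names here are invented for illustration: with flat properties the pairing between a context and its outcome is lost, while a reified usage element preserves it.

```python
# Toy illustration (all names invented) of why a flat model cannot pair
# contexts with outcomes. Flat triples attach everything directly to the
# resource, so any context could go with any outcome.
flat = [
    ("res1", "context", "primary school"),
    ("res1", "context", "teacher training"),
    ("res1", "learningOutcome", "basic arithmetic"),
    ("res1", "learningOutcome", "didactics of arithmetic"),
]
contexts = [v for (_, p, v) in flat if p == "context"]
outcomes = [v for (_, p, v) in flat if p == "learningOutcome"]
ambiguous_pairings = len(contexts) * len(outcomes)  # every combination is possible

# A reified "usage" element groups one context with the outcome it supports,
# so the intended pairing can actually be inferred.
usages = [
    {"resource": "res1", "context": "primary school",
     "outcome": "basic arithmetic"},
    {"resource": "res1", "context": "teacher training",
     "outcome": "didactics of arithmetic"},
]
print(ambiguous_pairings, len(usages))  # 4 possible pairings vs 2 intended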
Tags: MLR · Mind the Gap
Simon Grant has written an interesting account of his modelling efforts after the last week’s meeting of CEN WS-LT in Lyon. He describes the discussions leading up to a revision of his ongoing modelling work of the learning, assessment and awarding space - what could be seen as the abstract model of the European Learner Mobility space.
What I find particularly interesting is the way Simon reflects on the process of modelling, working towards a deeper understanding of what is expressed in the modelling notations we use. Simon is using Cmap Tools for his models, a tool that I am glad I have helped to introduce to the standardisation community in CEN WS-LT and ISO SC36.
I am now encouraging colleagues to store their maps at the ICOPER Cmap server in the ICOPER Discourse Tool. If we succeed in hosting most of the conceptual maps the standardisation community is working on here, we will have a unique resource for negotiating concepts and re-using definitions. Simon has stored his map in the European Learner Mobility folder at the ICOPER Cmap server.
Tags: MLR · Mind the Gap
In their recent book chapter, Jarke, Klamma & Lyytinen (2009) explore a framework for classifying and describing different metamodelling approaches. Their approach is captured in a nice diamond model consisting of a triangle of ontologies, notations and processes, with another triangle on top driving the analysis based on the identified goals of the information systems development.
Four aspects of metamodel-based environments (Jarke et al. 2009, p. 52)
Every time I see a triangular model, the activity system model of Cultural-Historical Activity Theory comes to mind. So, are there any similarities between the two models? Well, there are. Both models are in a sense object oriented, directed towards an outcome, a goal. "The diamond shape of the model is intended to illustrate that the model's three aspects are neither exclusive nor orthogonal" (Jarke et al.). In CHAT this would be stated as the dialectical aspect of the model: you always study one aspect, e.g., the actors' actions, in the light of the other aspects, i.e., the mediated nature of the actions and the ways they are oriented towards an object and an outcome.
Activity system model (Engeström)
If you turn the models around you may even find that they overlap quite well. The Process aspect is pretty much congruent with the Subject entity of the AT model; the Notational aspect is clearly about the mediating artefacts (tools) we use; the Ontologies are about the way we capture the subject domain the ISD process is all about, i.e., the Object in the AT model.
So, why mix and match activity theory and metamodelling theory? It is just a hunch, driven by the fact that I find AT a very nice framework for describing human and social activities, especially the LET (Learning, Education and Training) aspects of these activities. And by the fact that I find AT (or rather the AT believers) to have a rather limited vocabulary when it comes to describing what's going on when, for example, technology meets learning activities.
From an AT perspective, what would be the contribution of this metamodelling framework to a better analysis and understanding of the LET domain?
- Treating the object as an ontology will give a better understanding of the nature of the object. At least it will lead to an exploration of the different aspects of the object, instead of just treating it as an endpoint for the activities.
- AT has a very good grasp of the role of artefacts. However, looking at notations as tools might give AT a more formal language, or constructs, to express relationships in the AT triangles.
- Furthermore, adding process models to the AT toolkit might enrich how we understand activities - or at least give a better way (or some tools) to describe these activities.
From a metamodelling perspective, what would be the benefits of having AT as a backdrop for the modelling exercises?
First, AT could be seen as a particular metamodel that is well suited to making sense of certain activities or domains. It is used in design and workplace development research; it explains technology-enhanced learning well, etc. As such, AT could inform metamodelling activities within these domains.
Second, at a more methodological level, I would say that AT could
- give a better understanding of the role tools, e.g., notations, play in the design of information systems;
- tease out the contradictions in activity systems, and thus capture the dynamics of the design process itself;
- shed light on the goal aspect of metamodelling, which according to Jarke et al. is the least-studied aspect of metamodelling.
The book Metamodeling for Method Engineering is available from MIT press.
Tags: Mind the Gap
The Nordlet Open Forum is a one-day conference in connection with the SC36 plenary meeting in Umeå (September 19-25, 2009). The Nordlet Open Forum will be arranged on September 18 and is open to everyone; you may register here to participate. The main target groups for the conference are researchers, teachers and policy makers. The Nordlet Open Forum is organized by Nordlet (The Nordic Baltic Community for Open Education).
More information, program and registration
Technorati Tags: icoper, nordlet, SC36
The political parties must take a stand
A blog relay for privacy – against the Data Retention Directive.
Privacy is a fundamental value in a democracy. It entails a right to be left alone by others, but also a right to control information about oneself, especially information experienced as personal. Under Article 8 of the ECHR, privacy is regarded as a human right.
With a possible Norwegian implementation of the Data Retention Directive (Directive 2006/24/EC), which obliges telecom and internet companies to store traffic data about citizens' electronic communication (e-mail, SMS, telephone, internet) for up to two years, Norwegians' privacy will be violated in the grossest way.
The Data Retention Directive was adopted by the EU on 15 March 2006, but the Storting has still not officially taken a position on whether the directive should be made Norwegian law. Through the EEA Agreement, Norway has a right of reservation. It has never been used before, but then we have never before faced a directive that represents as great a threat to the fundamental values of democracy as the Data Retention Directive does.
We demand that all political parties state now, before the election, whether or not they will make the Data Retention Directive Norwegian law. Not taking a position, as most of the political parties in the Storting (with the exception of Venstre and SV) have done for over three years, amounts to tacit acceptance.
The political parties must take a stand now – say no to the Data Retention Directive!
(Gisle Hannemyr inspired me to write this blog post)
Technorati Tags: datalagringdirektivet, #dld
Tags: Generelt nytt
If you have got Wookie and a Moodle installation, it takes only seconds to create a great game and learning experience out of a boring presentation on standards and high-level concepts. Well, at least if you are Scott, sitting next to Mark and Simon, listening to Jan and Tore projecting the ICOPER framework on the wall.
Scott put a screenshot of the ICOPER diagram on the Moodle server, added some Wookie magic, and voilà, there you had the opportunity to move a number of standard & specification labels around to find where they fit in the big picture. It is best done collaboratively in a small group where you can see where the labels are moving – and you may have a chat or voice channel to help you negotiate what the best fit is.
What a nice way to survive a standards meeting!
Try it yourself at the University of Bolton VLE
Technorati Tags: icoper