Learning analytics to identify exploratory dialogue within synchronous text chat
8 April 2011
Presentation by Rebecca Ferguson as part of the learning analytics symposium at CAL 2011.
Learning analytics are limited if they focus solely on quantitative data such as the number of times individuals contribute within seminars, without incorporating qualitative understanding of the context and meaning of these figures. This is important in the case of dialogue, which may be employed to share knowledge and jointly construct understandings of shared experience that support learning (Crook, 1994) but which also involves many superficial exchanges. Previous studies have shown that exploratory dialogue is important in learning environments because it involves sharing knowledge, challenging ideas, evaluating evidence and considering options (Ferguson, Whitelock, & Littleton, 2010; Mercer & Littleton, 2007). The study reported here investigates whether such dialogue is employed in synchronous online settings, and whether it is possible to automate the identification of such dialogue using learning analytics.
Data were collected from Elluminate® – a web conferencing tool that supports synchronous chat alongside video, slides and presentations. The focus of this study was on synchronous discussion related to a teaching and learning conference run online by The Open University (http://cloudworks.ac.uk/cloudscape/view/2012/). The Elluminate text chat in four conference sessions, each between 150 and 180 minutes in length (24,530 words in total), was investigated using sociocultural discourse analysis (Mercer, 2004). Contributions were first categorized as relating to academic content, to social elements or to tools (such as Elluminate). Key words and phrases indicative of exploratory dialogue (such as ‘does that mean’, ‘here is another’ and ‘misunderstanding’) were then identified in these exchanges, and an automated process was used to highlight these in each of the four conference sessions. In this way, clusters of exploratory dialogue were identified. The majority of these clusters were found in exchanges related to academic content, although some learning dialogue also took place around tool use and social connections. More than 80% of the contributions of some participants contained markers of exploratory dialogue.
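The study does not publish its implementation or full phrase list, but the automated highlighting step it describes can be sketched as simple case-insensitive phrase matching over chat messages. The sketch below is a minimal illustration, assuming only the three marker phrases quoted above; the function names and the `**...**` highlighting convention are hypothetical, not part of the study.

```python
import re

# Illustrative subset of the marker phrases named in the study;
# the full list used in the analysis is not given here.
EXPLORATORY_MARKERS = ["does that mean", "here is another", "misunderstanding"]

def find_markers(message, markers=EXPLORATORY_MARKERS):
    """Return the marker phrases that occur in a chat message (case-insensitive)."""
    text = message.lower()
    return [m for m in markers if m in text]

def highlight(message, markers=EXPLORATORY_MARKERS):
    """Wrap each occurrence of a marker phrase in **...** for manual review."""
    out = message
    for m in find_markers(message, markers):
        out = re.sub(re.escape(m),
                     lambda match: f"**{match.group(0)}**",
                     out, flags=re.IGNORECASE)
    return out

# Hypothetical chat messages, for illustration only.
for msg in ["Does that mean we should code each turn separately?",
            "Thanks everyone, great session!"]:
    hits = find_markers(msg)
    if hits:
        print(highlight(msg), "->", hits)
```

Messages with no markers simply produce an empty list, so only candidate exploratory contributions are surfaced for the manual review stage the study describes.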
If analysis of other datasets confirms the validity of these markers as learning analytics, they could be used to support the work of both learners and teachers. Learners could be offered opportunities to view their contributions with the markers highlighted, together with explanations of why these markers are associated with exploratory dialogue and learning. Teachers could use these markers to identify sessions or sections of sessions that provoke learning dialogue, and those that do not. Recommendation engines could also employ these markers to recommend Elluminate sessions that provoked participants to engage in exploratory dialogue.
Crook, C. (1994). Computers and the Collaborative Experience of Learning. London: Routledge.
Ferguson, R., Whitelock, D., & Littleton, K. (2010). Improvable objects and attached dialogue: new literacy practices employed by learners to build knowledge together in asynchronous settings. Digital Culture and Education, 2(1), 116-136.
Mercer, N. (2004). Sociocultural discourse analysis: analysing classroom talk as a social mode of thinking. Journal of Applied Linguistics, 1(2), 137-168.
Mercer, N., & Littleton, K. (2007). Dialogue and the Development of Children's Thinking. London and New York: Routledge.
Liveblogged by Gill Clough
Rebecca starts by talking about ways of using analytics to find and recommend sections of chat in large bodies of data on SocialLearn.
Mercer identified three types of talk used to construct knowledge:
- Disputational Talk
- Cumulative Talk
- Exploratory Talk
Rebecca is focusing on exploratory talk.
The computer can identify sections of text where people appear to be constructing knowledge together. It cannot tell whether they are actually constructing knowledge; it can only identify indicators.
For example, "but if", "have to response" "my view" indicate possible challenges.
She goes through examples of how the software has identified portions of text that could contain indicators of exploratory talk. She reviewed these portions and felt they were good candidates for analysis of exploratory talk.
When looking for markers, she found it best to break the chat down into five-minute sections; one-minute sections were too short. She checked the approach for reliability: she picked out sections herself that she thought would be useful, and sections she felt would not be, and found that the computer program picked out the same sections.
One way of using it is to recommend ways into resources.
Another way of using this software is to present the results to the learners themselves, the people engaged in the chat: "I've picked out 10 markers of you challenging here", "12 markers of you evaluating", "but no markers of you extending". This is not a tool to be hidden, but one to be shared.
More nuanced than generic analytics. More nuanced than the conference timetable. Highlights the importance of context.
- More reliability checking for this form of analysis
- Validity checking of this form of analysis
- Investigate the relationship between chat and video
- Automate it, because it is currently quite fiddly to do
Questioner 1: Sounds like a useful tool (this was questioner 2 on Denise's talk). You don't necessarily want to include the numbers when using this with students, as they might focus on the numbers, but it looks like a great idea for analysing assignments for critical discourse. You could pick up a thread and analyse which was stronger or weaker. With formative feedback, you could use it to suggest that a learner review an assignment that is perhaps not critical enough. As a lecturer, this could save you the time of having to look through everything thoroughly; having a piece of software to do it for you would be helpful.
Rebecca: With assignments you could probably do something even more clever. These analytics are for chat, which is very basic.
Questioner 2: Has used InterLoc. Suggests using the InterLoc categories of openers.
Rebecca: She has used InterLoc to some extent to guide the selection of words used here.
14:39 on 14 April 2011 (Edited 14:47 on 14 April 2011)