Research Evidence on the Use of Learning Analytics: Implications for Education Policy

The European Commission has published our report ‘Research Evidence on the Use of Learning Analytics: Implications for Education Policy’: https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/research-evidence-use-learning-analytics-implications-education-policy

From the abstract: Learning analytics is an emergent field of research that is growing fast. It takes advantage of the last decade of e-learning implementations in education and training, as well as of research and development work in areas such as educational data mining, web analytics and statistics. In recent years, increasing numbers of digital tools for the education and training sectors have included learning analytics to some extent, and these tools are now in the early stages of adoption. This report reviews early uptake in the field, presenting five case studies and an inventory of tools, policies and practices. It also provides an Action List for policymakers, practitioners, researchers and industry members to guide work in Europe.

How to cite: Ferguson, R., Brasher, A., Clow, D., Cooper, A., Hillaire, G., Mittelmeier, J., Rienties, B., Ullmann, T., Vuorikari, R. (2016). Research Evidence on the Use of Learning Analytics – Implications for Education Policy. R. Vuorikari, J. Castaño Muñoz (Eds.). Joint Research Centre Science for Policy Report; EUR 28294 EN; doi:10.2791/955210.

Lightning presentation at the LACE SoLAR Flare event

I gave a short presentation, “ReflectR – Analysing reflective thinking in texts”, at the LACE SoLAR Flare event on Friday 24 October 2014.

The event took place at the Open University under the auspices of the Society for Learning Analytics Research (SoLAR) and was organised by the LACE project (Learning Analytics Community Exchange).

In my presentation I talked about ReflectR, an application of reflection detection. Reflection detection analyses text in order to find evidence of reflection. In my PhD research I explored several methods to understand to what extent it is possible to automatically classify sentences as reflective or not. The aim is to find reflective sentences in a given text, for example in forum posts or essays.

How does it work? You need a large corpus of labelled sentences. This training corpus is then used to build predictive models, which can be used to classify unseen sentences. In my case I created a large corpus of reflective and non-reflective/descriptive sentences. Based on this corpus I was able to tune models which perform well on the testing data. These models can then be used to classify new text.
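
To illustrate the general shape of such a pipeline, here is a minimal sketch in Python using scikit-learn. It is not the actual model behind ReflectR: the two example sentences, the TF-IDF features and the logistic regression classifier are placeholders for a much larger corpus and a properly tuned setup.

    # Minimal sketch of a sentence-level reflection classifier.
    # Illustrative only: real training needs a large labelled corpus.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Placeholder corpus: label 1 = reflective, label 0 = descriptive.
    sentences = [
        "Looking back, I realised my approach to the task was flawed.",
        "The workshop started at nine and lasted two hours.",
    ]
    labels = [1, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(sentences, labels)

    # Classify an unseen sentence; predict_proba gives the confidence.
    new = ["I wonder whether I should have asked for feedback earlier."]
    confidence = model.predict_proba(new)[0][1]  # probability of class 1
    print(f"reflective with confidence {confidence:.2f}")

The confidence value ReflectR reports is in this spirit: a class probability produced by the underlying model.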

To make this idea more graspable, I built a little tool called ReflectR in one of my free weekends. ReflectR takes as input a sentence – any sentence written in English. It then processes this sentence, classifies it and returns a value, which is translated into a human-readable result. It will tell you whether the sentence is reflective and how confident it is in the classification. Try out ReflectR at http://qone.eu/reflectr.

Here is the one-page slide for this presentation, showing an example sentence which is automatically processed and classified as reflective. Dewey sends his greetings.

Lightning presentation ReflectR

I see ReflectR as an application of the base technology, reflection detection. It is probably not the best application of this base technology, but that is also not its aim. ReflectR was built so that other people can try it and get an idea of what reflection detection is about. I am hoping to find people who are interested in reflective thinking and want to collaborate on innovative applications based on reflection detection.

ReflectR – a tool to detect reflective thinking in texts

In my PhD I am researching methods to detect reflection automatically. Based on a large body of sentences, which were manually labelled as either reflective or descriptive, I tested several text classification algorithms. Each classifier generates a model, which can be used to classify unseen text.

Based on these models I built ReflectR. ReflectR is a tool that you can use to test the classification of sentences. It will analyse your input and tell you whether your sentence is reflective or descriptive/non-reflective. ReflectR will also tell you how confident it is in classifying your text. Have a go and try it yourself at http://qone.eu/reflectr. It is free to use. Once your text is classified, please leave feedback. Your feedback is most valuable to my research!

Screenshot of ReflectR

One way to start with ReflectR is to have a look at the examples at the bottom of the page. These will give you a feeling for reflective and descriptive sentences.

If you cannot think of an experience you have thought deeply about, start with a descriptive sentence. Write about what happened. This writing exercise might help you to kick-start the reflective thought process. Later you can start writing about yourself. Write from a first-person perspective, for example about your beliefs, your feelings, or why and how you changed your perspective on something.

ReflectR aims to be a demonstrator for the automated detection of reflection in texts. You could, for example, use the underlying technology to analyse forum posts in a learning environment. The classification results would give you an overview of the stages of a course at which people were engaged in reflective thinking.
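
As a rough illustration, and assuming a trained classifier like the one sketched above, the aggregation step could look like this; the classify function and the (week, sentence) input format are hypothetical:

    # Hypothetical sketch: share of reflective sentences per course week.
    from collections import defaultdict

    def reflective_share_by_week(posts, classify):
        """posts: iterable of (week, sentence); classify: sentence -> bool."""
        counts = defaultdict(lambda: [0, 0])  # week -> [reflective, total]
        for week, sentence in posts:
            counts[week][0] += int(classify(sentence))
            counts[week][1] += 1
        return {week: r / t for week, (r, t) in sorted(counts.items())}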

KMi runs MOOC Analytics Workshop for Future Learning Academic Network

Last week Simon Buckingham Shum ran the first MOOC Analytics workshop for the Future Learning Academic Network (FLAN), the research network around the OU’s FutureLearn MOOC platform. I also contributed to this workshop with several presentations. Participants from universities across the UK, plus remote participants from New Zealand and Australia, joined a very stimulating event, hosted at the state-of-the-art Digital Humanities Hub, European Research Institute, at the University of Birmingham.

MOOCs are an exciting platform for educational research, providing 24/7 large-scale data collection of authentic learning activity. Universities running MOOCs need to develop the capacity to make sense of this data, but all universities with online learning platforms will increasingly need to get to grips with it.

The Universities of Leeds and Reading described how they are analysing their MOOC data. FutureLearn partners, some of whose courses have yet to go live, were introduced by David Major (FutureLearn Learning Technologist) and OU researchers (Simon Buckingham Shum and myself) to the data that they will receive from their MOOCs, how they can rapidly process it for import into open source analytics tools, and ways of framing their research questions in terms of FutureLearn data and analytics tools. The FLAN leads, Profs. Mike Sharples and Eileen Scanlon, chaired the closing session, where partners discussed how FLAN could drive forward research and innovation.

The Future Learning Academic Network has a collaboration space where partners are working together on analytics of common interest.

Adapted from: http://kmi.open.ac.uk/news/article/18585

Co-citation analysis of the topic Social Network Analysis

Social Network Analysis as a research tool has a long tradition, and many articles have been published on it. For a recent collaboration on Social Network Analysis in the area of technology-enhanced learning with Rory Sie, Kamakshi Rajagopal, Karina Cela, Marlies Bitter-Rijpkema and Peter Sloep, I prepared a co-citation analysis, which gives an overview of the topic and especially shows the multidisciplinary character of the field.

A literature search for social network analysis was conducted using Thomson Reuters Web of Knowledge (February 2012). The topic search included the key terms “social network analysis” and “network analysis” in combination with “technology-enhanced learning”, “TEL”, “e-learning”, “social science”, “educational science”, “psychology”, “computer science”, and “information science”. 133 papers matched the search query and were used in a co-citation analysis. A co-citation relation exists if two documents are cited together in a document. In total 5693 references were extracted. The graph was pruned by filtering out all articles that had fewer than six citations in the 133-paper sample.
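
For readers who want to reproduce this kind of analysis, here is a minimal sketch in Python with networkx. It assumes the reference lists of the matched papers have already been parsed from the Web of Knowledge export; the function name and input format are my own, not part of any published pipeline.

    # Sketch: build a co-citation graph and prune rarely cited references.
    import itertools
    import networkx as nx

    def cocitation_graph(reference_lists, min_citations=6):
        """reference_lists: one list of cited references per source paper."""
        G = nx.Graph()
        counts = {}
        for refs in reference_lists:
            refs = sorted(set(refs))
            for ref in refs:
                counts[ref] = counts.get(ref, 0) + 1
            # Two references co-cited in the same paper get a (weighted) edge.
            for a, b in itertools.combinations(refs, 2):
                weight = G.get_edge_data(a, b, default={}).get("weight", 0)
                G.add_edge(a, b, weight=weight + 1)
        # Keep only references cited at least min_citations times.
        keep = {ref for ref, c in counts.items() if c >= min_citations}
        return G.subgraph(keep).copy()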

Co-citation analysis for Social Network Analysis
Co-citation analysis for Social Network Analysis

Inspecting the data reveals four broader categories where SNA is applied and researched. The categories are:

  • Scientometrics (the science of measuring and analysing science)
  • Network Theory (mathematical models, computational models)
  • Introduction Texts to SNA
  • Sociometrics/Sociology/Economics/Scientific practice

Our collaboration focuses especially on SNA for learning; however, the body of knowledge is broader, and everyone researching SNA for TEL can benefit from it.

References sorted by category:

Scientometrics: The science of measuring and analysing science

  • Ahlgren, Per, Bo Jarneving, and Ronald Rousseau. 2003. “Requirements for a Cocitation Similarity Measure, with Special Reference to Pearson’s Correlation Coefficient.” Journal of the American Society for Information Science and Technology 54 (6) (April 1): 550–560.
  • Barabási, A.L, H Jeong, Z Néda, E Ravasz, A Schubert, and T Vicsek. 2002. “Evolution of the Social Network of Scientific Collaborations.” Physica A: Statistical Mechanics and Its Applications 311 (3–4) (August 15): 590–614.
  • Hirsch, J. E. 2005. “An Index to Quantify an Individual’s Scientific Research Output.” Proceedings of the National Academy of Sciences of the United States of America 102 (46) (November 15): 16569–16572.
  • Liu, X., J. Bollen, M.L. Nelson, and H. Van de Sompel. 2005. “Co-authorship Networks in the Digital Library Research Community.” Information Processing & Management 41 (6): 1462–1480.
  • McCain, Katherine W. 1990. “Mapping Authors in Intellectual Space: A Technical Overview.” Journal of the American Society for Information Science 41 (6) (September 1): 433–443.
  • Newman, M. E. J. 2001. “The Structure of Scientific Collaboration Networks.” Proceedings of the National Academy of Sciences 98 (2) (January 16): 404–409.
  • Newman, M. E. J. 2004. “Coauthorship Networks and Patterns of Scientific Collaboration.” Proceedings of the National Academy of Sciences 101 (suppl_1) (January 23): 5200–5205.
  • Otte, Evelien, and Ronald Rousseau. 2002. “Social Network Analysis: a Powerful Strategy, Also for the Information Sciences.” Journal of Information Science 28 (6) (December 1): 441–453.
  • Reeves, Byron, and Christine L Borgman. 1983. “A Bibliometric Evaluation of Core Journals in Communication Research.” Human Communication Research 10 (1) (September 1): 119–136.
  • Rice, Ronald E, Christine L Borgman, and Byron Reeves. 1988. “Citation Networks of Communication Journals, 1977–1985 Cliques and Positions, Citations Made and Citations Received.” Human Communication Research 15 (2) (December 1): 256–283.
  • So, C.Y. 1988. “Citation Patterns of Core Communication Journals: An Assessment of the Developmental Status of Communication.” Human Communication Research.
  • de Solla Price, D. J. 1965. “Networks of Scientific Papers.” Science 149 (3683) (July 30): 510–515.
  • White, H.D., and K.W. McCain. 1998. “Visualizing a Discipline: An Author Co-citation Analysis of Information Science, 1972-1995.” Journal of the American Society for Information Science 49 (4): 327–355.
  • White, Howard D. 2003. “Pathfinder Networks and Author Cocitation Analysis: A Remapping of Paradigmatic Information Scientists.” Journal of the American Society for Information Science and Technology 54 (5) (March 1): 423–434.
  • White, Howard D, and Belver C Griffith. 1981. “Author Cocitation: A Literature Measure of Intellectual Structure.” Journal of the American Society for Information Science 32 (3) (May 1): 163–171.

Network Theory (mathematical models, computational models)

  • Albert, Réka, and Albert-László Barabási. 2002. “Statistical Mechanics of Complex Networks.” Reviews of Modern Physics 74 (1) (January 30): 47–97.
  • Barabási, Albert-László, and Réka Albert. 1999. “Emergence of Scaling in Random Networks.” Science 286 (5439) (October 15): 509–512.
  • Erdös, P, and A Rényi. 1959. “On Random Graphs, I.” Publicationes Mathematicae (Debrecen) 6: 290–297.
  • Freeman, L.C. 1979. “Centrality in Social Networks Conceptual Clarification.” Social Networks 1 (3): 215–239.
  • Girvan, M., and M. E. J Newman. 2002. “Community Structure in Social and Biological Networks.” Proceedings of the National Academy of Sciences 99 (12) (June 11): 7821–7826.
  • Freeman, Linton C. 1977. “A Set of Measures of Centrality Based on Betweenness.” Sociometry 40 (1) (March 1): 35–41.
  • Newman, M. E. J. 2003. “The Structure and Function of Complex Networks.” SIAM Review 45 (2) (June 1): 167–256.
  • Watts, Duncan J., and Steven H. Strogatz. 1998. “Collective Dynamics of ‘Small-world’ Networks.” Nature 393 (6684) (June 4): 440–442.

Introduction Texts to SNA

  • Hanneman, R.A., and M. Riddle. 2005. Introduction to Social Network Methods. University of California Riverside.
  • Scott, John. 2000. Social Network Analysis: a Handbook. SAGE.
  • Wasserman, Stanley, and Katherine Faust. 1994. Social Network Analysis: Methods and Applications. Cambridge University Press.
  • Nooy, Wouter de, Andrej Mrvar, and Vladimir Batagelj. 2011. Exploratory Social Network Analysis with Pajek. Cambridge University Press.
  • Borgatti, S.P., M.G. Everett, and L.C. Freeman. 2002. “Ucinet for Windows: Software for Social Network Analysis.” Harvard Analytic Technologies 2006.
  • Freeman, Linton C. 2004. The Development of Social Network Analysis. Booksurge.

Sociometrics/Sociology/Economics/Scientific practice

  • Granovetter, Mark S. 1973. “The Strength of Weak Ties.” American Journal of Sociology 78 (6) (May 1): 1360–1380.
  • Milgram, S. 1967. “The Small World Problem.” Psychology Today 2 (1): 60–67.
  • Burt, Ronald S. 1995. Structural Holes: The Social Structure of Competition. Harvard University Press.
  • Kuhn, Thomas S. 1996. The Structure of Scientific Revolutions. University of Chicago Press.
  • Price, Derek J. de Solla. 1963. Little Science, Big Science. New York: Columbia University Press.

The words of Technology Enhanced Learning – an SNA view

Nicolas Balacheff and his team had a dataset of all the terms from the TEL Open Archive (https://telearn.archives-ouvertes.fr/), a high-quality archive of papers in the area of TEL. One aim of this exercise was to find keywords for their TEL thesaurus: http://www.tel-thesaurus.net/wiki/index.php/TEL_Dictionary_entries
The dataset consists of a directed graph of terms that have been extracted from the paper corpus of the TEL Open Archive.
For example, “open-learning” is linked to “learners” because “learners” is one of the top words appearing in the close context of “open-learning” (close context: the 50 words before and the 50 words after the word). But “learners” is not linked back to “open-learning”, because other words appear far more often in the close context of “learners” than “open-learning” does.
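
To make the construction concrete, here is a rough reconstruction of how such a directed term graph could be built; the team's actual extraction pipeline may well differ, and the window size and top-N cut-off are assumptions on my part:

    # Sketch: link term A -> B when B is among the most frequent words
    # in the 50-word windows around occurrences of A.
    from collections import Counter

    def close_context_links(tokens, terms, window=50, top_n=10):
        links = {}
        for term in terms:
            context = Counter()
            for i, tok in enumerate(tokens):
                if tok == term:
                    context.update(tokens[max(0, i - window):i])
                    context.update(tokens[i + 1:i + 1 + window])
            # A term points to the top-N words of its close context.
            links[term] = [w for w, _ in context.most_common(top_n)
                           if w != term]
        return links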
I had a go at the dataset with the SNA tool Gephi and made some interesting findings.

In the first picture, I created a graph without community detection. I filtered out the “long tail” of words based on the degree of each node.

In a next step, I applied a community detection algorithm. A community has dense connections within itself and sparse connections to other communities. Each community has its own colour. The size of the nodes is still based on the degree of each node.

In a last step, I changed the ranking that determines node size from the plain degree of each node to its degree within its community. Instead of using the overall (in/out) degree for the size of the nodes, I used the weight of each node inside its own community. This is the difference between figures two and three, and it is also the reason for the different layout of the graph.
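
For the curious, a rough networkx equivalent of these two ways of sizing nodes might look as follows; greedy modularity maximisation stands in here for whatever algorithm Gephi uses internally (an assumption on my part), and the graph is treated as undirected for the community step:

    # Sketch: size nodes by overall degree vs. degree within their community.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def node_sizes(G):
        H = G.to_undirected()
        overall = dict(H.degree())                # figures one and two
        within = {}
        for community in greedy_modularity_communities(H):
            sub = H.subgraph(community)
            for node in community:
                within[node] = sub.degree(node)   # figure three
        return overall, within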

Just by looking at the last graph, I would say that the connected words make a lot of sense, and some topics become apparent.