Postgraduate Award in Technology Enhanced Learning ePortfolio

Case Study


The Teaching Grid is a support space for people who teach at the University. One of its remits is to train teachers (lecturers and postgraduate researchers, or PGRs) in the use of educational technology. SMART Board training is currently delivered by appointment with an Adviser in the Teaching Grid.

Project planning

Identifying the problem to solve

Uptake of SMART Board training has remained low despite expressed demand from the community. Surveys at other institutions have shown that while teachers lack confidence in using technology in teaching (Anderson, 2008) and recognise a skills deficit (Loughlin, 2014), the provision of standard training does not impact adoption (Bennett in Loughlin, 2014). In our case I think the delivery method can be burdensome: teachers must pre-book, travel to the Teaching Grid and overcome any pride in admitting they want help. Teachers have also shown an aversion to change, especially in regard to the use of technology (Anderson, 2008; Greener, 2010). Additionally, when sessions were booked, Advisers had to refresh their own knowledge to prepare for the appointment, as much time had often passed since the previous one. I think both of these areas of concern can be improved through the use of available technology.

The goal

In order to attempt a solution to this problem, this project aims to increase teacher awareness of effective use of SMART Boards for teaching and learning, and to increase Teaching Grid Advisers’ confidence in delivering SMART Board training. My measures of success will be an increase in the number of teachers receiving SMART Board training and a reduction in the time Teaching Grid Advisers spend preparing for training sessions. A greater number of teachers receiving training will mean greater awareness of the training content, which will include effective use for teaching and learning. A reduction in Advisers’ preparation time will signify greater confidence in their ability to train others on the subject.

Evaluating the app

It is important to plan my evaluation methods before proceeding with the project, as evaluation considered only after the fact is harder to measure and is often coloured by the narrowing viewpoint of the design process. I will therefore use a modified version of Kirkpatrick’s approach to learning evaluation (Clark, 2012; Praslova, 2010), as described in Activity 3 – Designing an evaluation.

Analysis of stakeholder needs

To fully understand students I need to identify both their felt needs and unfelt needs.

They have a felt need for timely content delivery. Christian Smith, during a PGA TEL seminar, insightfully said [paraphrased] “Students have expectations of quick, up to date and on demand. They don't realise technology is the best way to do this”. Students’ expectations are not met by technology - they are met through technology.

Students have (often unfelt) needs for effective learning to take place during learning activities. JISC (2009) have identified features of effective technology enhanced learning, including: allowing learners to choose when, where and how they learn; adding to rather than replacing existing best practice (e.g. face-to-face learning); facilitating peer interaction to develop evaluative, reflective and critical thinking skills; replicating real-world problems; including guidance from facilitators on appropriate learning strategies; and offering customised learning experiences.

Students feel the need not to be frustrated and this is a symptom of often unfelt needs regarding learning design. From Laurillard’s (1997) template for design of teaching I can reduce learner frustration by defining aims and objectives; defining terms used; structuring arguments; providing evidence for suppositions; illustrating examples; demonstrating processes; providing alternative descriptions of concepts known to often be misunderstood; and providing experiences that confront and address those misconceptions.


Designing the activity

At the outset I must choose an appropriate learning theory to ‘judge whether the learning and teaching processes adopted will really achieve the intended learning outcomes’ (Mayes & De Freitas, 2007, p14). I have decided to use the Instructional Systems Design of Robert Gagné (in Mayes & De Freitas, 2007), in which the subject domain is recursively decomposed into a hierarchy of units, each composed of a small number of simpler units. The learner is first taught the simplest units of knowledge or skill, then units composed of those already learnt. As SMART Board use decomposes readily into component skills that build upon each other, this is a natural approach.

To ensure that I have considered all aspects of the learning activity I will use the LeMKE framework (Boettcher, 2007), which splits the design considerations into the learner, the mentor, the knowledge and the environment. I have also considered Laurillard’s (1997) template for designing teaching, which suggests outlining the knowledge to be taught; highlighting student misconceptions; redescribing the misconceived ideas; having students explore the misconception; and then designing the next learning activity to further understanding. Laurillard’s approach is better suited to teaching concepts, whereas I will mainly be teaching skills. That said, I do want to make teachers aware of how to link the SMART Board to their pedagogy and not solely teach skills in a vacuum.


The activity I have designed consists of an iPad app for use either by learners in their own context or in the Teaching Grid with an Adviser training them. The app consists of a series of skill explanations, including video examples and text, that can be worked through in a non-linear fashion. Below I have outlined the design decisions based on the LeMKE model.


The learner

The learners will be PGRs who teach and academic teaching staff. They will be from diverse departments with a spectrum of technological skill. They may need to learn a specific SMART Board skill for immediate use in teaching or simply be interested in finding out what SMART Boards can do for them. Ownership of personal technologies is pervasive, flexible access to resources is considered essential and it is important to be able to extend communication beyond the Teaching Grid environment (JISC, 2009).

Therefore the activity will include optional support (both real-time and asynchronous), offer the choice of learning a specific skill or exploring freely, and be accessible at a place and time of the learner’s choosing.


The mentor

The activity will be designed so that it can be completed alone or with a Teaching Grid Adviser. The learner will also be able to contact an Adviser through an embedded e-mail form in the app, or find their phone number. It is important to blend the learning in this way, as the experience of others has shown that online media used alone fall short (Macdonald, 2008). One of the reasons for this is that simply delivering content ‘becomes the end of the educational transaction rather than the beginning’ (Knox, 2012, p32); teaching and learning is ‘inescapably and essentially a dialogue’ (Laurillard, 1997, p97).
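As an illustration, the embedded contact form could be backed by the system mail composer. Below is a minimal sketch in Swift (a modern equivalent rather than the app’s actual code; the view controller name and adviser address are placeholders):

```swift
import UIKit
import MessageUI

// Minimal sketch: presenting an in-app e-mail form for contacting an Adviser.
class SkillViewController: UIViewController, MFMailComposeViewControllerDelegate {

    @objc func contactAdviserTapped() {
        // Fall back gracefully if no mail account is configured on the device.
        guard MFMailComposeViewController.canSendMail() else { return }

        let composer = MFMailComposeViewController()
        composer.mailComposeDelegate = self
        composer.setToRecipients(["adviser@example.ac.uk"]) // placeholder address
        composer.setSubject("SMART Board training query")
        present(composer, animated: true)
    }

    // Dismiss the form once the learner has sent (or cancelled) the message.
    func mailComposeController(_ controller: MFMailComposeViewController,
                               didFinishWith result: MFMailComposeResult,
                               error: Error?) {
        controller.dismiss(animated: true)
    }
}
```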


The knowledge

Pulling together my own knowledge of what can be done on SMART Boards and the inbuilt training material, I identified the distinct skills that make up SMART Board use in teaching. I then analysed each of these to determine which were prerequisite to learning others. This resulted in the tree structure below, where the root is the most basic skill and each node relies on the skill it branches from. Students need not learn all of these skills but can ‘direct and customize their learning according to their own respective needs and priorities’ (Boettcher, 2007).

[Figure: hierarchical diagram of connected boxes showing the SMART Board skill tree]
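To make the structure concrete, the hierarchy might be modelled along these lines (a sketch in Swift; the skill names are illustrative and the full set is listed in Appendix A):

```swift
// Each node names a skill and lists the skills that build directly upon it.
struct SkillNode {
    let title: String
    let children: [SkillNode]
}

// The root is the most basic skill; every child assumes its parent is known.
let skillTree = SkillNode(title: "Orienting the board", children: [
    SkillNode(title: "Writing with digital ink", children: [
        SkillNode(title: "Saving and exporting notes", children: []),
        SkillNode(title: "Using ink over applications", children: [])
    ]),
    SkillNode(title: "Navigating with touch", children: [])
])

// Walks the tree so that every skill appears after its prerequisite.
func prerequisiteOrder(_ node: SkillNode) -> [String] {
    [node.title] + node.children.flatMap(prerequisiteOrder)
}
```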

It is important not only to provide the learner with the skills but also to provide the context and motivation for learning them. This can be furthered by highlighting each skill’s relation to others, including what the learner needs to know already (Laurillard, 1997).


The environment

The learner will either be in the Teaching Grid or in their own context (office, departmental space or classroom), and will either be using a Teaching Grid provided iPad or their own. They will have internet access available and, ideally, a SMART Board to hand.

Selection of appropriate technology

I have chosen to implement this activity using an electronic resource because it can be used even when a mentor is not available; it can be easily updated; it can be interactive; it can provide greater differentiation; it can provide multimedia stimulus; and it can be easily copied, transported and organised (adapted from Macdonald, 2008).

I have chosen to implement this activity as an iPad app. My reasons for choosing iPads are that they:

  • are available for use in the Teaching Grid;
  • are owned (personally or work provided) by a significant number of target learners, allowing the flexibility to learn in their own location and at their chosen time;
  • are mobile enough to be held while using a SMART Board;
  • can easily be shared between the mentor (if present) and learner due to their mobility, screen size and multitouch interface;
  • allow easy access to the internet to supplement the provided materials;
  • allow easy access to learners’ social networks through text, voice and video so they can collaborate, seek further help and share their successes (an important part of maintaining learner motivation and engagement);
  • can present multimedia content such as videos, audio and animations.

I have chosen to use a native app instead of a web resource because, in future versions, the touch interface could be used to mimic a SMART Board, aiding learning when a SMART Board is not present. Kuhlmann (2012) claims screens only allow you to “click, hover, and drag” - which is true of webpages on iPads - but native apps also allow gestures such as swipes, pinches, pans and multiple touch points.
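To illustrate the difference, here is a short Swift sketch of the kind of multitouch handling a native app makes straightforward (the view and its behaviour are hypothetical, showing pinch and pan recognition rather than a real SMART Board mimic):

```swift
import UIKit

// Hypothetical view demonstrating native pinch and pan gesture handling.
class BoardMimicView: UIView {
    override init(frame: CGRect) {
        super.init(frame: frame)
        addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        // Scale the view as the learner pinches, as they would zoom content on a board.
        transform = transform.scaledBy(x: gesture.scale, y: gesture.scale)
        gesture.scale = 1
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        // Follow a drag across the screen, analogous to moving an object on the board.
        let translation = gesture.translation(in: superview)
        center = CGPoint(x: center.x + translation.x, y: center.y + translation.y)
        gesture.setTranslation(.zero, in: superview)
    }
}
```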


User Interface

Screen A

This is the Explore mode where learners can browse freely between the skill nodes.

[Screenshot: Screen A on an iPad, showing a video preview and bullet points]

I decided on a minimalist approach - following the iOS 7 design methodology of making the content the main element on screen. Tapping ‘Prerequisites’ or ‘What next?’ presents a popover list of topics to navigate to. There is also a button to contact a mentor.
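A minimal sketch of that popover navigation in Swift (the controller name and topics are illustrative):

```swift
import UIKit

// Illustrative list of topics shown when 'Prerequisites' or 'What next?' is tapped.
class TopicListViewController: UITableViewController {
    let topics: [String]

    init(topics: [String]) {
        self.topics = topics
        super.init(style: .plain)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        topics.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = UITableViewCell(style: .default, reuseIdentifier: nil)
        cell.textLabel?.text = topics[indexPath.row]
        return cell
    }
}

// Present the list as a popover anchored to the button that was tapped.
func showTopics(_ topics: [String], from button: UIButton, in host: UIViewController) {
    let list = TopicListViewController(topics: topics)
    list.modalPresentationStyle = .popover
    list.popoverPresentationController?.sourceView = button
    list.popoverPresentationController?.sourceRect = button.bounds
    host.present(list, animated: true)
}
```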

Screen B

In the end I limited the first version to Screen A, but I had planned Screen C (see below) and so needed a menu screen.

[Sketch: hand-drawn wireframe of the menu screen]

This screen would let you switch between Screen A and Screen C, and also contact a mentor.

Screen C

I planned but did not implement Screen C, which lists all of the skills and allows the learner to mark which skills they already know and which they would like to set as goals.

[Sketch: hand-drawn list interface for Screen C]

The app can then hide the skills identified as known and show only the missing skills that lead directly to the identified goals, as sketched below.
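A sketch of this filtering logic, reusing the prerequisite (parent) relationship from the skills tree (skill names illustrative):

```swift
// Maps each skill to the skill it branches from; the root has no entry.
let prerequisiteOf: [String: String] = [
    "Writing with digital ink": "Orienting the board",
    "Saving and exporting notes": "Writing with digital ink",
    "Using ink over applications": "Writing with digital ink"
]

// Returns the skills to display: everything on the path to a goal
// that the learner has not already marked as known.
func skillsToShow(goals: Set<String>, known: Set<String>) -> Set<String> {
    var toShow = Set<String>()
    for goal in goals {
        var current: String? = goal
        while let skill = current {                 // walk up towards the root
            if !known.contains(skill) { toShow.insert(skill) }
            current = prerequisiteOf[skill]
        }
    }
    return toShow
}

// A learner who knows the basics but wants to export their notes:
// skillsToShow(goals: ["Saving and exporting notes"], known: ["Orienting the board"])
// => {"Writing with digital ink", "Saving and exporting notes"}
```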

Learning Objects

A Learning Object is a ‘digital piece of learning material that addresses a clearly identifiable topic or learning outcome and has the potential to be reused in different contexts’ (Weller et al. in Littlejohn and Pegler, 2007, p182). Their use can save time and money, as they can be repurposed, recomposed or swapped out when edits are needed. They can also aid differentiation, adapt to formative assessment and help learners craft their own learning experience.

I had already done most of this work by identifying discrete skills for each node, but I needed to further decontextualise them to make true Learning Objects. This meant making no mention of where the video or text appears in relation to the app or other skills; storing the videos and text as discrete resources; and programming the app to handle changes in content and skill structure. This last point has the added benefit of making the app easily adaptable to teaching other technologies, simply by replacing the Learning Objects used.
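As an example of what this separation might look like, the Learning Objects could be described by a simple JSON manifest that the app parses at launch; the field names below are my own illustration rather than the app’s actual storage format:

```swift
import Foundation

// Each Learning Object is a discrete resource: text, a video file name and
// the IDs of prerequisite objects, with no reference to where it appears in the app.
struct LearningObject: Codable {
    let id: String
    let title: String
    let text: String
    let videoFile: String
    let prerequisiteIDs: [String]
}

let manifestJSON = """
[{"id": "ink", "title": "Writing with digital ink",
  "text": "Pick up a pen from the tray and write on the board.",
  "videoFile": "ink.mp4", "prerequisiteIDs": ["orient"]}]
""".data(using: .utf8)!

do {
    // Swapping the manifest (and media files) retargets the app to another technology.
    let objects = try JSONDecoder().decode([LearningObject].self, from: manifestJSON)
    print(objects.map { $0.title })   // ["Writing with digital ink"]
} catch {
    print("Manifest could not be parsed: \(error)")
}
```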

For a list of Learning Objects (text and video) see Appendix A.


Evaluation

Actual motivation

I gave testers a questionnaire to complete after using the app (Appendix C) to analyse their motivation for the activity.

Testers struggled with the user interface, with a divide between those who had used iPad apps before and those who had not (those with previous experience faring better). Learners using this app are more likely to be in the latter camp, so the interface must be adjusted to accommodate them.

Testers generally felt the app was beneficial to them, but this was not universal. Verbal feedback suggests the results were skewed by testers who already knew SMART Boards well and so didn’t feel they learnt much.

A large proportion of testers felt that, regardless of their own gain, their students would benefit from them using the app. Those reporting little personal gain explained that the app had nevertheless raised their awareness of how SMART Boards could be used effectively for teaching.

Testers said there were things they knew could be done on a SMART Board that were missing from the app - they wanted the app to cover all features. Strobing effects in the videos showing the SMART Board screen were a distraction, so higher quality versions should be made. Testers also reported that a more goal-oriented design would improve motivation - I think implementing my planned Screen C would help here.

Actual learning

I tested participants immediately before and after using the app to measure their short-term learning. I did this by observing the testers attempting a series of progressively challenging tasks and rating their performance (Appendix D).

In my analysis, the results were skewed by testers who already knew SMART Boards well. Disregarding testers who scored 100% on the pre-test (and thus could not demonstrate any learning, as my evaluation had failed to account for them), there was measurable learning across all tasks. Gains were higher on the more complex tasks than the simpler ones, especially in the area of using SMART Boards for formative assessment. This is a positive sign that the learning was not superficial and would impact testers’ students.

Breaking down the results by task, and disregarding testers who had ‘fully completed’ that particular task in the pre-test, highlighted that even testers who did not demonstrate large gains overall made dramatic gains in the areas they were lacking. With the exception of one tester on one task, every tester in this situation made progress - with a skew towards the simpler tasks.

This suggests that the learners the app is aimed at will make significant learning gains from using it. How this compares to one-to-one tuition has not been tested, but the app opens up the possibility of learning in the learner’s own space, at their own time and pace.

Actual performance

It is too early to say if actual performance in the classroom has improved. I plan to survey testers in a term’s time to see if they have embedded their learning in their teaching.

Actual result

Again, it is too early to say if actual results have been achieved. I plan to survey testers in a term’s time to see if their students’ experience has improved.

Choice of technology

Using Walker’s Evaluation Rubric for iPod/iPad Apps (2011), the app scores as follows:

  • Curriculum Connection = 4/4 (‘Skill(s) reinforced are strongly connected to the targeted skill or concept’)
  • Authenticity = N/A (The nature of the app means it could be used in authentic or non-authentic scenarios)
  • Feedback = N/A (It is not possible for the app to provide feedback on what the learner is doing on a SMART Board)
  • Differentiation = 1/4 (‘App offers no flexibility to adjust settings to meet student needs (settings cannot be altered)’)
  • User Friendliness = 3/4 (‘Students need to have the teacher review how to use the app’)
  • Student Motivation = N/A (The app is not available for general use yet and this aspect is more aimed at school use evaluation)
  • Reporting = N/A (The app is not designed to summatively assess learners)

This gives an overall score of 8/12 (Curriculum Connection 4 + Differentiation 1 + User Friendliness 3 = 8, out of three scored categories at 4 points each; approximately 67%). I could increase this score by adding differentiation through a self-evaluation screen (see Screen C). The learner would be presented with a list of skills and could mark them as either already learnt or goals to achieve; the app would then present only the skills that are not yet known but are required to learn the goal skills (similar to the custom quizzes described by Ryan, Scott, Freeman and Patel (2000), in which the next question presented to learners is chosen based on their history of correct and incorrect answers). It is difficult to do anything but self-assess skill competency (although ideally the app will be used with a mentor), as ‘it is not possible to demonstrate the thought processes that lie behind an answer, and there is no opportunity for partial marks’ (Jacobson & Kremer, in BECTA, 2007, p23). I could also improve the score by improving user friendliness: displaying the skills tree graphically in the navigation. Many testers fed back that it was not clear the skills were structured in this way, and that it would be useful to have an overview of the skills included and be able to navigate to them quickly.

Dissemination strategy

I plan to present the work at a Technology Enhanced Learning Forum at the University; send a copy of the case study to testers; publish the case study and resources on a webpage for University staff; and put the app into use by Teaching Grid Advisers. After refining the app design I may publish the source code publicly so anyone can add their own content.


References

  1. Anderson, C. (2008) ‘Barriers and Enabling Factors in Online Teaching’, The International Journal of Learning, 14(12), pp241-246
  2. BECTA (2007) ‘A Review of the Research Literature on the Use of Managed Learning Environments and Virtual Learning Environments in Education, and a Consideration of the Implications for Schools in the United Kingdom’ (Online article). Available at http:// Accessed 24 October 2010
  3. Boettcher, J. (2007) ‘Ten Core Principles for Designing Effective Learning Environments: Insights from Brain Research and Pedagogical Theory’ (Online article). Available at: Accessed 9 October 2012
  4. Clark, D. (2012) ‘Kirkpatrick's Four Level Evaluation Model’ (Online article). Available at: Accessed 15 March 2013
  5. Greener, S. (2010) ‘Staff who say no to Technology Enhanced Learning’, proceedings of the International Conference on e-Learning, pp134-139
  6. JISC (2009) ‘Effective Practice in a Digital Age: A guide to technology-enhanced learning and teaching’ (Online article). Available at: Accessed 24 September 2012
  7. Knox, J. (2012) 'Open to question' in Times Higher Education, 1st November 2012
  8. Kuhlmann, T. (2012) ‘Here Are the 3 Building Blocks for Interactive E-Learning’ (Online article). Available at: Accessed 31 December 2012
  9. Laurillard, D. (1997) Rethinking University Teaching: A framework for the effective use of educational technology, New York, Routledge
  10. Littlejohn, A. and Pegler, C. (2007) Preparing for Blended e-Learning, Oxon, Routledge
  11. Loughlin, C. (2014) ‘Invisible barriers to the adoption of TEL (556)’ (Online article). Available at Accessed 15 September 2014
  12. Macdonald, J. (2008) Blended Learning and Online Tutoring: Planning Learner Support and Activity Design, 2nd Edition, Aldershot, Gower Publishing Limited
  13. Mayes, T. & De Freitas, S. (2007) Learning and e-learning: the role of theory, in Rethinking Pedagogy for a Digital Age: Designing and Delivering E-learning, Beetham, H. & Sharpe, R. (Eds.), Oxon, Routledge
  14. Praslova, L. (2010) 'Adaptation of Kirkpatrick’s four level model of training criteria to assessment of learning outcomes and program evaluation in Higher Education' in Educational Assessment, Evaluation & Accountability, 22, 3, pp. 215-225, Education Research Complete, EBSCOhost, viewed 14 March 2013
  15. Ryan, S., Scott, B., Freeman, H. & Patel, D. (2000) The Virtual University: The Internet and Resource-Based Learning, London, Kogan Page Limited
  16. Walker, H. (2011) ‘Evaluation Rubric for iPod/iPad Apps’ (Online article), Schrock, K. (Ed.). Available at: Accessed 29 November 2012


Appendices

  1. Appendix A: Learning Objects
  2. Appendix B: Code
  3. Appendix C: Post test questionnaire
  4. Appendix D: Assessment of learning