June, 2000
Issue 5
I N S I G H T S    I N T O    U S I N G    E D U C A T I O N A L   T E C H N O L O G Y



A new educational paradigm has emerged – one in which learners develop the skills to think critically, work collaboratively and use technology to solve real-world problems. Newer, evolving and complex learning systems and processes require evaluation strategies that transcend traditional quantitative measurements and deal with a multitude of issues involving the wider impact of programs and their associated technologies.

If you are considering undertaking an evaluation of educational technology, this article can help you. It examines a number of critical factors to consider when designing and implementing effective research and evaluation strategies for networked online learning and other educational technologies. The RUFDATA (Saunders, U of Lancaster) and ACTIONS (Bates, U of BC) rubrics will be used as a basis for understanding the issues and situational procedures involved in evaluating educational technology.

Evaluation is defined as the collection, analysis and interpretation of information (Thorpe, 1988). Depending on whom you talk to, however, evaluation can mean many different things, reflecting very different realities of practice. Considering the amount of online development and activity that exists these days, and the large sums of money that have been invested in applying technology in education, it is no wonder that everyone is looking for measured results of one kind or another. Policy makers are primarily concerned with cost benefits and norm-referenced or gross outcome test scores. Educators and employers are more concerned with verifying that individuals possess viable skills, knowledge and experience. People need to understand the goals for using technology in the first place before any attempt at evaluation is made. Arriving at some consensus and awareness of purpose among all parties involved is an ideal place to start thinking critically about planning an effective evaluation.

Evaluation may be considered a process, which would imply that a series of rigid rules is possible. When you consider the complexity of the social, educational, technological and political interrelationships involved, however, the need for evaluation frameworks that are flexible and multidimensional becomes apparent. These frameworks must help to create knowledge that is both dependable and relevant to stakeholders and decision makers, yet at the same time be comprehensible enough for almost anyone to customize and implement.


There may be some lingering cynicism, but as we begin this millennium the general acceptance of technology as an integral and key component of the new learning paradigm is evident. We may discuss a variety of factors associated with newer teaching and learning methodologies – technological, individual and organizational – but in the end what is most often required is a way to make decisions regarding the impact of programs on metacognitive skills. We are in essence talking about the collection and discussion of evidence relating to the quality, value and impact of the provision of learning.

Evaluation is emerging as a discipline unto itself rather than a 'tool' discipline that serves other ones (Scriven, 1997). There seems to be quite a 'perception paralysing paradigm' around what evaluation truly is. A common misconception is that evaluation is synonymous with assessment. Assessment can provide relevant data for evaluation, as (learning) outcomes are being measured, but assessment in itself cannot always answer all of the critical questions that are posed.
Even though educators carry out many types of informal evaluation on a day-to-day basis, a more systematic approach is required beyond this kind of routine monitoring. Further, those directly involved in or affected by a program or the results of an evaluation may have difficulty remaining objective, and may skew the results.

Past research methods in education focused mainly on standardized test scores of one type or another. Moving from monistic and pluralistic conceptions to more multidimensional ones has left us with much more permissive practices, where qualitative methods are not only acceptable but now common practice. Evaluative claims in the past lacked propositional content, and today there is a need to develop objective frameworks that are unbiased and semi-structured, yet still incorporate proven theories about the different types of evaluation. It is also important to note that technology may be only one of the variables that cause effects or changes, and may be interrelated within a complex variety of dimensions.
A uniform evaluation strategy cannot capture the many complexities of any particular program or project, and therefore a multiple evaluation design is necessary. If the scope is broad, then a wide variety of experts and stakeholders is required as well. Looking to communities of practice can provide frameworks that help produce these more complex types of evaluation strategies.

Basically, program evaluation involves the organized collection of information on a specific or broad range of topics, subjects or objects so that a variety of potential judgements and/or uses can occur. It typically has components – data collection, analysis and interpretation (Thorpe, 1988), or needs analysis, methodology, data analysis and interpretation, and dissemination (Ross & Morrison, 1995). These components are not necessarily distinct phases, as they usually overlap. Nor are they chronological, as they may be revisited and refined at any time.

There are several key dichotomies in evaluation, and considering them not only allows us to consider what evaluation is, but also how a multiple approach can be developed using key components.

QUANTITATIVE - Collecting hard data and statistics – easily tabulated
QUALITATIVE - Gaining insights and opinions – subjective, flexible and dynamic

INTRINSIC - Examining an artefact itself – its design, functionality and/or performance
PAY-OFF - Examining the effects a program has on its users and their outcomes

DIAGNOSTIC - Looking for problems or opportunities for improvements
JUSTIFICATORY - Justifying a project – expenditures, time investments, etc.

FORMATIVE - Done during the development or delivery of the course
SUMMATIVE - Takes place after the project or course is complete

INTERNAL - Easy, less expensive, and produces more active involvement
EXTERNAL - For objective, unbiased or expert accountability

The above should not suggest that things are simply one way or the other. For example, more structured quantitative data may be required in order to produce a method of obtaining valid qualitative data and analysis, and the formative data collected during a course may be used to account for and support summative results at the course's completion. Clearly, a more liberated approach is needed for evaluating educational technology – one free from simple outcome measurements and rigid dichotomies such as qualitative versus quantitative, yet focused enough to have some semblance of methodology rooted in obtaining purposeful meaning based on realistic goals. Multifaceted approaches to evaluation can easily include both case studies and theoretical modelling. A multiple design strategy helps to determine the process of instruction and supports judgements about the value of a program based on the integration of multiple data sources, which may include a judicious mix of the above dichotomies.
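As a purely illustrative sketch (the foci and data sources below are hypothetical, not drawn from any particular study), a multiple design strategy might pair each quantitative indicator with a qualitative counterpart, so that neither side of a dichotomy stands alone:

```python
# Hypothetical multiple evaluation design: each focus draws on both
# quantitative and qualitative evidence so judgements rest on more
# than one data source.
design = {
    "learner achievement": {
        "quantitative": "pre/post test score change",
        "qualitative": "learner interviews on perceived progress",
    },
    "course usability": {
        "quantitative": "task completion times from system logs",
        "qualitative": "open-ended comments on the interface",
    },
}

def single_source_foci(d):
    """Flag any focus that relies on only one kind of evidence."""
    return [focus for focus, sources in d.items() if len(sources) < 2]

# An empty list means every focus is triangulated across sources.
print(single_source_foci(design))
```

A check like this simply makes the "judicious mix" explicit at the planning stage, before any data are collected.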


There is certainly no shortage of information in our world today, but there is, perhaps, a lack of wisdom on how to use it. Utilization-focused evaluations are designed and judged by their utility and actual use (Patton, 1996). Rather than being produced for spurious or trendy reasons and perhaps ending up collecting dust on a shelf, a utilization-focused evaluation is designed with a distinct purpose in mind; in other words, it will be used to accomplish something specific. Of course, as with all types of evaluation, unintended effects are possible. If an evaluation is designed to be flexible, it can also address many issues that were previously unconsidered. However, focusing on a narrow selection of possible legitimate issues can keep an evaluation from running off the tracks into too many ill-defined and irrelevant areas.

Many activities that are not usually associated with evaluation can certainly become part of one if so planned. For example, these could include taking attendance or observing lesson delivery and class participation. To become effective parts of an evaluation plan, these types of activities must be scheduled, and the subsequent data recorded appropriately.

In these days of authenticated practice, quality-improvement concerns from the business world are being shared in the world of education. Increased competition, the need to introduce new products rapidly, and pressure to lower costs are easily identified in both domains. Considering the business community of practice, we discover that some of the more trendy and popular questions regarding teaching and learning with technology today do not go much further than asking:

· Does the program (or technology) work?
· Does it improve learner achievement?
· Does it help make our institution competitive?
· Is it worth the money it costs?
· Does it generate income?
· How fast or easily can we update it?
· Are there any additional benefits we can capitalize on?
These recognizable concerns are all relevant and common, but three logical and very essential questions to ask first would be: 
1. What should be evaluated?
2. Why should it be evaluated?
3. How should it be evaluated?

Seeking answers to these questions leads to choices about which research models to use in the evaluation, and what kinds of questions to ask of whom. These questions lead to issues about how to conduct an evaluation and how to analyze the data recorded.
Finally, there are issues about how best to disseminate the results of evaluations to the people who are capable of making a difference. For an operational plan that addresses these areas of concern, RUFDATA (Reasons, Uses, Foci, Data, Audience, Timing, Agency) is a framework that conceptualizes evaluation as a series of knowledge-based practices that help organize evaluation activity. The acronym comes from the work of Murray Saunders and the Centre for the Study of Education and Training at Lancaster University in the UK. Although generic in nature and designed to be used across a number of domains, it embodies many key factors that can draw out the types of issues that need to be addressed when evaluating educational technology. To be empirically effective, this type of framework should be used within a particular community of practice, or at least embody the congealed procedures derived from the consolidated practices of a group of evaluators. The extent to which individuals or groups are part of a specific community of practice affects the capacity of the evaluation, and conformity of purpose is a critical factor for the proponents (Saunders, 2000). Everyone involved may understand things differently, but their perspectives should all be valid and relevant to the cause.
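Purely as an illustration (this structure is my own sketch, not part of Saunders' published framework, and the sample answers are hypothetical), the RUFDATA dimensions can be treated as a planning record in which every dimension must be answered before fieldwork begins:

```python
from dataclasses import dataclass, fields

@dataclass
class RufdataPlan:
    """One answer per RUFDATA dimension; hypothetical field layout."""
    reasons: str   # Why is the evaluation being done?
    uses: str      # How will the findings be used?
    foci: str      # Which aspects of the program are examined?
    data: str      # What evidence will be collected?
    audience: str  # Who receives the findings?
    timing: str    # When, and how often, is data gathered?
    agency: str    # Who conducts the evaluation?

def unanswered(plan: RufdataPlan) -> list[str]:
    """Return the dimensions still lacking an answer."""
    return [f.name for f in fields(plan) if not getattr(plan, f.name).strip()]

plan = RufdataPlan(
    reasons="Judge the impact of online delivery on course completion",
    uses="Inform next year's program revisions",
    foci="Learner support and interactivity",
    data="Enrolment statistics plus end-of-term interviews",
    audience="Program administrators and course tutors",
    timing="Formative checkpoints each term; summative at year end",
    agency="",  # not yet decided
)
print(unanswered(plan))  # the reflective questions still to be resolved
```

The value of the exercise is not the code but the forcing function: no dimension of the plan can silently be left blank.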

Emphasis on the procedural or practical rather than the theoretical provides a more reflective, and consequently more effective, tool for novices. Data from both discursive and practical consciousness have an important role to play in formulating a plan. According to Saunders, 'a framework should provide a generic context for action in which some recognizable shaping characteristics are evident but within the shape a wide range of actions is possible'. RUFDATA involves a process of reflective questioning during which key procedural elements of evaluation are addressed; this helps organize initial planning by leading to the creation of an evaluation policy. Evaluations are typically complex, and although these functional frameworks are readily available, it must be kept in mind that simplistic procedural approaches may produce questionable and crude results, especially when novices use them outside of a community of practice. However, one could argue that they serve a more general audience and are in many cases better than nothing. The collaborative processes and associated activities involved in using this type of framework may also provide a training ground for those new to evaluation, and induct them into the profession.

As with most frameworks, phase one of RUFDATA begins with asking some very basic questions about the reasons and purposes of evaluation. RUFDATA is designed to be thought provoking, and I have used this framework as an organized way to explore and address some of the key issues its structure brings to mind. In doing so, we will see that there are many ways of approaching evaluation, but also that there is a certain competency underpinning the suggested activities and courses of action, which are borrowed from a community of practice.

What are the Reasons and purposes?

Pure academic research that is non-political is not the norm in evaluating educational technology. Administrators typically initiate evaluations, and the needs of stakeholders are not always made clear. Power relationships can develop when uncertainties are present, and information can lead to control. It is no wonder that many hostile teacher–administration relationships have occurred when accountability in evaluations is forced from above (Patton, 1996). Primary decisions will have to be made on the integrity of purposes that must go beyond simply satisfying a regulation or mandate, and political constraints may very well be a determining factor in setting the purpose. Politics will almost always be present in research and evaluation, as the results will typically help make political decisions, or implicitly judge them. The classification and categorization of data alone makes evaluation political. Strategies for developing useable evaluation research must be sensitive to the political structure that surrounds it. Findings need not always be rational and objective; they may be subjective and opinionated as well, as long as it is evident to decision makers where diverse perspectives are coming from.

The main users of the evaluation should be the ones who decide its true purpose (Patton, 1996), not only for reasons of accuracy, but also for those of involvement and ownership. Ideally, the reasons and purposes would be a consensus reached by a consortium or task force of experts. The types of decisions that have to be made throughout the whole evaluation lifecycle depend on many factors, and unless an individual is highly experienced with evaluation, I would definitely opt for a multidisciplinary, eclectic approach, or at least do an ethnographic study with the stakeholders to fine-tune the evaluation's purposes. Of course, it could also be difficult to agree on a meaningful purpose if too many stakeholders are involved, and resources could be taxed. There seem to be many trade-offs to consider: the quest for more precision leads to less breadth and vice versa, and the involvement of too many people or large groups can lead to lengthy, unfocused ventures.

In educational technology, the common purposes for undertaking an evaluation usually deal directly with technology and student achievement – how well a program works and the underlying conditions for success or failure. More realistically, purposes could include any combination of planning, management, learning, development or accountability issues and concerns. The purposes may also be formative, summative or developmental in nature, either specifically or in any combination thereof.

The reasons and purposes of an evaluation are closely tied to and interrelated with the users, uses, foci and audience. These key procedural dimensions transcend the set boundaries implied by RUFDATA or any other rubric.

What will be the Uses?

Prior to knowing what the uses are, we should discover who the primary intended users of the evaluation are. Getting the right information about the right kinds of things to the right people is crucial. Determining the practical needs of users will help establish the methodology for creating reports that are relevant. This closely relates to the criteria set out in the audience section of RUFDATA. 

Reports can serve many purposes, such as providing cost analysis alongside student demographics, but should be kept focused enough not to stray from the primary purposes of the evaluation. Not only can the findings of investigations prove useful; the process itself can supply a wide variety of useful information.

Some of the more typical uses of evaluation data include:

· To build a wider basis of support based on the publication of various accomplishments.
· To provide information for prioritizing program development. 
· To observe, record and/or measure unplanned consequences of activity.
· To help make informed judgements and decisions on a wide range of issues concerning the value, quality and effectiveness of a program.
We might also think about objectives in this area. Impact objectives could focus on changes in the performance of a system or of program participants. Outcome objectives could focus on changes in knowledge, behaviours and attitudes. Process objectives may help specify the actions required for intervention and implementation.

What will be the Foci?

There are many possibilities to be considered here, including processes, outcomes, impacts, objects, activities and costs. Programs are complex, and the levels, goals and functions are so numerous that there can be more potential study foci than there are resources available to investigate them (Patton, 1996). It is necessary to find a process that narrows the range of possible questions and outcomes in order to focus the evaluation. The challenge is to find, among the many, the vital facts that are high in payoff and information load (MacKenzie, 1972, in Patton, 1996). Taking the time to develop a focus on future decisions and critical issues, and being responsive rather than reactive, will help avoid disagreement throughout the evaluation process and maintain direction. It may be necessary to narrow the range of potential stakeholders to a specific group of primary users, and use their requirements to develop the focus.

An evaluation that makes goal attainment the central issue assumes that everyone shares the values expressed by the goals, and thus creates the potential for myopic and biased results – only what is measured gets accomplished, and unanticipated, meaningful outcomes may be missed. Goal-free evaluation (Scriven, 1972) deals with the collection of data on a wide range of actual effects and evaluates their importance in meeting demonstrated needs. In reality, shifting the evaluation's focus may be necessary as the investigation reveals new information. Considering this, if the focus is needs-based rather than goal-based, the emphasis will shift from the evaluators to the evaluation itself.

What will be our Data and evidence?

The purpose may be to investigate a user interface or the cost effectiveness of a program, but we will assume here that the artefacts being explored fit within a broader context of what is commonly investigated in regards to online learning and educational technology, as previously discussed. 

Analysis, interpretations, judgements and recommendations rely on the data collected. The data could be quantitative, qualitative or mixed, and the design could be naturalistic or experimental. Depending on the purpose of the investigation, quantitative data may be all that is required; if hard numbers and facts are needed to make a decision, then this unitary approach may work well. Precision does not necessarily mean numbers, but relies more on the accuracy of the data (Hammersley, 1992). It is important to stress here that the data is not the artefact or program itself, but merely a representation of it, and one that is prone to error. Many combinations of quantitative and qualitative data can be quite reasonable and, as mentioned earlier in this paper, they are not always opposing dichotomies, as they can rely on and support each other.

Unlike pure research, developmental research that qualitatively maps how people actually perceive and experience some aspect of their world enables change. This phenomenographic approach considers the greater variances of awareness typical of the nature and types of learners in distance learning environments, and also presents a further need for 'defining' the subject. If we are to discover the relationships and differences among learners in regard to particular objects, then we must understand both the objects and the subjects explicitly. A more ethnographic, dualistic approach should be taken initially to accomplish this end, prior to any non-dualistic exploration. In this way, we are truly enabling the study of the learner's experience of learning (Marton, 1994).

I can see potential for spurious conclusions in these types of studies, as the application of educational technology itself can vary widely, aside from the program content. For example, if a developmental phenomenographic study of a particular distance learning entity were undertaken, one would almost have to have a control group to compare findings, or at least survey a large number of participants, in order to establish parameters for the 'outcome space' and provide a basis for accuracy. Bowden (1994) suggests that using open-ended questions in a survey would allow subjects to set the parameters, which could reveal their relevance structure. This would certainly have to be considered in the analysis and formation of the 'pool of meanings', given the repercussions possible through decisions and change.

Nor does simply having reasonable online content provide a solid base for the subject data if it does not exist within established and proven parameters – unless, of course, the study's focus is to investigate the effectiveness of proven curriculum and delivery on ill-defined subjects. With the ubiquitous nature of evolving online and other educational technology, I believe that a more empirical approach is not only useful but necessary in order to provide the vital data required for developing viable educational methodologies and hard and soft systems. Whether a whole program is deemed successful may depend on how data is collected concerning the subjects and their relationships to the objects.

Many qualitative studies are told in the form of stories (Eisner, 1997). With this analogy comes the importance of theme, plot and point. The way in which data is presented could quite possibly have more impact in the final analysis than the content itself does.

Who will the Audience be?

It is the human resources – the people – in any group or organization that make evaluation work. Finding people who want or need to know something is an important part of the evaluation process. More than just casually identified members of an audience, stakeholders are those who have a vested interest in the findings, and could include sponsors, staff, faculty, administrators, students, government and the general public. Beyond simply holding positions of power or authority, it is more crucial that these key people be enthusiastic and committed to the evaluation. This may mean selling the idea of evaluation to them, and/or educating them in the particulars of the project and process. In any case, selecting the proper people is what is important, not a group or organization in general, and I believe the choice of the word audience here may be a poor one. What is really needed are people who are interested and committed, and who can make or influence decisions.

The purposes that groups and audiences will have for disseminated findings are not always anticipated. Multiple audiences can broaden the impact of a study in critical ways, and evaluators may not be able to foresee any long-term effects or impacts. Targeting the interests and needs of specific people is preferred over dissemination to broad, vague audiences. Of course, this may not always be possible, as in the need to release findings to governing boards in education and the like.

What will the Timing be?

A good evaluation can be a considerable undertaking, take much time, and involve many key events and activities throughout its lifecycle. An evaluation plan can be designed before, during or after the fact. Developing a plan prior to the delivery of a course or program has obvious timeline advantages. As we will see here, timing is critical in many respects, and evaluation is more than simply collecting summative data upon a program's completion. Summative data can help judge the effectiveness, efficiency or cost of a program, and broader interventions may be evaluated more easily for the purposes of future redesign and efficacy. This type of strategy is useful in rendering overall judgements, but may be driven by the time constraints and methodology chosen (Thorpe, 1988).

On the other hand, formative evaluation data will be current and relevant, and presents opportunities for intervention and for continuance or discontinuance. Formative evaluation practices essentially address how a program is progressing and identify room for improvement, and can of course still help produce summative data. The time-consuming nature of this type of data collection may mean that practitioners have to become more involved in the investigation (Thorpe, 1988). Finding a way to effectively schedule and record some of the typical monitoring and intervention that already occurs regularly in a given situation could help produce relevant data more easily, and take pressure off practitioners.

There may also be a need to take immediate action on specific issues. Time frames affect indicators, and policy makers may not be able to wait for information. The accuracy and precision of data may have to be sacrificed for data that is more current if timely intervention is critical. The timing of ethnographic studies, for example, is crucial: interviewing participants months after they have completed a program will not paint an accurate real-time description representative of the interactions and activities experienced. Short-term indicators may have to suffice for making decisions regarding long-term results. Keeping stakeholders involved throughout any timely design alterations can reduce the associated threats to the validity and utility of an evaluation that requires immediate data for response and/or intervention. Pre-planning and determining what types of information would prove useful at particular points in time could make a huge difference in decision making. Intermittent reports including summaries and recommendations can provide much useful and timely information, including lessons learned, and skills and ideas that develop over time.

It is evident that a variety of factors affecting purpose, decision and methodology can emerge throughout an evaluation’s lifecycle. Once again, an active relationship between all parties involved is essential in order to respond and adapt to information that is timely and relevant.

Who should be the Agency conducting the evaluations?

One of the main goals of evaluation is to provide information to decision makers so that they and others may benefit from the results. We have already discussed the necessity of finding people who are interested, committed and able to render or affect decisions. Here we will discuss in broad terms the reasons for incorporating a variety of people or agencies in conducting an evaluation.

In broad-scale evaluations, no one person can be expected to act as investigator, educator, technologist and, perhaps most importantly, decision maker. Many evaluators like to consider themselves key decision makers as well, owing to their commitment, pride of ownership and direct involvement. I believe that you can validate just about any theory or opinion by collecting the right 'kind' of data, and there can certainly be a need for non-political, unbiased and objective research. Seemingly juxtaposed here is the requirement for personal involvement, insight and sensitivity. The chances are high that a significant evaluation undertaking will have many purposes, and will be presented to and by many different people. The key is to develop the situational awareness and responsiveness that will guide the interactive process that is imperative between the evaluators and the audience (Patton, 1996). Stakeholders need to be identified early and be actively involved throughout the evaluation process, not only during the final report stages. It is evident that evaluators need good communication and people skills in order to build relationships, facilitate group activities and resolve conflict among the audience.

In very sensitive cases and contrary to more participatory methods of data collection, an outside agency may be required to conduct phenomenographic or ethnographic studies to avoid personal stakes from influencing the way in which data is rendered, collected, recorded and interpreted. Once again, collaboration and communication are essential among all stakeholders for statistical, logistical and political viability.


Further to the policy-creation framework of RUFDATA, the Bates (1995) ACTIONS (Access, Costs, Teaching and learning, Interactivity and user-friendliness, Organizational issues, Novelty, Speed) model represents a different type of framework, one for specifically assessing the strengths and weaknesses of learning technologies. Here we will briefly look at its components to consider how issues from this community of knowledge can be integrated within the RUFDATA rubric.


Access

If we are to create learning programs that appeal to a large cross-section of the population, we have much to contend with. Flexibility is important even within target groups. Accessibility of a particular technology may vary between users for a number of reasons, and information may be required on connectivity and bandwidth, cultural and age differences, geographic location and user skill levels, to name a few. The evaluation issues here could concern costs, pre-program skill sets, course content and delivery, and government support, and could be considered in the reasons and purposes, foci, or data and evidence sections of RUFDATA. Timing may also be an important factor, as investigations may critically require formative, current data while users become enveloped in the program. In addition to the technical data gathered about a system, timely ethnographic surveys could be an ideal way of capturing the information required on user access issues.


Costs

Cost issues can be of paramount importance these days. Administrators usually want to know whether the costs of technology-based delivery surpass those of traditional delivery. Comparing the unit cost per learner can draw on formative, quantitative data, but investigation in these areas can produce much qualitative information as well. Considering the reasons and purposes of the evaluation, and the uses and audience, may focus an evaluation more closely on cost issues, but that is not to say that many other related or unrelated factors cannot be investigated concurrently. Other performance-driven benefits may include learning outcomes and participant satisfaction.
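To make the unit-cost comparison concrete, a small worked example (all figures hypothetical) divides fixed development costs, amortized over enrolment, plus per-learner delivery costs:

```python
def unit_cost(fixed: float, per_learner: float, learners: int) -> float:
    """Cost per learner: fixed development cost spread over
    enrolment, plus the marginal delivery cost each learner adds."""
    return fixed / learners + per_learner

# Hypothetical figures: online delivery has a higher development cost
# but a lower per-learner delivery cost, so it compares favourably
# only once enrolment is high enough.
online = unit_cost(fixed=60000, per_learner=50, learners=400)     # 200.0
classroom = unit_cost(fixed=5000, per_learner=300, learners=400)  # 312.5
print(online, classroom)
```

The break-even enrolment falls out of the same formula, which is why a unit-cost comparison at a single class size can mislead: the ranking of the two delivery modes can reverse at low enrolments.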

Teaching and learning

Whenever there is a learning need, the trend is usually towards producing brand-new materials, even though millions of existing materials are available. As is the norm these days, many traditional courses and course materials are being ported over to multimedia or online environments. This has sparked some debate over the need for new processes, pedagogies and instructional theories. Teaching and learning processes are embedded within complex systems, and investigations in this area may well belong in another category of evaluation – one which deals more specifically with instructional design and theory, the learning environment, learner issues, content and presentational design.

Investigating how well a particular technology or delivery system meets the institution’s requirements is more in line with what evaluating educational technology is all about. This would certainly fit in well when defining the reasons and purposes of an evaluation, and I suspect that a narrower focus would be required in the areas of teaching and learning to keep the data from becoming too broad and meaningless.

Interactivity and user-friendliness

Investigating a user interface, learning environment or a particular type of technology could in itself be the focus of a small evaluation, or of any value-driven inquiry. Functionality issues help answer questions concerning the success of a program, and formative findings in these areas can contribute substantially to improving a program’s overall effectiveness.

In some instances, subject-matter experts may be required to collect information, since an on-the-spot judgement must be made at the time of recording. For example, in addition to gathering user input data, the person collecting data on user interfaces may need to understand what constitutes a manageable cognitive load, sound screen design, clear information presentation and coordinated media integration. Alternatively, forms or other data collection methods would have to be designed so that a wider range of evaluators could use them easily.

Organizational issues

This rubric defines organizational issues primarily as those that may impede program development or delivery. In this context, it is extremely important to carefully consider the stakeholders and audiences involved, as findings could warrant organizational change, and as we have discussed, change can be very threatening. If organizational issues are to be a focus, then situational involvement and awareness are critical to the acceptance and implementation of any decisions resulting from evaluation findings.

Novelty

Technology can mean different things to different people, so defining what the technology is can be a starting point. Some programs have been based on new technology alone, without concern for content or pedagogy. Here we have an opportunity to make judgements and decisions about how viable a particular technology is, and the findings could break new ground. Much of the data collected may be formative, experiential and naturalistic. With the possibility of no previous data or experts being available, I would think that some model or norm for comparison would be needed to give the results credibility; otherwise, measuring technology outcomes can be a messy, inaccurate undertaking.

Speed

A competitive concern these days is how quickly and easily materials can be changed or updated. Formative analysis could provide the basis for this type of study and offer opportunities for intervention and testing. Timing would be a critical factor here, as would practitioner involvement. Evaluators may also have to be in the physical or online classroom to observe first-hand how well teachers are incorporating the technology in question into their instruction.

In contrasting the RUFDATA and ACTIONS rubrics, we see that although they seemingly serve a similar purpose, that is, to act as a guide for implementing evaluation, they are in fact quite different. Although RUFDATA has many areas that overlap and are difficult to assign to one specific category or another, it does address most of the important issues surrounding an evaluative inquiry. Anything that prompts people to start ‘thinking like evaluators’ (Saunders, 2000) could be considered useful. RUFDATA is borrowed from the evaluation community, whereas ACTIONS comes from the educational community – specifically, educational technology. The quality of the evidence produced can be directly proportional to the experience of the evaluators rather than the design of evaluation instruments (Saunders, 2000). For this reason, emphasis should be put on the involvement of individuals and groups in a community of practice. Since this is not always possible, novice evaluators must rely on direction and bodies of knowledge from external sources, such as RUFDATA.

When investigating educational technology specifically, we can once again borrow from a community of practice. However, knowing learning technology explicitly does not make one an expert in evaluating its uses. A merger between these two types of frameworks can provide the insight required in evaluating educational technology, and can help establish better-informed evaluation teams made up of the right people from different domains.

For more perspective, here is what some of the students in Lancaster University’s Advanced Learning Technology Program had to say concerning the use of RUFDATA:

“It may well be the case that real evaluation plans look nothing like the rubrics, but embedded within them are often the elements of RUFDATA. For example, the ETOILE (other) evaluation plan, whilst not conforming to the overall RUFDATA structure, nevertheless has all the elements of the RUFDATA framework.”

“I would see the RUFDATA rubric as a useful starting point for an evaluation. Its simplicity and wide applicability ensure a complex subject (often with political overtones) can be systematically documented, described and accepted as an approach. It can serve as the initial analysis at a high level.”

“The attraction of the RUFDATA approach is clear at just about every sign-post along the road. Program funding councils understand it, beginning evaluation professionals understand it, it can be used to post-hoc explain just about any evaluation study and we know -- from the most reliable of sources -- that it encompasses years of professional practice and experience. So why knock it?”

“From my own perspective, working within an organisation that has had little luck with explicit evaluation approaches, I have found RUFDATA to be highly useable on a variety of fronts. -- I feel encouraged at its ability to address evaluation from individual course level (micro level) through to organisational reviews (macro level).” 


We have seen that evaluation is more than just a series of procedures that are carried out, and that it can even be classified as a science unto itself. Evaluation has its own community of practice, which can help inform other domains. Similarly, educational technology has its own communities of practice. Evaluations that investigate educational technology have to look at the contexts in which it resides. This includes not only technological factors, but also those concerning individuals, the organization and a whole slew of pedagogical issues. 

In the end what may really matter is the perspective and influencing power of the stakeholders and decision makers. Although these people will not always be associated with the profession of evaluation, borrowing from communities of practice in both evaluation and education facilitates the construction of a better, more multidimensional and flexible evaluation approach – one that embodies the tools and practices so urgently required in improving educational technology delivery systems. 

Considering the complexities of evaluation and given limited skills, time and resources, it makes good sense to consider rubrics that address important contingencies in evaluation and educational technology practice, to serve as guides in asking the right questions and making the right choices. 

Hopefully this article has helped demystify the evaluation process. You should now be able to incorporate this knowledge in evaluating various forms of educational technology by formulating a successful, focused strategy – one which recognizes and identifies shortcomings while capitalizing on strengths and available resources.

References

Bartolic-Zlomislic, S., & Bates, A. W. (Tony). (1999). Assessing the Costs and Benefits of Telelearning: A Case Study from the University of British Columbia. Project web page: http://itemsm.cstudies.nbc.ca/survey.html

Bowden, J. A. (1994). The nature of phenomenographic research. In J. A. Bowden & E. Walsh (Eds.), Phenomenographic Research: Variations in Method: The Warburton Symposium. Melbourne, Australia: RMIT.

Cronbach, L. (1987). Issues in Planning Evaluations. In Murphy, R. & Torrance, H. (Eds.), Evaluating Education: Issues and Methods, pp. 5–35. London: Paul Chapman Publishing Ltd.

Cukier, J. (1997). Cost-benefit analysis of tele-learning: Developing a methodology framework. Distance Education, 18(1), 137-153.

Dobson, M. (1998). The formative evaluation planning guide. University of Calgary. Available: http://www.acs.ucalgary.ca/~pals/guide-tl.html  [1999, November].

Dobson, M. (1999a). An evaluation plan for ETOILE. Lancaster: Lancaster University, ETOILE document, ESPRIT Project 29086.

Dobson, M. (1999b). The lessons learned evaluation plan template. University of Calgary. Available: http://www.acs.ucalgary.ca/~pals/evplan.html [2000, May].

Dobson, M. (2000). Implementing the evaluation of ETOILE. Lancaster: ESPRIT Project 29086, Lancaster University.

Doughty, G. (1998). Chapter 13: Evaluation costs and benefits of investments in learning technology for Technology students. In M. Oliver (Ed.), Innovation in the Evaluation of Learning Technology. London: UNL.

Draper, S. W., & Foubister, S. P. (1998). Chapter 12: A cost-benefit analysis of remote collaborative tutorial teaching. In M. Oliver (Ed.), Innovation in the Evaluation of Learning Technology. London: UNL.

Eisner, E. (1997). Chapter 2, What Makes a Study Qualitative? Chapter 8, The Meaning of Method in Qualitative Inquiry. The Enlightened Eye: Qualitative Inquiry and the Enhancement of Educational Practice, pp. 27–40 & 169–195. New Jersey: Merrill/Prentice Hall.

Hammersley, M. (1992). Chapter 9, Deconstructing the qualitative-quantitative divide. What's Wrong with Ethnography? (pp. 39–55). London: Routledge.

Harrison, B. L. (1995). Multimedia tools for social and interactional data collection and analysis. In Thomas, P. (Ed.), The Social and Interactional Dimensions of Human-Computer Interfaces, pp. 204–239. Cambridge: Cambridge University Press.

Marton, F. (1994). On the structure of awareness. In J. A. Bowden & E. Walsh (Eds.), Phenomenographic Research: Variations in Method: The Warburton Symposium. Melbourne, Australia: RMIT.

Patton, M. Q. (1996). Chapter 14: Power, Politics, and Ethics. In M. Q. Patton (Ed.), Utilization-Focused Evaluation: The New Century Text (3rd ed., pp. 341–370). Thousand Oaks: Sage.

Ross, S., & Morrison, G. (1995). Evaluation as a Tool for Research and Development: Issues and Trends in Its Applications in Educational Technology. In Tennyson, R. D. & Barron, A. E. (Eds.), Automating Instructional Design: Computer-Based Development and Delivery Tools, pp. 491–521. New York: Springer Verlag.

Saunders, M. (2000). Beginning an Evaluation with RUFDATA: Theorising a Practical Approach to Evaluation Planning. Evaluation, 6(1), pp. 7–21.

Scriven, M. (1994). Evaluation as a Discipline. Studies in Educational Evaluation, 20, pp. 147–166.

Thorpe, M. (1988). Evaluating Open and Distance Learning (2nd ed.). Longman. ISBN 0-582-21592-7.


© 2000 Shaw Multimedia Inc.