American Accounting Association

Accounting Education News
Faculty Development

Designing and Implementing
an Academic Scorecard

In their article from Change, Harold O'Neil, Estela Bensimon, Mike Diamond and Mike Moore apply the "balanced scorecard" framework from organizational performance literature to a specific academic program's assessment and planning processes with an eye toward increasing meaningfulness and effectiveness.

Most of us remember that once upon a time, universities enjoyed an honored status in society -- a time when society intuitively embraced the academy's mission and supported it generously, largely without questioning what went on within the ivy-covered walls. As the millennium approaches, however, universities face growing expectations and must provide increased accountability for the outcomes they produce. Our institution, the University of Southern California (USC), is no exception.

But colleges and universities also represent a distinctive type of organization, and it is to this distinctiveness that we most often attribute our lack of rational measures of institutional accountability and effectiveness. The image of organizations as rational entities dedicated to the pursuit of clear and measurable goals may not be a true depiction of any organization, and it is particularly ill-fitting for a university. The predominant characteristics of universities -- the extraordinary amount of autonomy and professional discretion enjoyed by faculty, decision-making by compromise and bargaining, and the limits on administrators' formal authority -- have earned them a unique designation: "organized anarchies."

In an organization dubbed "anarchical," indicators of performance would seem particularly out of place. But in the face of fiscal stress, it sounds selfish and arrogant to argue that accountability measures premised on organizational rationality are incompatible with the anarchical characteristics of the universities.

Our purpose in this article is to describe how a faculty committee at our own Rossier School of Education, despite strong reservations about the value of quantitative measures of performance, adapted a model originally developed for business firms to satisfy the central administration's need to know how we are doing and how we measure up to other schools of education. In the course of doing so, we found that the model also satisfied our own need for a simple and multidimensional measure that could guide our efforts to improve.

Scrambling for "Metrics of Excellence"
For the last few years, the provost's office at USC has required each academic unit to provide "metrics of excellence" for discussion at our annual fall budget meetings. The provost's office has provided guidance, but each unit has had the responsibility of providing the metrics that are most appropriate for its particular discipline and school. Results have been mixed. In some cases, far too many metrics were provided, making it difficult to focus on those key areas that define academic excellence. In others, units overemphasized input measures or relied upon external rankings as the primary metric of excellence. Little consistency was achieved across units, making it difficult to make inter-school comparisons.

Needless to say, we did not look forward to the provost's annual request. Our approach tended to be one of getting the report done as quickly as possible. As soon as it was completed and submitted, we filed it away and forgot about it. This was not so much because the report was not taken seriously as because it was seen to be irrelevant. The performance indicators included were not connected in any visible way to the decisions that we had to make about program development, enrollment management, or the allocation of resources.

When it became clear that the annual "metrics of excellence" exercise was not a passing fad, and that the metrics might play a more critical role in determining our access to university resources, we decided to think about the report more seriously. In effect, we decided that we were investing too much of our most valued and most scarce resource -- time -- in compiling information for a report that would be put away in a file. As a result, we decided to assume ownership of the "metrics of excellence" and to design them in a way that would be useful to us and to the faculty of our school. To that end, a faculty committee was charged with coming up with a set of metrics of excellence that we could commit to for the next three to five years, and that would enable us to reflect more intelligently about how well (or how poorly) we were accomplishing the particular initiatives that we care about. Certainly, our story is not unique, but others who initially resisted the process, as we did, may find our experience salutary.

Designing Metrics that are Simple, Practical, and Conducive to Organizational Learning
Despite many reservations, we started with the U.S. News & World Report indicators because they have achieved a wide measure of popular acceptance. These annual rankings of the "best" graduate schools at least serve as a standard for market choice -- helping to determine the kind of student that we can attract. At the same time, they correlate highly with National Academy of Sciences ratings of the same programs. (See article by Evan Rogers and Sharon J. Rogers in the May 1997 AAHE Bulletin.) To this measure, we added a set of indicators that responded directly to the provost's request. These included 1) the quality of undergraduate and graduate students, 2) the quality of faculty, 3) the quality of academic programs, and 4) the nature and efficiency of school operations. Finally, we tried to be creative in adding some qualitative indicators.

This conventional, "bottom-up" approach to determining metrics of excellence began with the identification of some standard indicators of quality and productivity like student test scores, retention rates, grant dollars per faculty member, or average number of publications per faculty member. But rather than stopping with the resulting "laundry list" of discrete indicators, we turned to the literature on organizational performance and assessment for help in designing an approach that could both capture the complexity of an academic organization and present a coherent image of our school's performance. We found a promising framework in Robert Kaplan and David Norton's "balanced scorecard" approach (Harvard Business Review, Vol. 70, No. 1, 1992, and Vol. 71, No. 5, 1993).

Although the balanced scorecard was developed with business organizations in mind, we found the framework particularly adaptable to the unique characteristics of academic organizations. Kaplan and Norton defined the "balanced scorecard" as a "set of measures that gives top managers a fast but comprehensive view of the business," by including "financial measures that tell the results of actions already taken," as well as operational measures of customer satisfaction, internal processes, and the organization's innovation and improvement activities.

A fundamental feature of the balanced scorecard is that it allows decision-makers to view organizational effectiveness from four perspectives simultaneously: 1) the financial perspective (How do we look to shareholders?); 2) the internal business perspective (What must we excel at?); 3) the innovation and learning perspective (Can we continue to improve and create value?); and 4) the customer perspective (How do customers see us?). As such, it provides information from multiple perspectives while minimizing information overload by limiting the number of measures included.

In order to make the balanced scorecard fit the parameters of the academic organization more closely, our faculty committee made some minor modifications in the wording of the four perspectives and of the questions that define them (see Chart 1 on AAA Webpage). "Financial perspectives" was replaced with "academic management perspective," and instead of asking "How do we look to shareholders?" we asked, "How do we look to our university leadership?" (In public institutions, this question might be expanded to include "statewide coordinating boards" or "systemwide administrators.") For the original "customer perspective" we substituted "stakeholder perspective" and identified students and employers as our most significant stakeholders. (For public institutions, this stakeholder set could be expanded to include elected officials and other stakeholders who have influence over budget appropriations for higher education.) We kept the original names of the two remaining perspectives. In addition to these changes, we renamed the "balanced scorecard" the "academic scorecard."
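
For readers who think in more concrete terms, the renamed perspectives and their defining questions can be captured in a small data structure. The sketch below is illustrative only; the dictionary layout and the wording of the stakeholder question are our paraphrase, not part of Kaplan and Norton's instrument or of our formal scorecard.

```python
# Illustrative only: the four perspectives of the academic scorecard and the
# questions that define them, as adapted from the balanced scorecard. The
# stakeholder question is a paraphrase of the original "How do customers see us?"

academic_scorecard = {
    "academic management": "How do we look to our university leadership?",
    "stakeholder": "How do we look to our stakeholders (students and employers)?",
    "internal business": "What must we excel at?",
    "innovation and learning": "Can we continue to improve and create value?",
}

if __name__ == "__main__":
    for perspective, question in academic_scorecard.items():
        print(f"{perspective.title()} perspective: {question}")
```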

Consistent with the balanced scorecard methodology, we then began the process of developing goals and measures for each of the four domains. Our choice of goals and measures was guided by the current priorities of the university and the school.

The goals and corresponding measures are not fixed. As our environment changes, some goals may be dropped and new ones added. More importantly, within each of the four perspectives we limited our selection of goals to five or fewer and kept the goals and indicators simple in order to maximize use of easily accessible data that are integral to established practices and processes.

In selecting indicators, we were guided by the following criteria: 1) they had to reflect our values; 2) they had to be simple; 3) they had to be meaningful; 4) they had to be easy to represent visually; 5) they had to facilitate organizational learning; 6) they had to support comparisons between us and other units both within and outside the university; and 7) they had to permit analysis over at least four years. In short, we wanted our indicators of organizational performance to be ordinary rather than exceptional, routinely applied to the rhythms of academic management. Rather than adding them on to these processes, we wanted the indicators to be based on data we already collect on a regular basis.

Finally, we needed a reporting format that would succinctly communicate our effectiveness. (Throughout this process, we remained concerned that concepts like "metrics of excellence" and "benchmarks" can produce negative reactions if they appear to be a modern version of "Taylorism" that treats the university as a machine, or that tries to reduce the complex and messy human processes of universities to numerical abstractions.)

In Table 1, we provide a detailed description of the goals, measures, and benchmarks for just one of the perspectives that make up the academic scorecard -- the "stakeholder perspective" -- to help others who may want to adapt our approach.

We selected six goals to address the needs of stakeholders: a) quality of academic programs; b) student-centeredness; c) quality of faculty; d) value for money; e) alumni satisfaction; and f) employer satisfaction. In some cases, we have been able to identify goals, but have not yet fully developed the associated measures and benchmarks. Blanks for progress statements in the right-hand column indicate that benchmarks are currently being developed. We are continuing to develop specific goals, measures, and benchmarks for the remaining three perspectives.
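
To make the structure of Table 1 easier to adapt, the goal list for the stakeholder perspective might be represented as a simple set of records, as in the sketch below. Only the goal names and the U.S. News measure discussed later come from our scorecard; the field names and the empty slots are illustrative placeholders for measures and benchmarks that are still being developed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScorecardGoal:
    """One goal within a scorecard perspective (field names are illustrative)."""
    name: str
    measure: Optional[str] = None     # how the goal is tracked
    benchmark: Optional[str] = None   # target or comparison point
    progress: Optional[str] = None    # left blank while benchmarks are developed

stakeholder_goals = [
    ScorecardGoal("quality of academic programs",
                  measure="U.S. News & World Report ranking",
                  benchmark="top-10 schools of education (see Table 2)"),
    ScorecardGoal("student-centeredness"),
    ScorecardGoal("quality of faculty"),
    ScorecardGoal("value for money"),
    ScorecardGoal("alumni satisfaction"),
    ScorecardGoal("employer satisfaction"),
]

if __name__ == "__main__":
    for goal in stakeholder_goals:
        print(f"{goal.name}: {goal.measure or 'measure under development'}")
```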

The benchmarks listed correspond to actions that the Rossier School of Education had already initiated to improve its performance. For example, until a few years ago, we had a steady stream of students, but as the economy changed and as competition from neighboring institutions stiffened, being more intentionally student-centered became a critical goal. Major investments were made to expand student services, including hiring new staff, developing new and more attractive materials, equipping the student lounge with computer stations, and using electronic mail and Web-site technology more effectively to disseminate information. We felt intuitively that these changes were having a positive effect, but other than anecdotal information we did not have indicators that enabled us to determine what was working and for whom. We intend to gather the necessary data and, where possible, benchmark our performance against peer institutions.

Goal F -- "employer satisfaction" -- is directed to the school districts that are the major employers of teachers graduating from USC. The program has several unique features -- including extensive fieldwork prior to student teaching -- which we assumed give us a competitive advantage over the far larger programs in public institutions with which we compete. But we have no factual information to support this belief.

More significantly, since our mission is to "redefine excellence in urban education," it is important that we determine how effective we are in preparing teachers who have the knowledge and skills to be responsive to the multilingual and multiracial student populations that predominate in Los Angeles' urban school districts.

Establishing appropriate benchmarks is the most time-consuming aspect of creating the academic scorecard, mainly because it requires baseline data that enable judgments to be made not only about "how well we are doing" but also about "what new practices, policies, or initiatives we need to adopt in order to improve."

Students represent our most important stakeholder group, and the "quality of academic programs" will reflect how we are perceived by prospective students as well as those who are likely to employ them. The perception of quality in our academic programs is important in attracting high-quality graduate students nationally and locally. It also impacts our ability to recruit faculty, to attract grants, to secure prestigious fellowships for our graduate students, and so on. Finally, it also influences how we are perceived by our own central administration and other powerful actors within the university.

Although many will find it surprising, the measure we chose to use for the goal of improving the "quality of academic programs" is our ranking in U.S. News & World Report. Admittedly, this is not a very creative measure, but we chose it because these rankings have become a de facto standard of excellence for prospective students and faculty that we felt we could not afford to ignore. U.S. News began ranking graduate schools of education in 1995. Unfortunately, over the last five years the Rossier School of Education has lost ground. Our overall ranking has fallen from 23 in 1995 to 31 in 1999. Particularly troubling is our low "reputation" ranking (32) by academics in 1999.

An alternative view is that being ranked overall as a "31" is acceptable, as there are 1,191 graduate schools of education, both public and private, and this rank places us in the top 3 percent of all graduate schools of education. In addition, we have achieved the goal of being among the top 10 schools of education at private universities. Clearly, we needed some agreed-upon benchmarks in order to measure progress and to judge what we meant by "success."

To create such benchmarks, we analyzed the top 10 schools of education in the U.S. News rankings. To derive a benchmark average, we computed a median for all ranks and a mean for the other indicators. (For example, the mean of the average 1998 verbal GRE scores for these institutions is 549. Also, the median reputational rank by academics is 5, which indicates that half of the top-10 universities fall below this tied rank and half fall above it. Note that we truncated the median values in Table 2 to whole numbers to facilitate discussion.) These benchmark statistics are shown in Table 2.

We then compared the computed benchmarks with our USC rankings. As shown in Table 2, we have many opportunities to improve. For example, the benchmark GRE verbal score is 549, whereas our GRE verbal score is 503. Overall, compared with this benchmark, we have to improve by approximately half a standard deviation. To do so we are creating a set of specific initiatives to support improvement across a broad range of indicators. The reputation ranking will be the most difficult benchmark to change quickly, as reputations often take decades to change. However, improvement on the remaining rankings is amenable to creative, short-term interventions.
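
The benchmarking arithmetic itself is straightforward, as the sketch below illustrates: a median for ranked indicators, a mean for the others, and the gap to the benchmark expressed in standard-deviation units. The top-10 figures in the sketch are hypothetical placeholders (chosen so that their mean works out to the 549 reported above), and the assumed spread of GRE verbal scores is ours; only the 549 benchmark, USC's 503, and the median rank of 5 come from the discussion of Table 2.

```python
from statistics import mean, median

# Hypothetical reputational ranks (by academics) for the top-10 schools;
# the real Table 2 data are not reproduced here.
top10_reputation_ranks = [1, 2, 3, 4, 5, 5, 7, 8, 9, 10]

# Hypothetical 1998 average GRE verbal scores for the top-10 schools,
# chosen so that their mean matches the reported benchmark of 549.
top10_gre_verbal = [560, 545, 555, 540, 552, 548, 544, 551, 546, 549]

benchmark_rank = median(top10_reputation_ranks)   # median for ranked indicators
benchmark_gre = mean(top10_gre_verbal)            # mean for the other indicators

usc_gre_verbal = 503      # USC's GRE verbal score, as quoted above
assumed_gre_sd = 100      # assumed spread of GRE verbal scores (our assumption)

gap_in_sd = (benchmark_gre - usc_gre_verbal) / assumed_gre_sd

print(f"Benchmark reputational rank (median): {benchmark_rank:.0f}")
print(f"Benchmark GRE verbal (mean): {benchmark_gre:.0f}")
print(f"USC gap to benchmark: {gap_in_sd:.2f} standard deviations")
```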

The final step in the process of creating benchmarks is to develop effective graphics that allow stakeholders to see easily how we are doing in meeting or exceeding the benchmarks. (See Chart 2 on the AAA Webpage for several prototype displays.) Such displays need to be created for each indicator and updated with real data annually. The format for the displays was modified from an idea -- the "dashboard" -- suggested by Christopher Meyer in the Harvard Business Review (Vol. 72, No. 3, 1994) and in use by some businesses.
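
As a rough illustration of what a dashboard-style display might report, the sketch below prints one line per indicator, comparing the current value with its benchmark. The two indicators shown use figures quoted above; the display format, the status labels, and the "higher is better" flag are our own assumptions, and a production version would draw on real data refreshed each year.

```python
# A minimal, text-only "dashboard" sketch: one line per indicator comparing the
# current value with its benchmark. Values come from the discussion above;
# everything else (labels, layout, direction flags) is illustrative.

indicators = [
    # (name, current value, benchmark, higher_is_better)
    ("GRE verbal (mean)", 503, 549, True),
    ("Reputational rank by academics", 32, 5, False),
]

def status(current, benchmark, higher_is_better):
    met = current >= benchmark if higher_is_better else current <= benchmark
    return "meets/exceeds benchmark" if met else "below benchmark"

if __name__ == "__main__":
    for name, current, benchmark, higher in indicators:
        print(f"{name:32s} current: {current:>4}  benchmark: {benchmark:>4}  "
              f"-> {status(current, benchmark, higher)}")
```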

How Does the Academic Scorecard Help the Administration?
From the perspective of a central administration, an instrument like the academic scorecard should make it easier for the university to accomplish its strategic goals. The USC provost wants metrics of excellence to help him determine the quality of an academic unit and whether that quality is improving or declining. This is particularly important in a university like USC, which is highly decentralized and where most academic and budgetary decisions are made at the dean's level without the involvement of the provost. The motivation behind the "metrics of excellence" lay in the provost's realization that a balanced budget was in itself a very weak indicator of academic quality. But the provost receives 19 to 20 reports from the various schools and departments of the university annually, and the provost's office has not been successful in developing a systematic way of comparing and evaluating those reports. A mechanism for doing so might be an academic scorecard completed by each unit.

The scorecard is attractive because it offers a format within which to establish common measures across academic units that have shared characteristics. For example, at USC the Schools of Education, Social Work, and Planning and Policy Development are organized into a cluster that is expected to play a leading role in advancing the "Urban Paradigm," one of the university's strategic pathways.

Given this shared mission, it would make sense for these three schools to develop some shared metrics of excellence to report on how well they are doing vis-à-vis this urban strategic initiative. The academic scorecard provides a practical way of doing this, and the next phase of our project will be to pilot its use among volunteer schools within USC. Clearly, for the administration, one of the advantages of a group of schools adapting the scorecard is that it would enable systematic follow-up -- something that is impossible under the current system.

Another appeal of the academic scorecard is that it keeps initiatives like USC's metrics of excellence from quickly degenerating into a numbers game. If there are no obvious and consistent uses of the data, deans will not take the data collection seriously. The simplicity of the scorecard also makes it easier for academic units to show how budget allocations are linked to the metrics of excellence. For example, the Rossier School of Education was able to explain budget decisions in its FY2000 budget plan by showing their relationship to particular academic scorecard indicators. This gave the university administration clearer criteria against which to judge the reasonableness of the dean's allocations, and enabled the school to demonstrate that a particular budget item that might appear at first glance to be out of line with that of previous years was, in fact, the result of an informed decision.

Some Conclusions
Much of the literature on higher education administration regards management tools adapted from the business world as nothing more than short-lived fads. We expect that it will be four to five years before we can assess whether the scorecard makes any difference in how our university and the individual schools that compose it are managed, how priorities are set, or how resources are allocated.

Still further ahead will be the test of whether there is any evidence that a tool like the academic scorecard affects the bottom line: the quality of teaching and learning. It would not be surprising to discover as well that the use of the scorecard -- and particularly the processes through which people must work together in order to develop it -- has latent benefits that contribute to organizational well-being, like conversations that encourage the development of shared values.

If a university can align market-sensitive measures of effectiveness with parallel measures of its core processes and with its mission -- and can get both measures to a high level -- it will be in a good position to maintain excellence amid turbulent change. Measures of success in how core processes are functioning must be consistent with purposes and shared values.

Deciding which of these many processes are "core" -- that is, which substantially influence essential areas of performance -- is a daunting task. Even more daunting is how to assess them. We believe that organizing an essential subset of measures through a medium like an academic scorecard provides a useful way to conceptualize and display the overall academic, educational, and financial performance of a particular academic unit. But the most effective use of a device like the academic scorecard depends on the wider development of and commitment to credible, mission-driven measures.

In the absence of this commitment, universities will experience growing state and federal intervention to impose measurement criteria and systems. We, as academic leaders, need to seize the initiative by adopting measures of success that are a truly useful management tool for our institutions and that have credibility with the institutions' internal and external stakeholders. If these measures become an integral part of formulating a university's mission, strategies, and processes of continuous improvement, it is highly likely that these same measures will satisfy the externally driven demands for accountability.

Citation: O'Neil, H. F., Bensimon, E. M., Diamond, M. A., and Moore, M. R. (1999). "Designing and implementing an academic scorecard," Change: The Magazine of Higher Learning, Vol. 31, No. 6, pp. 32-40.

The authors wish to thank the members of the R&D Committee at the University of Southern California, School of Education, for their advice and counsel. The committee members were Dennis Hocevar, Agnex Lin, Harry O'Neil (Chair), Michael Newcomb, Donald Polkinghorne, Joan Rosenbert, Marta Sota (Student), David Thomas (Student), and Estela Bensimon (ex officio). The work reported herein was supported in part by the James Irvine Foundation, under grant number 98-106, and in part under the Educational Research and Development Centers Program, PR/Award Number R305B60002, as administered by the Office of Educational Research and Improvement, U. S. Department of Education. The findings and opinions expressed in this article do not reflect the positions or policies of the National Institute on Student Achievement, Curriculum, and Assessment, the Office of Educational Research and Improvement, or the U. S. Department of Education; nor do they necessarily reflect the position or policies of the James Irvine Foundation.

Address correspondence to Harold F. O'Neil, Jr., 15366 Longbow Drive, Sherman Oaks, CA 91403; Telephone: 818-501-4004; Fax: 818-907-2760; E-mail: honeil@usc.edu .