Measuring What Matters - Tech Learning

Measuring What Matters

Performance measures can boost IT efficiency, generate buy-in from stakeholders, and improve technology's impact on teaching and learning. But you have to do it right.


A hot air balloonist calls down to a man he sees on the ground. “Do you know where I am?” The man replies: “41° 28’ north latitude, 81° 37’ west longitude, and about 150 feet off the ground.” The balloonist calls: “Thanks. You must work in IT.” “Yes I do—how did you know?” “Because you answered my question, and I still don’t know where I am.” The man on the ground calls back: “You must be a superintendent.” “Yes I am—how did you know?” “Because when you got here you were lost, and you’re still lost, but now it’s my fault.”

As a district technology leader, you’re the key player in helping your superintendent and other senior leaders measure performance and provide accountability across all district functions. But if you don’t understand what exactly needs to be measured and why, or how to communicate this information to your non-IT colleagues in a language they understand, the best technology in the world won’t help improve your organization. What follows is a brief primer on how school CIOs and their staffs can begin to build, report, and act on performance measures.

STEP 1: UNDERSTAND THE METRIC SYSTEM

What do we mean by performance measures? In a nutshell, critical information that, once organized and shared, will spur action and provide accountability. There are three categories of measures:

Input Measures = what you have to work with. Example: the number of networked computers in a school, their age, RAM, and/or processor speed.

Output Measures = what’s been done with inputs. Example: improved graduation rates and test scores. Output measures in one context (end-of-year test scores) can be input measures in another (baseline data for the next school year).

Work Measures = how inputs become outputs. The most important work measures are “lead indicators”—data you can gather and report today that tells you something important about how your output measures will look in the weeks, months, and years to come. Example: student attendance describes current work—how much of the available time teachers and students actually work together—and also helps schools predict output measures such as grades, promotion, and test scores.

Measures best drive performance improvement and provide accountability when there’s a clear and shared understanding of cause and effect. For example, driving increased student use of computers will be easier if everyone involved believes that the action will improve student test scores or grades. Direct cause-and-effect relationships between measures are rare, however, so IT-related output measures need to be linked to school or district outputs through a “theory of action.” For instance, if you believe that student test results improve when students spend more time with particular software, then computer up-time is an output measure for IT activity that helps create a condition (software use) that enables the goal of higher test scores.

STEP 2: DEFINE YOUR GOALS

Now that you have some working definitions, come up with a set of four to eight IT output goals. Your goals should focus on technology architecture and its impact on the district’s ongoing operations and stem from district plans. No matter what goals you decide on, they should address efficiency and effectiveness. Efficiency goals might include:

  • Reducing the annual overhead of noninstructional areas such as operation and maintenance of schools, transportation, food services, and central office administration.
  • Realizing a positive return on investment for new technology projects.
  • Reducing total cost of ownership of end-user equipment, especially PCs and peripherals.

Effectiveness goals for IT are harder to define because they should be tied to overall district goals. Some possible IT effectiveness areas that can be tied to district goals include:

  • Customer satisfaction and service quality
  • Technology availability and response time
  • Technology usage rates

STEP 3: DEVELOP THEORIES OF ACTION

Generate a customized theory of action for each of your goals (see Step 1 for a definition). It’s important to regularly back up these theories of action with external research and internal results so they are credible to stakeholders.

STEP 4: DESIGN MEASURES

Next, define and develop measures for tracking your goals. If one of your efficiency objectives is to reduce IT overhead, for example, you might design measures for tracking how much end-user technology actually costs over its entire life cycle (total cost of ownership), including purchase and installation, training and support, repairs and upgrades, and removal and disposal. A quarterly or monthly update of projected annualized TCO per PC will be affected by new PC purchases, renegotiation of maintenance contracts, changes in the software environment, and other factors. If another one of your goals is to improve technology usage rates, you might design measures for tracking how often PCs are used and for what purpose. IT systems can generate almost any data you could want, and the processing of that data into measures can be automated, so be persistent in finding the data sources you want and setting up the measurement reporting.
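As a rough illustration of the TCO measure described above, the calculation can be sketched in a few lines of code. All cost categories follow the life-cycle stages named in the article, but every dollar figure, fleet size, and life-cycle length below is hypothetical:

```python
# Hypothetical sketch: projected annualized total cost of ownership (TCO)
# per PC. One-time costs (purchase, installation, disposal) are spread
# over the life cycle; recurring costs (support, repairs) count each year.

def annualized_tco_per_pc(costs, pc_count, lifecycle_years):
    """Return projected annual TCO per PC for a fleet."""
    one_time = costs["purchase"] + costs["installation"] + costs["disposal"]
    recurring = costs["training_support"] + costs["repairs_upgrades"]
    total_per_year = one_time / lifecycle_years + recurring
    return total_per_year / pc_count

# Illustrative figures only -- not from the article.
costs = {
    "purchase": 500_000,          # one-time, whole fleet
    "installation": 50_000,       # one-time
    "disposal": 25_000,           # one-time (removal and disposal)
    "training_support": 120_000,  # per year
    "repairs_upgrades": 80_000,   # per year
}

tco = annualized_tco_per_pc(costs, pc_count=1_000, lifecycle_years=4)
print(f"Annualized TCO per PC: ${tco:,.2f}")  # prints $343.75
```

A quarterly refresh of this number is then just a matter of re-running the calculation with updated cost inputs, which is the kind of automated measure processing the step recommends.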

STEP 5: REPORT TO STAKEHOLDERS

Finally, decide on four to eight key measures you want to regularly report to your district or school. (You can make other measures available on the school intranet or include them in an annual report.) Focus on output measures and key leading indicators and provide supporting narrative with your report, including any relevant goals or benchmarks or changes in input measures (like staff reductions) that provide useful context. Remember: any measure you report must be tied to a clear, defensible theory of action. If nobody will help you articulate those theories, you need to do it anyway and keep repeating them until others challenge or accept them.

Work measures are important too—they should focus the daily work of IT, and there should be internal theories of action about how they relate to inputs and outputs. For example, IT might internally report and use such work measures as help desk call abandon rate (percentage of callers who give up before their call is answered), percentage of problems resolved during a help desk call, and average time to fix problems that could not be fixed during the call. Such measures will be numerous and will be more helpful if organized conceptually. Two frameworks for doing that are:

  • Use: IT in schools is either for management (used to help run the school or district as a whole, like a student information system), teaching (used by teachers to support their work, like an electronic grade book), or learning (used by students in the learning process, like a biology dissection simulation).
  • Architecture: IT capacity can be classified as infrastructure (Internet access, wide-area and local-area networks, servers, routers, and wiring); applications (finance, human resources, student information systems); data (operational data stores and data warehouses or stored data sets); portal (Web presence, content management, and software and systems interface issues); and end-user devices (PCs, printers, handhelds, and other devices).
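The three help desk work measures named above can be derived from ordinary call records. The sketch below is a hypothetical illustration; the record fields and sample data are invented for the example:

```python
# Hypothetical sketch: computing three help desk work measures from call
# records -- abandon rate, first-call resolution rate, and average fix
# time for problems that had to be escalated beyond the call.

def helpdesk_measures(calls):
    """calls: list of dicts with 'abandoned' (bool), 'resolved_on_call'
    (bool), and 'fix_hours' (float, or None if no escalation occurred)."""
    abandoned = sum(1 for c in calls if c["abandoned"])
    answered = [c for c in calls if not c["abandoned"]]
    resolved = sum(1 for c in answered if c["resolved_on_call"])
    escalated = [c["fix_hours"] for c in answered if not c["resolved_on_call"]]
    return {
        "abandon_rate": abandoned / len(calls),
        "first_call_resolution": resolved / len(answered),
        "avg_fix_hours": sum(escalated) / len(escalated) if escalated else 0.0,
    }

# Invented sample data: five calls, one abandoned, two fixed on the spot.
calls = [
    {"abandoned": True,  "resolved_on_call": False, "fix_hours": None},
    {"abandoned": False, "resolved_on_call": True,  "fix_hours": None},
    {"abandoned": False, "resolved_on_call": True,  "fix_hours": None},
    {"abandoned": False, "resolved_on_call": False, "fix_hours": 6.0},
    {"abandoned": False, "resolved_on_call": False, "fix_hours": 2.0},
]

m = helpdesk_measures(calls)
print(m)  # abandon rate 0.2, first-call resolution 0.5, avg fix 4.0 hours
```

In practice these figures would be pulled from the help desk ticketing system and rolled up by building, day, and reason, matching the disaggregation used in the sample measures later in this article.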

Regardless of how IT work measures are organized, the appetite of non-IT people for them is limited, so reporting should focus on key IT outputs, their relationship to district goals, and any noteworthy trends in inputs or leading indicators. Be sure to revisit these measures, because the ones that matter may change over time, and new data sources are always coming to light.

Of course, no amount of measurement reporting can overcome harsh budget realities. Performance measures have not translated into more technology funding in my district, where a revenue crisis drove IT spending and staffing down more than 50 percent over a three-year period.

Nevertheless, measures are useful during such crises, as they help prioritize use of limited resources and guide daily management and continuous improvement efforts to squeeze maximum efficiency and effectiveness out of every dollar and every minute. And the discipline of making and using measures holds the hope that future investment of resources in K–12 IT will be better focused on improving efficiency and effectiveness. Our children deserve no less.

Peter Robertson is the chief information officer for the Cleveland Municipal School District. He’s on leave for the 2004-2005 school year to complete his doctorate in educational leadership at Columbia University.

A Trial-by-Error Example

Here’s how one district tweaked performance measures to better match its objectives.

BEFORE:

Pre-2003, Cleveland’s IT department had no formal role in PC procurement. Years of uncoordinated PC buying in reaction to grant funds had created a huge unfunded “total cost of ownership” liability. Efforts to reduce that liability generated mountains of data about help desk, field support, and server and bandwidth usage. Because such measures were relatively easy to create, IT reporting consisted of long reports of field support ticket statistics sorted by category. Summaries of that reporting showed a roughly 40 percent decline in average problem resolution time even while ticket volume doubled. Such reporting was used to explain that further service improvement required additional resources, but the explanations lacked a clear connection to district goals.

AFTER:

PC inventory and software usage hours were more difficult to measure because the data had to be collected at network servers, aggregated across the district, and associated with the PCs and software that triggered it. But it was still relatively simple and, once done, easy to maintain. The resulting measures of input (PC inventory) and output (software usage) were more useful in tying IT activity to district goals. Showing a 30 percent decline in software usage hours per computer was compelling to non-IT people trying to understand technology support problems. The same was true for the inventory data, which revealed that 15 percent of the classroom PCs were too old to be connected to the Internet and about half of the district’s PCs were out of warranty.

Sample Measures

Seven measures the Cleveland Municipal School District IT department reports to stakeholders:

  • Tech support cost (disaggregated by month)
  • Ratio of PCs to students (classroom, month, and PC status)
  • Network up-time percentage (device, connection, and hour)
  • Help desk call volume (building, day, and reason)
  • Problem ticket resolution time (building, day, and reason)
  • Software and Internet usage hours per PC (building, day, and software or Web site title)
  • Number of data warehouse users and events (user, day, and report)
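To show how one of these disaggregated measures might be assembled, here is a hypothetical sketch of software usage hours per PC, broken down by building. The building names, hours, and PC counts are invented for illustration:

```python
# Hypothetical sketch: software usage hours per PC, disaggregated by
# building, from (building, hours) usage log entries.
from collections import defaultdict

def usage_hours_per_pc(usage_logs, pcs_per_building):
    """usage_logs: iterable of (building, hours) tuples;
    pcs_per_building: mapping of building name -> PC count."""
    totals = defaultdict(float)
    for building, hours in usage_logs:
        totals[building] += hours
    return {b: totals[b] / pcs_per_building[b] for b in pcs_per_building}

# Invented sample data.
logs = [("East High", 120.0), ("East High", 80.0), ("West Middle", 50.0)]
pcs = {"East High": 40, "West Middle": 25}

print(usage_hours_per_pc(logs, pcs))  # {'East High': 5.0, 'West Middle': 2.0}
```

The same aggregation pattern extends to the day and software-title dimensions the measure calls for; the key design choice is normalizing raw usage hours by inventory so buildings of different sizes can be compared.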

Resources

To learn more about measurement reporting, consult these sources.

  • Establishing an Integrated Performance Measurement System, the second volume in a six-part handbook offered by The Performance-Based Management Special Interest Group: www.orau.gov/pbm/pbmhandbook/pbmhandbook.html
  • Balanced Scorecard Step-by-Step: Maximizing Performance and Maintaining Results by Paul R. Niven (Wiley, 2002)
  • Results: The Key to Continuous School Improvement by Mike Schmoker (ASCD, 1999)
  • The Six Sigma Way by Peter S. Pande, Robert P. Neuman, and Roland R. Cavanagh (McGraw-Hill, 2000): www.sixsigmaway.com
