
 How do I manage organizational performance?

7 to 10 pages

  • An introduction 
  • A brief overview of the organization and stretch goals for 5, 10, and 20 years
  • Support for the goals based on the organization’s current performance trajectory, with a discussion of the data measures and outcomes the organization is trying to achieve
  • A discussion of how the goals will help shape and support positive social change in the future related to the organization’s mission
  • A definition of the metrics you want to see as specific outcomes for the organization
  • An overview of the performance improvement/evaluation plan, covering goals, benchmark metrics, frequency of measures, the person accountable for the outcomes, actual metrics achieved, and a plan to keep improving even when metrics are met (a machine-readable sketch of such a plan follows this list)
  • Highlights of the organization’s progress and the potential positive impact on specific stakeholder groups
  • A conclusion (Provide highlights of what you just discussed and what you want your reader to remember.)
  • References
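
If the evaluation plan will also be tracked outside the paper, it can help to keep it machine readable. The sketch below models one row of such a plan in Python; every field name and example value is an illustrative assumption, not something the assignment prescribes.

```python
from dataclasses import dataclass

# One row of the performance improvement/evaluation plan described above.
# Field names and example values are illustrative assumptions.
@dataclass
class PlanEntry:
    goal: str                # the stretch goal this metric supports
    benchmark_metric: str    # what is measured and the target value
    frequency: str           # how often the measure is taken
    accountable_person: str  # who owns the outcome
    actual_metric: float     # latest observed value
    target_metric: float     # benchmark the actual value is compared against
    improvement_plan: str    # next step, even when the target is already met

entry = PlanEntry(
    goal="Raise patient satisfaction",
    benchmark_metric="Mean satisfaction score (1-5 scale)",
    frequency="Quarterly",
    accountable_person="Director of Quality",
    actual_metric=4.2,
    target_metric=4.0,
    improvement_plan="Raise the benchmark to 4.4 and expand follow-up calls",
)
print(entry.actual_metric >= entry.target_metric)  # True: target met, keep improving
```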

Evaluation and Program Planning 61 (2017) 125–127

Being smart about writing SMART objectives

May Britt Bjerke*, Ralph Renger
University of North Dakota, School of Medicine & Health Sciences, Center for Rural Health Evaluation, 250 Centennial Dr. Stop 8138, Grand Forks, ND 58202-8138, United States

* Corresponding author. E-mail addresses: [email protected] (M.B. Bjerke), [email protected] (R. Renger).

Article history: Received 17 October 2016; received in revised form 16 December 2016; accepted 19 December 2016; available online 23 December 2016.

Keywords: Objective development; Mainstreaming; Evaluation guidance; SMART objectives

Abstract

This article challenges the conventional wisdom in mainstream evaluation regarding the process for developing specific, measurable, attainable, relevant, and time-bound (SMART) objectives. The article notes several advantages of mainstreaming the SMART method, including program capacity building and being able to independently monitor progress toward process and outcome objectives. It is argued that the one-size-fits-all approach to writing SMART objectives is misleading. The context in which the evaluation is conducted is a key deciding factor in how and when the SMART criteria should be applied. Without an appreciation of the evaluation context, mainstream users may be developing objectives that are far from smart. A case example is presented demonstrating a situation where a stepwise, rather than simultaneous, application of the SMART criteria was necessary. Learning from this case, recommendations are forwarded for adjusting how SMART criteria should be presented in mainstream evaluation manuals/guides.


1. Introduction

Doran (1981) first introduced the specific, measurable, assignable, realistic, and time-related (SMART) method for writing effective management goals. Today, the SMART method is commonly cited in management as the standard for developing effective, measurable goals and objectives (Bowles, Cunningham, De La Rosa, & Picano, 2007; Conzemius & O’Neill, 2011; Frey & Osterloh, 2001; Gettman, 2008; Hessel, Cortese, & De Croon, 2011; Hofman & Hofman, 2011; Jung, 2007; Lawlor, 2012; Linstrom, 2006; MacLeod, 2012; Pearson, 2012; Piskurich, 2015; van der Grift et al., 2013; Wade, 2009).

Although developed within management, the SMART method is also widely cited within the program planning/evaluation literature (Chen, 2015; Gudda, 2011; Issel, 2014; Knowlton & Phillips, 2013; Mathison, 2005; Patton, 2011; Sharma & Petosa, 2012; Smith, 2010). Moreover, program planning/evaluation guides provided by the Centers for Disease Control and Prevention, the United Way, the W.K. Kellogg Foundation, and the United States Department of Education include the recommendation to use SMART criteria when creating program goals and objectives (Bryan, DiMartino, & Center for Secondary School Redesign, 2010; Centers for Disease Control and Prevention, 2013; Harris and Harvard Family Research Project, 2011; W.K. Kellogg Foundation, 2004). The proliferation of the SMART method in evaluation and non-profit organization guidance supports the contention that SMART is now a mainstream method for developing program goals and objectives.

The benefit of mainstreaming is that a greater number of programs, especially those with limited resources, are able to apply evaluation fundamentals to monitor and make program improvements (Picciotto, 2002; Preskill & Boyle, 2008; Sanders, 2002). This increased evaluation capacity reduces the need for costly external evaluation consultants (Cousins, Goh, Elliott, Aubry, & Gilbert, 2014; Picciotto, 2002). It also enables more programs to meet funders’ evaluation requirements (Stevenson, Florin, Mills, & Andrade, 2002).

However, as is the case with attempting to mainstream any evaluation method, there are many potential unintended consequences (Grudens-Schuck, 2003; Merton & Sztompka, 1996; Picciotto, 2002; Renger, 2006; Williams & Hawkes, 2003). First, many mainstream program evaluation guides present the SMART criteria without an explanation of why or how they should be applied. Thus, users may “blindly” follow the recipe-like method to develop SMART objectives without fully understanding the underlying reasons for applying each SMART criterion. Second, when following a recipe-like formula, writing SMART objectives may become nothing more than a grantsmanship exercise: a necessary box to be checked to fulfill a sponsor’s request for proposal requirements. Hummelbrunner (2010) expressed similar concerns that laypersons following mainstream guidance often do so as a justification rather than a planning exercise.

Finally, and specific to the mainstreaming of SMART objectives, program evaluation guides suggest SMART objectives be written in a single step. On the surface, this may seem reasonable and harmless. However, it is the authors’ contention that there are instances where attempting to satisfy all the SMART criteria in a single step is unrealistic and/or unwise. This method may produce a mechanical approach to writing program evaluation objectives. The following case example demonstrates a situation where a stepwise, rather than simultaneous, application of the SMART criteria was necessary to write meaningful program objectives.

2. Case example

The authors’ need to elaborate on an alternative method for developing SMART objectives arose while working on a self-assessment tool for a cardiac ready communities (CRC) program in a rural Midwest state (Center for Rural Health, 2016). The goal of a CRC (also known as Heart Safe communities) program is to increase survival rates from out-of-hospital cardiac arrest (OHCA) through several community strategies targeting the five links in the American Heart Association’s “chain of survival” (American Heart Association, 2015; Heart Safe Communities, n.d.): 1) recognition of cardiac arrest and activation of the emergency response system, 2) immediate cardiopulmonary resuscitation (CPR), 3) rapid defibrillation, 4) basic and advanced emergency medical services, and 5) advanced life support and post-cardiac arrest care.

The CRC strategies are designed to address one or more of the survival links and may include: community leadership involvement, community awareness campaigns, CPR training, public access to automated external defibrillators (AEDs), emergency medical dispatching, resuscitation protocols for emergency medical services (EMS) and hospital services, and community evaluation (Heart Safe Communities, n.d.; Montana Cardiac Ready Communities, 2015; North Dakota Department of Health, 2016). The state defined the success of each strategy by setting targets to be met within the three-year program timeframe in order for a community to receive official recognition as a cardiac ready community.

Within the state, there were numerous CRCs needing evaluation assistance. Further, given the numerous strategies encompassed in a single CRC initiative, it was not feasible to provide each community with the external evaluation resources needed to track its individual progress toward the set program targets. Therefore, it was decided the best evaluation strategy was to empower participating communities to conduct their own CRC evaluation.

The evaluation strategy consisted of providing participating communities with evaluation tools and technical assistance to enable ongoing self-assessment (Center for Rural Health, 2016). The self-assessment tool included guidance on how to create SMART objectives for each CRC program activity so communities could track progress toward the program targets (Center for Rural Health, 2016). Specifically, the initial draft of the self-assessment guide described how and why to write specific, measurable, achievable, relevant, and timely objectives in line with the SMART criteria suggested by Chen (2015).

The process of writing the guide forced the evaluators to think more deeply about how the SMART criteria would be applied. Explaining how to make the objectives specific, measurable, and relevant was relatively straightforward. For example, one program strategy related to the early-CPR link is community-level CPR training. To meet the specificity criterion, the community needed to detail what was meant by the terms “trained” and “population”.

For instance, “trained” could mean the population is at a minimum trained in hands-only CPR within the last two years, and “population” could be defined as all community members aged 10 and above. To meet the measurable criterion, the number of community members trained in CPR could simply be tracked via CPR course attendance sheets. The objective was relevant because of the research evidence linking change in this essential link in the chain to improved OHCA survival rates (American Heart Association, 2015).
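
As a concrete illustration of the measurable criterion, the sketch below computes the share of the eligible population counted as trained from attendance-sheet records, using the definitions above. All records and population figures are hypothetical.

```python
from datetime import date

# Hypothetical attendance records from CPR course sign-in sheets:
# (member_id, most recent course date). All values are illustrative.
attendance = [
    ("m001", date(2016, 3, 1)),
    ("m002", date(2014, 1, 15)),   # lapsed: trained more than two years ago
    ("m003", date(2015, 11, 30)),
]
eligible_population = 400  # assumed count of members aged 10 and above
as_of = date(2016, 12, 15)

# "Trained" per the specificity criterion: hands-only CPR within two years.
trained = {m for m, d in attendance if (as_of - d).days <= 2 * 365}
share_trained = len(trained) / eligible_population
print(f"{share_trained:.1%} of the eligible population is CPR trained")
```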

However, challenges arose when attempting to explain how to apply the achievable and timely criteria. An achievable objective is one that can be reasonably met with existing resources (Chen, 2015). Thus, whether an objective is achievable depends on having the resources needed to move from the baseline to the desired goal. For example, assume the baseline revealed 20% of the community was CPR trained, but the goal was to have 25% trained. If budgets and training resources such as instructors, training materials, and manikins were limited, then 25% might be an attainable target. Alternatively, if the community had access to ample resources, then a higher target, such as 35% CPR trained, might be achievable.
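
This resource check can be made explicit with simple arithmetic. The sketch below is illustrative only; the population and capacity figures are assumptions, since the article reports none.

```python
# Hypothetical resource check for the achievable criterion: is the proposed
# target reachable with the training capacity actually available?
population = 400       # assumed eligible community members
baseline_share = 0.20  # 20% already CPR trained at baseline
target_share = 0.25    # proposed target

additional_trainees = round((target_share - baseline_share) * population)

# Assumed capacity: 2 instructors x 4 courses/year x 10 seats per course.
seats_per_year = 2 * 4 * 10

print(f"Need {additional_trainees} new trainees; {seats_per_year} seats/year:",
      "achievable" if additional_trainees <= seats_per_year else "not achievable")
```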

Meeting the timely criterion requires objectives to include a reference date for completion. Doing so, according to Chen (2015), stimulates effectiveness. Although the state imposed a three-year time frame, some objectives needed to be completed sooner than others. However, without a baseline assessment, establishing a reasonable timeframe was challenging. For example, it is reasonable to posit that the community would achieve the 25% target sooner if 20% of the population was already trained in CPR than if the baseline was closer to 10%. If the former were the case, the SMART objective could state that by year 2 of the three-year program the community will increase from 20% to 25% the share of community members aged 10 and above trained in at least hands-only CPR.
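
The way the reference date depends on the baseline can be shown with the same arithmetic. Again, the population figure and training pace below are assumptions for illustration.

```python
import math

# Hypothetical timeline estimate for the timely criterion: years needed to
# move from the baseline share to the target at an assumed training pace.
population = 400
net_new_trainees_per_year = 15  # assumed net gain after lapsed certifications

def years_to_target(baseline_share: float, target_share: float) -> int:
    gap = (target_share - baseline_share) * population
    return math.ceil(gap / net_new_trainees_per_year)

print(years_to_target(0.20, 0.25))  # 2 years: "by year 2" is plausible
print(years_to_target(0.10, 0.25))  # 4 years: exceeds the three-year window
```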

These challenges made it clear the self-assessment tool needed to be modified so the CRCs (i) initially apply the specific, measurable, and relevant criteria to their objectives, (ii) then gather baseline data (possible because “measurable” has been defined), and (iii) finally add to the objective’s quality by applying the achievable and timely criteria. This held true for all strategies for each link in the survival chain. The revised self-assessment tool provided more detailed guidance to aid the communities in applying each SMART criterion (Center for Rural Health, 2016). For example, adding descriptions/definitions of all program strategies helped communities add specificity to their SMART objectives. Further, listing available community resources assisted in developing achievable objectives with realistic timeframes. Well-formed SMART objectives are more likely when a formal planning, implementation, and evaluation process such as the Antecedent Target Measurement (ATM) approach is followed with fidelity (Renger & Titcomb, 2002). The revisions to the self-assessment tool better assisted stakeholders in writing program objectives that met all the SMART criteria. However, as reviewers of our work rightly note, additional guidance could be provided for each SMART criterion. Currently, the self-assessment tool is being revisited to see where decision rules could be added. It is a continuous improvement process, and the authors are getting smarter about their SMART objectives guidance.
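
The revised ordering can be summarized compactly. The sketch below encodes the three steps as data; the encoding is an illustration of the sequencing, not part of the published tool.

```python
# A sketch of the stepwise ordering the revised tool enforces: SMART criteria
# are applied in stages rather than in a single pass. Stage names follow the
# three steps described above; the encoding itself is illustrative.
STAGES = [
    ("draft objective",      ["specific", "measurable", "relevant"]),
    ("gather baseline data", []),  # feasible once "measurable" is defined
    ("finalize objective",   ["achievable", "timely"]),
]

def criteria_satisfied(stages_completed: int) -> list:
    """Criteria an objective can legitimately claim after the given stage."""
    return [c for _, cs in STAGES[:stages_completed] for c in cs]

print(criteria_satisfied(1))  # ['specific', 'measurable', 'relevant']
print(criteria_satisfied(3))  # all five SMART criteria
```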

3. Conclusion

The case presentation demonstrates that a uniform, one-step SMART approach may not always result in smart objectives. In our example, the absence of baseline information did not allow for the writing of achievable and timely program objectives. Thus, there was a need for a stepwise approach to creating SMART objectives. The stakeholders first wrote specific, measurable, and relevant objectives and then gathered baseline data. Once the baseline data were collected, the achievable and timely criteria could be applied. While two steps were needed in this context, it is possible to imagine situations where additional steps are needed before all the SMART criteria can be satisfied.

3.1. Lessons learned

Learning from this case, mainstreaming SMART objectives must be done with caution, accounting for users who do not fully understand the underlying reasons for applying each SMART criterion. It is important that future mainstream evaluation manuals/guides delineate between different contexts in their SMART goal/objective guidance, as stakeholders may need to do some homework before being able to satisfy all the SMART criteria. Otherwise, programs may end up with SMART objectives that are not so smart after all.

Acknowledgements

The authors would like to thank Dr. Carlos Rodriguez, Kim Dickman, Skyler Ienuso, Makenzie McPherson, Allyssa Schlosser and Eric Souvannasacd for their insights and feedback.

This work was supported by the Leona M. and Harry B. Helmsley Charitable Trust.

References

American Heart Association (2015). Highlights of the 2015 American Heart Association guidelines update for CPR and ECC. Retrieved 15 December 2016 from https://eccguidelines.heart.org/wp-content/uploads/2015/10/2015-AHA-Guidelines-Highlights-English.pdf.

Bowles, S., Cunningham, C. J., De La Rosa, G. M., & Picano, J. (2007). Coaching leaders in middle and executive management: Goals, performance, buy-in. Leadership & Organization Development Journal, 28(5), 388–408.

Bryan, W., DiMartino, J., & Center for Secondary School Redesign (2010). Writing goals and objectives: A guide for grantees of the smaller learning communities program. United States Department of Education, Smaller Learning Communities Program. Academy for Educational Development. Retrieved 15 December 2016 from https://www2.ed.gov/programs/slcp/slc-wgandobj-book-f.pdf.

Center for Rural Health (2016). Cardiac ready community implementation and evaluation guidelines. School of Medicine and Health Sciences, University of North Dakota. Unpublished document.

Centers for Disease Control and Prevention (2013). Evaluation guide: Writing SMART objectives. Division for Heart Disease and Stroke Prevention, State Heart Disease and Stroke Prevention Program. Retrieved 15 December 2016 from http://www.cdc.gov/dhdsp/programs/spha/evaluation_guides/docs/smart_objectives.pdf.

Chen, H. T. (2015). Practical program evaluation: Theory-driven evaluation and the integrated evaluation perspective, 2nd ed. Thousand Oaks, CA: Sage.

Conzemius, A., & O’Neill, J. (2011). The power of SMART goals: Using goals to improve student learning. Solution Tree Press.

Cousins, J. B., Goh, S. C., Elliott, C., Aubry, T., & Gilbert, N. (2014). Government and voluntary sector differences in organizational capacity to do and use evaluation. Evaluation and Program Planning, 44, 1–13.

Doran, G. T. (1981). There’s a S.M.A.R.T. way to write management’s goals and objectives. Management Review, 70(11), 35.

Frey, B. S., & Osterloh, M. (Eds.). (2001). Successful management by motivation: Balancing intrinsic and extrinsic incentives. Springer Science & Business Media.

Gettman, H. J. (2008). Executive coaching as a developmental experience: A framework and measure of coaching dimensions. ProQuest.

Grudens-Schuck, N. (2003). The rigidity and comfort of habits: A cultural and philosophical analysis of the ups and downs of mainstreaming evaluation. New Directions for Evaluation, 2003(99), 23–32.

Gudda, P. (2011). A guide to project monitoring & evaluation. Bloomington, IN: AuthorHouse.

Harris, E., & Harvard Family Research Project (2011). Afterschool evaluation 101: How to evaluate an expanded learning program. President and Fellows of Harvard College. Retrieved 15 December 2016 from http://www.unitedway.org/our-impact/focus/education/out-of-school-time/tools.

Heart Safe Communities (n.d.). What is HEARTSafe? Retrieved 15 December 2016 from http://heartsafe-community.org/.

Hessel, V., Cortese, B., & De Croon, M. H. J. M. (2011). Novel process windows—Concept, proposition and evaluation methodology, and intensified superheated processing. Chemical Engineering Science, 66(7), 1426–1448.

Hofman, W. A., & Hofman, R. H. (2011). Smart management in effective schools: Effective management configurations in general and vocational education in the Netherlands. Educational Administration Quarterly, 47(4), 620–645. http://dx.doi.org/10.1177/0013161X11400186.

Hummelbrunner, R. (2010). Beyond logframe: Critique, variations and alternatives. In N. Fujita (Ed.), Beyond logframe: Using systems concepts in evaluation. Tokyo: Foundation for Advanced Studies on International Development.

Issel, L. M. (2014). Health program planning and evaluation: A practical, systematic approach for community health, 3rd ed. Jones and Bartlett.

Jung, L. A. (2007). Writing SMART objectives and strategies that fit the ROUTINE. Teaching Exceptional Children, 39(4), 54–58.

W.K. Kellogg Foundation (2004). Logic Model Development Guide. Retrieved 15 December 2016 from https://www.wkkf.org/resource-directory/resource/ 2006/02/wk-kellogg-foundation-logic-model-development-guide.

Knowlton, L. W., & Phillips, C. C. (2013). The logic model guidebook: Better strategies for great results, 2nd ed. Sage.

Lawlor, K. B. (2012). SMART goals: How the application of SMART goals can contribute to achievement of student learning outcomes. Developments in Business Simulation and Experiential Learning, 39.

Linstrom, J. (2006). Project objectives help you work SMART-er. Fire Chief, 50(4), 26–28.

MacLeod, L. (2012). Making SMART goals smarter. Physician Executive, 38(2), 68–72.

Mathison, S. (2005). Encyclopedia of evaluation. Sage Publications.

Merton, R. K., & Sztompka, P. (1996). On social structure and science. University of Chicago Press.

Montana Cardiac Ready Communities (2015). Cardiac ready community application and toolkit. Retrieved 15 December 2016 from http://dphhs.mt.gov/publichealth/EMSTS/cardiacready/communityapp.

North Dakota Department of Health (2016). Cardiac ready community designation guidelines. Retrieved 15 December 2016 from https://www.health.nd.gov/media/1382/nd-crc-criteria.pdf.

Patton, M. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.

Pearson, E. S. (2012). Goal setting as a health behavior change strategy in overweight and obese adults: A systematic literature review examining intervention components. Patient Education and Counseling, 87(1), 32–42.

Picciotto, R. (2002). The logic of mainstreaming a development evaluation perspective. Evaluation, 8(3), 322–339.

Piskurich, G. M. (2015). Rapid instructional design: Learning ID fast and right. John Wiley & Sons.

Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459.

Renger, R., & Titcomb, A. (2002). A three-step approach to teaching logic models. American Journal of Evaluation, 23(4), 493–503.

Renger, R. (2006). Consequences to federal programs when the logic-modeling process is not followed with fidelity. American Journal of Evaluation, 27(4), 452–463.

Sanders, J. R. (2002). Presidential address: On mainstreaming evaluation. American Journal of Evaluation, 23(3), 253–259.

Sharma, M., & Petosa, R. L. (2012). Measurement and evaluation for health educators. Jones & Bartlett Publishers.

Smith, M. J. (2010). Handbook of program evaluation for social work and health professionals. Oxford University Press.

Stevenson, J. F., Florin, P., Mills, D. S., & Andrade, M. (2002). Building evaluation capacity in human service organizations: A case study. Evaluation and Program Planning, 25(3), 233–243.

van der Grift, E. A., van der Ree, R., Fahrig, L., Findlay, S., Houlahan, J., Jaeger, J. A., . . . Olson, L. (2013). Evaluating the effectiveness of road mitigation measures. Biodiversity and Conservation, 22(2), 425–448.

Wade, D. T. (2009). Goal setting in rehabilitation: An overview of what, why and how. Clinical Rehabilitation, 23(4), 291–295.

Williams, D. D., & Hawkes, M. L. (2003). Issues and practices related to mainstreaming evaluation: Where do we flow from here? New Directions for Evaluation, 2003(99), 63–83.