Write an assignment about “Program Evaluation”


  • Issues Related to Population
  • Increasing Participation
  • Seeking More Generalizable Results



Conducting Qualitative Research

Program evaluation procedures can help you identify the needs of a population as you develop programs. They can also help you identify portions of the population who are not accessing services once a program is implemented. In this assignment, you examine issues related to access to services and how program evaluation procedures can be used to address those issues, and you evaluate a meta-analysis of marriage and relationship education programs.

The assignment: 

Review the article, “Does Marriage and Relationship Education Work? A Meta-Analytic Study,” and provide a brief summary of the issues related to disadvantaged and ethnically diverse populations.

How would you as a researcher increase participation of disadvantaged and ethnically diverse populations in marriage and relationship education programs?

What types of measures and data collection methods would you employ to obtain more reliable and generalizable results? (I would use grounded designs, with data collection via interview, observation, record review, or a combination of these.)

ARTICLE: “Does Marriage and Relationship Education Work? A Meta-Analytic Study”

The science of prevention of human problems continues to grow and show promise ( Flay et al., 2005; Rishel, 2007). In addition to the prevention of individual mental health problems, prevention efforts also include educational interventions to help romantic couples form and sustain healthy marriages and relationships. Marriage and relationship education (MRE) consists of two general components. The primary emphasis has been on developing better communication and problem-solving skills that are core to healthy, stable relationships, such as diminishing criticism and contempt and improving listening skills ( Gottman & Silver, 1999). Couples learn about the importance of these skills and usually practice them with some instructor guidance. A second component of MRE is didactic presentation of information that correlates with marital quality, such as aligning expectations and managing finances. Couples learn about and discuss these issues and often make specific plans for dealing with them more effectively. Often within this component are discussions about important virtues related to relationship quality, such as commitment and forgiveness ( Fincham, Stanley, & Beach, 2007). While some MRE programs emphasize one component to the exclusion of the other, most combine the two, and most of these give more emphasis to communication skills training. While many couple therapists also provide MRE services, MRE is distinct from couple therapy. MRE does not provide intensive, one-on-one work between participants and professionals on specific personal problems, as therapy does. MRE provides “upstream” educational interventions to groups of couples and individuals before problems become too serious and entrenched ( J. H. Larson, 2004).

Over the last decade, MRE has grown beyond programs offered by private professional and lay practitioners to become a tool of public policy. For example, U.S. federal policy makers recently have supported MRE as a way to help couples—especially lower-income couples—form and sustain healthy marriages as an additional tool to reduce poverty and increase children's well-being ( Administration for Children and Families, 2007; Dion & Hawkins, 2008). In 2006, federal legislation allocated $500 million over 5 years to support promising MRE programs and initiatives targeted primarily at lower-income couples. (See http://www.acf.hhs.gov/programs/ofa/hmabstracts/index.htm for a listing of funded programs.) In addition, a growing number of states have also allocated significant public funds to support MRE efforts ( Ooms, Bouchet, & Parke, 2004). For instance, Texas has dedicated more than $10 million a year to support MRE; Utah has dedicated $750,000 a year. With greater public support for MRE, however, comes greater public scrutiny ( Huston & Melz, 2004).

Scholars have conducted many evaluation studies of various MRE programs over the past three decades ( Halford, 2004; Halford, Markman, Kline, & Stanley, 2003). Previous meta-analytic reviews of MRE research have generally shown it is effective in increasing relationship quality and communication skills ( Butler & Wampler, 1999; Carroll & Doherty, 2003; Giblin, Sprenkle, & Sheehan, 1985; Hahlweg & Markman, 1988; Hight, 2000; Reardon-Anderson, Stagner, Macomber, & Murray, 2005). However, these studies have been limited in their conclusions. The first meta-analysis of MRE is more than 25 years old ( Giblin et al., 1985). The most recent meta-analysis ( Reardon-Anderson et al., 2005) did not include quasi-experimental studies, studies that may be more representative of MRE as it is practiced under normal field conditions ( Shadish, Matt, Navarro, & Phillips, 2000). Two studies reviewed only a narrow band of the marriage education spectrum—premarital education ( Carroll & Doherty, 2003; Hahlweg & Markman, 1988). Another focused only on one specific program—Couples Communication ( Butler & Wampler, 1999). Two meta-analyses did not distinguish between therapy and educational interventions for couples (i.e., Giblin et al., 1985; Reardon-Anderson et al., 2005). One meta-analysis was an unpublished dissertation ( Hight, 2000) that did not differentiate between relationship quality and communication skills outcomes, although it was the only meta-analysis that gave significant attention to unpublished studies. Moreover, moderator variables important to practitioners and policy makers, such as gender differences, ethnic/racial diversity, and economic diversity of participants, have not been investigated extensively.

Our meta-analytic study addresses these limitations. Our primary aim is to address the following question: Does the overall evidence suggest that MRE can help couples form and sustain healthy relationships? Specifically, we evaluate the efficacy of MRE for relationship quality and communication skills at both immediate postassessment and follow-up assessment. We also explore several important methodological, sample, and intervention variables that may moderate the effects of MRE.


Selection and Inclusion Criteria

Psychoeducational intervention

In the current meta-analysis, all studies assessed the effects of a psychoeducational intervention that included improving couple relationships or communication skills as a goal. Therapeutic interventions were excluded to provide a clear picture of the effects of psychoeducational intervention. Therapeutic interventions generally have stronger effects than do psychoeducational interventions ( Shadish & Baldwin, 2003). Thus, we excluded studies that had set curricula but were delivered by a therapist to a couple as well as programs that were essentially group therapy (e.g., Worthington et al., 1997). Studies that focused on improving sexual functioning were excluded (e.g., Cooper & Stoltenberg, 1987).

Reporting of outcome data

We included studies that reported sufficient information to calculate effect sizes for the specified outcomes. When studies did not report sufficient information to calculate effect sizes, we contacted the authors where possible for more information and used methods for “rehabilitating” studies outlined by Lipsey and Wilson (2001). Six studies (5%) were dropped because we could not calculate an effect size.

Outcome measures

We coded measures of relationship quality that assessed various aspects of relationships such as areas of agreement–disagreement and conflict, time together, and areas of satisfaction–dissatisfaction. Some measures simply asked about overall relationship satisfaction. We included these measures as a subset of the broader construct of relationship quality. Most studies ( k = 112) used standardized measures, such as the Dyadic Adjustment Scale ( Spanier, 1976) or the Marital Adjustment Test ( Locke & Wallace, 1959). Communication skills were reported in numerous ways, including global assessments, positive and negative communication, positive problem solving, and negative problem solving, with both self-report and observational measures employed. We combined all these measures into a single communication outcome indicating a global intervention effect on communication skills.

We examined both immediate postassessments and follow-up assessments, reporting these separately to explore deterioration (or gain) over time. Timing of follow-up for experimental studies ranged from 1 to 60 months; 3- and 6-month follow-ups were most common. Timing of follow-up for quasi-experimental studies ranged from 1 to 36 months; again, 3- and 6-month follow-ups were most common. When multiple follow-up assessments were available, we chose the assessment closest to 12 months. Only a handful ( k = 7) of studies employed follow-up assessments greater than 12 months. For instance, Schulz, Cowan, and Cowan (2006) evaluated the effects of their transition to parenthood MRE intervention at 6-, 18-, 42-, and 60-months postpartum. Although we coded the follow-up closest to 12 months to allow for more deterioration (or gain) of effects, note that most studies had only one follow-up and that assessment usually occurred between 3 and 6 months, not at a more distal 12 months.

Methodological design

Our primary interest is the efficacy of MRE, which is addressed by effect sizes representing the difference between intervention and no intervention. Thus, we included only studies that used control groups. This means we did not include a number of “horse race” studies comparing one intervention with another. Some studies were conducted with classic no-intervention control groups ( k = 38), but most used “wait list” control groups ( k = 73). We chose to examine both experimental and quasi-experimental studies because quasi-experimental studies may be more representative of MRE under normal field conditions ( Shadish et al., 2000). Experimental studies compared groups randomly assigned to an MRE-treatment or a control group; quasi-experimental studies included a no-treatment control group, but random assignment was not assured. (A full list of MRE studies reviewed but not included in this study, including treatment A versus treatment B studies, one group/pre-post design studies, uncodable studies, and studies with duplicate data, is available on request.)

Publication status

We searched extensively for both published and unpublished studies so that we could address publication bias directly. Studies that are not published may be systematically different than published studies, including differences in the intervention effect size. Indeed, meta-analyses that ignore unpublished studies likely overstate the true effect size (e.g., Vevea & Woods, 2005). More than 60% of the studies in this meta-analysis were unpublished reports, primarily dissertations. Clinical graduate students conducted the large majority of these unpublished dissertation studies. Some developed their own intervention programs, but most employed well-known programs, such as Couples Communication. The studies generally were well designed but usually suffered from lack of statistical power due to small sample sizes. We suspect that the studies were unpublished primarily due to a lack of statistical power to produce significant results combined with authorship by graduate students who may have been headed toward clinical rather than academic positions.

Search Procedure

We searched for MRE research conducted over the last three decades (since 1975), when the pace of work in this field began to pick up, through 2006, when substantial federal funding first targeted support for MRE. First, we reviewed 502 studies identified by a search conducted by the Urban Institute for their meta-analysis of MRE ( Reardon-Anderson et al., 2005). Second, we searched bibliographies from other meta-analyses and literature reviews. Third, we searched PsycINFO for more recent work (since the Urban Institute search in 2003). Fourth, we searched Dissertation Abstracts International for unpublished work. Finally, we made extensive efforts over the course of 2 years at national conferences and through e-mail to contact researchers and practitioners to find unpublished (and in-press) reports. These search procedures produced 86 codable reports containing 117 independent studies.

MRE Participants Summary

Samples in the 117 studies consisted mostly of White, middle-class, married couples in general enrichment programs who were not experiencing significant relationship distress. Only 7 studies had more than 25% racial/ethnic diversity in their samples; only 4 of these 7 studies had samples that were predominantly non-White. Similarly, only 2 studies had primarily low-income samples; another handful of studies had samples with at least some low-income couples. (Almost all of these studies came from unpublished dissertations.) There were no reports of homosexual couples in any of these studies. In terms of relationship status, the study samples consisted overwhelmingly of married couples; the number of unmarried, cohabiting couples, when reported, was negligible in enrichment studies. (In programs targeting engaged couples there likely were more cohabiting couples, but this information was seldom provided.) In terms of life-course timing, 3 studies targeted single high-school students, 16 targeted engaged or seriously dating couples, and 10 targeted couples at the transition to parenthood. The remaining 75% of studies were general marriage enrichment programs (although these samples sometimes included a few engaged or cohabiting couples). There was more variation for relationship length (when reported); the average relationship length was 0–2 years for 18 studies, 3–5 years for 18 studies, 6–10 years for 32 studies, 11–15 years for 30 studies, and 16–20 years for 11 studies. Only about half ( k = 61) of the studies reported the relationship distress level of the samples. From these reports, there appear to be negligible numbers of distressed couples in the samples of most studies. Eight studies reported that 50%–89% of couples in the samples were distressed; 2 studies reported that 90%–100% of couples in the samples were distressed.

Computation and Reporting of Effect Sizes

The effect size statistic employed is the standardized mean group difference. We adjusted each effect size by using Hedges' (1981) correction for small sample bias. All effect sizes were weighted by the inverse variance (squared standard error) and averaged to create the overall effect size. We employed random effects estimates, as opposed to fixed effects. The random effects model allows for the possibility that differences in effect sizes from study to study are associated not only with participant-level sampling error but also with variations such as study and intervention methods ( Lipsey & Wilson, 2001). In addition, the random effects model allows researchers to generalize beyond the studies included in the meta-analysis ( Hedges & Vevea, 1998). We aggregated effect sizes to the study level because many studies included multiple outcomes. We used Biostat's Comprehensive Meta Analysis II to perform these calculations.
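The computations described above can be sketched concretely. The following Python example illustrates the standardized mean difference with Hedges' (1981) small-sample correction and an inverse-variance weighted average. Note the caveats: the function names are illustrative (not from the article or any software the authors used), and the weighted mean shown is a simple fixed-effect version for clarity; the authors used a random-effects model, which adds a between-study variance component to each study's weight.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with Hedges' (1981) small-sample correction."""
    # Pooled standard deviation across treatment and control groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp
    # Small-sample bias correction factor (Hedges' J)
    j = 1 - 3 / (4 * (n_t + n_c) - 9)
    return j * d

def variance_g(g, n_t, n_c):
    """Approximate sampling variance of Hedges' g (squared standard error)."""
    return (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))

def inverse_variance_mean(effects, variances):
    """Weighted mean effect size, with weights = 1 / variance (fixed-effect form)."""
    weights = [1 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

# Example: treatment group M = 10, SD = 4, n = 20; control M = 8, SD = 4, n = 20
g = hedges_g(10, 8, 4, 4, 20, 20)   # d = 0.5 before correction, slightly smaller after
v = variance_g(g, 20, 20)
```

Precise studies get larger weights in the pooled average, which is why the small unpublished dissertations described later contribute less to the overall estimate than their raw count would suggest.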

For technical and conceptual reasons, it was wise to conduct analyses separately for experimental and quasi-experimental studies ( Lipsey & Wilson, 2001). Often meta-analyses include only experimental studies because they provide the best evidence of efficacy. However, this also has the potential side effect of excluding significant numbers of studies that may yield valuable information. In essence, we provide a “benchmark” by analyzing experimental studies first. Then, as suggested by Shadish and Ragsdale (1996), we compare these results with those from quasi-experimental studies. Moreover, rather than combining immediate postassessments and later follow-up assessments, we computed effect sizes separately by time to examine potential deterioration (or gain) in effects.

By analyzing the data in these ways, we encountered the challenge of dealing with a set of effect sizes rather than a single estimate. That is, we generated a set of four effect sizes for each outcome: 2 (design: experimental/quasi-experimental) × 2 (time points: postassessment/follow-up). In addition, we wanted to make a more direct test of deterioration (or gain) of effects. The most direct test of effect size stability from postassessment to follow-up requires limiting our analyses only to those studies that included both an immediate postassessment and a follow-up assessment. Some studies contributed effect sizes only at postassessment with no follow-up, some had no immediate postassessment but did have a follow-up, and some studies had both. The first set of analyses described above compares postassessment and follow-up effects across studies, confounding real differences between postassessment and follow-up effects with potential between-study differences. Within-study comparisons that examine only those studies that have both a postassessment and a follow-up do not have this problem. Our overall challenge, then, was to interpret the pattern of effect sizes, as well as individual effects.

Program Evaluation

Provide a brief introduction to your paper here. The title serves as your introductory heading; there is no need for a heading titled “Introduction.” For this assignment, you will examine the program evaluation research article by Hawkins, Blanchard, Baldwin, and Fawcett (2008), “Does Marriage and Relationship Education Work? A Meta-Analytic Study,” which is provided in the Learning Resources. Using the Hawkins et al. article, you will complete the following sections.

Issues Related to Population

Using the Hawkins et al. article, provide a brief summary of the issues related to disadvantaged and ethnically diverse populations.

Increasing Participation

Briefly describe how you, as a researcher, would increase participation of disadvantaged and ethnically diverse populations in marriage and relationship education programs. Remember to support your points with scholarly sources, without relying on direct quotations to make your points for you.

Seeking More Generalizable Results

Explain the types of measures and data collection methods you would employ to obtain more reliable and generalizable results. Be sure to support your points with scholarly sources.


Your conclusion section should recap the major points you have made in your work. However, perhaps more importantly, you should interpret what you have written and explain what the bigger picture is. Remember, your paper should be two pages, not counting your title page and reference page. Please do not exceed two pages of content.

