PURPOSE: The purpose of this exercise is to conduct a detailed, critical evaluation of the research design, methods, and analysis of a study published in a peer-reviewed journal.
Valentine, K., Thomson, C., & Antcliff, G. (2009). Early childhood services and support for vulnerable families: Lessons from the Benevolent Society's Partnerships In Early
Childhood Program. Australian Journal of Social Issues, 44(2), 195-213.
1. Title Evaluation
Yes, it is very specific.
Do subtitles, if present, provide important information regarding the research?
Yes: they outline the basic components of the article, although they do not label all of the conventionally expected components of a research article, such as a literature review.
Are the main variables expressed in the title?
Are the terms in the title easily understood by most people?
To some extent: the general subject matter is clear, although it is not clear what is meant by "vulnerable families," nor is the Partnerships In Early Childhood Program (PIEC) widely known.
5) Does the title avoid any reference to the study's results?
6) Overall, is this a good title? Why or why not?
Mediocre, given that the key terms in the title are not defined.
2. Ethical Evaluation
7) Are the steps the researcher took to honor ethical responsibilities to individuals clear? Are they appropriate? Are they enough?
Yes. The program in question was designed to help at-risk families so the families benefited from the study.
8) If there were any findings (based on your readings of tables or other means of data presentation) that refuted the researcher's hypothesis, did he address these findings?
No, overall the positive aspects of the program were confirmed.
9) If any results were unexpected, did the researcher discuss any explanations for the unexpected effects?
10) Did the researcher adequately acknowledge the limitations of the research?
Not entirely: relatively broad implications were drawn from the study about the benefits of the program.
11) Overall, has the researcher adequately fulfilled his ethical obligations?
Regarding confidentiality and respect for the participants, yes.
3. Literature Review
12) Is the material presented in the literature review relevant to your research interests?
13) Is the special problem area identified in the first paragraph or two of the report?
Yes, it is clearly stated: "access to early childhood services is widely considered to be an important means of supporting vulnerable children and families" (Valentine, Thomson, & Antcliff, 2009).
14) Does the researcher establish the importance of the research problem?
Yes -- the health and welfare of families and children are at stake.
15) Has the researcher been appropriately selective in deciding what studies to include in the literature review?
The researcher focuses mainly on comparing the Australian program under study with existing similar programs in the U.S.
16) Is the research cited recent?
Yes, from 2005-2006, although the U.S. program (Head Start) dates back to the 1960s.
17) Is the literature review critical?
In general, the research accepts previous data confirming the success of early childhood and family intervention programs.
18) Is the researcher clear as to what are research, theory and opinion?
Yes, most of the evidence was numerical and data-based.
19) Overall, do you think this is an adequate literature review? Why or why not?
Not entirely: only a very small number of other programs were analyzed, and a closer examination of what is effective in early childhood education programs would have been helpful.
4. Operationalization and Measurement
20) Is the conceptualization suitably specific?
"The components include an impact evaluation of outcomes and a process study of implementation" of the program (Valentine, Thomson, & Antcliff, 2009).
21) Are the definitions productive?
Yes: they are both formative and summative in nature.
22) How many different dimensions are being measured at once?
Two: successful outcomes and successful processes.
23) Are the various dimensions sufficient?
Yes, although what is meant by "success" is not clearly defined.
24) Are the actual questions (or a sample of them) provided?
25) Is the response format clear, or, when not already clear, does the researcher provide information on the response format? Is there any information on restrictions in respondents' responses?
There were no restrictions: the interviews used were open-ended.
26) If the researcher is using a published instrument, does he or she cite sources where additional information can be found?
27) Has the researcher avoided overstating the preciseness of the measurement?
The researcher admits that the interviews were semi-structured and informal in nature.
28) Does the researcher provide some measure of reliability? What type of reliability is established? Do the measures indicate adequate reliability for your purposes?
There is no measure of reliability: the study is qualitative.
29) Does the research provide some measure of validity? What measures of validity are presented and are they adequate for your purposes?
30) Overall, is the measurement appropriate and adequate given the research purpose?
Given the small sample size, the narrow scope, and the lack of formal measurement, it is very difficult to draw broader generalizations from the evidence provided by the researchers.
5. Sample Strategy
31) Does the research goal lend itself to generalization? Is the broad sampling method appropriate for the research goal?
No, the sample is very narrow and program-specific.
32) Does the researcher provide information regarding the study population? The sample?
A "total of 43 interviews were conducted with representatives of partner organisations, the Benevolent Society and PIEC staff (n=11); with EEC directors (n=6); with EEC staff (n=16); and with families (n=10). EEC and parent interviews were held in August and key personnel interviews in November and December" (Valentine, Thomson, & Antcliff, 2009).
33) Is the exact sampling method (e.g. simple random, purposive) specified? Remember, it is not sufficient for a researcher to simply state that a sample was selected 'randomly.'
All representatives of the organization were interviewed, as were some parents, although the exact number of parents is not clear from the presentation in the article.
34) Is the sample size sufficient, given the research goals, the degree of accuracy the researcher desires, and the nature of the population studied? Given the nature of the research, is the sample size sufficient?
For the purposes of studying the program alone, the small sample size is justified; however, the lack of specificity about the number of parents interviewed is a troubling gap in the research.
35) If the researcher uses a probability sample, does he or she generalize the findings to the appropriate population? If the researcher uses a non-probability sample, does he or she refrain from generalizing to a wider population?
There is generalization about the success of this particular program to other, similar programs despite the limited sampling.
36) Overall, is the sampling appropriate?
No, not given the broadness of the conclusions drawn.
10. Evaluation Research
67) What is the purpose of the evaluation presented?
To support interventions like PIEC: "the challenges posed by implementing PIEC should also be seen in the context of its potential benefits. In addition to the promising provisional results of the evaluation, there are a number of characteristics of the intervention that indicate its potential" (Valentine, Thomson, & Antcliff, 2009).
68) Is the nature of the program described in detail?
Yes: it is an "attachment-based intervention" (Valentine, Thomson, & Antcliff, 2009).
69) Are the goals presented and can the goals that the author presents be evaluated?
The goals are somewhat vague: the study suggests that the success of this program indicates that these types of interventions will be successful generally, without focusing on any particular component of the format.
70) What type of observation method is used? Is it appropriate, given the real-life restrictions of evaluation research?
Interviews. These seem appropriate, since they allow program directors to provide information about their experience.
71) Is a control group used? If so, how has the researcher tried to show that it is equivalent to the experimental group? If not, does the researcher adequately explain its…