In experimental or hypothesis-testing research, the methods section should provide enough detail for the research to be replicated, and should reassure the reader that you used appropriate methods to obtain valid, generalisable results. This web page uses an example from the Health Sciences to step through the elements within the research design section of a research report.
The methods section of a scientific report usually covers three main areas: materials, procedures and experimental rationale. Each of these sections should include enough detail so that others can duplicate your research and (hopefully) obtain similar results. The methods section may contain references to other people's methods, but it should not contain any results.
The materials section includes information about subjects or participants in the study, equipment used or variables examined, interventions and treatments or chemicals used and their preparation.
The choice of subjects for any study is critical, as poor choices can lead to false or misleading outcomes. For example, if you examined the role of soy supplements in reducing hot flushes in post-menopausal women, then the subjects need to be post-menopausal women. This may sound obvious, but you need to consider how menopause will be confirmed: for example, by participants' own assessment or by setting predefined blood levels of luteinising and follicle stimulating hormones. Information regarding the inclusion and exclusion criteria for subjects must also be included, and it needs to be explicit and justifiable. In the example used above, the issues to be addressed would include how menopausal status is confirmed and whether other conditions or treatments affecting hot flushes exclude participation.
This section also provides the justification for the sample size; use a power analysis if possible. A power analysis gives the minimum number of subjects required to detect an effect while keeping the risk of a type II error (false negative) acceptably low. Without this minimum sample size the study is unlikely to be worthwhile: even if the hypothesis is true, the results are unlikely to reach statistical significance.
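The calculation behind a simple two-group power analysis can be sketched as follows. This is a minimal illustration using the normal approximation; the effect size, significance level and power values are conventional defaults for illustration, not values from the soy study.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Minimum subjects per group for a two-sample comparison of means,
    using the normal approximation: n = 2 * ((z_{1-a/2} + z_{1-b}) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # power = 1 - P(type II error)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium standardised effect (Cohen's d = 0.5), 5% significance, 80% power:
print(sample_size_per_group(0.5))  # 63 per group
```

Note how the required sample size grows rapidly as the expected effect shrinks; this is why the expected treatment difference must be stated and justified before recruitment begins.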
The variables examined are the interventions made (independent variables) and the outcomes measured (dependent variables). For the example given above, the number and frequency of hot flushes over a defined time period would be the dependent variable and the soy tablets would be the independent variable. A description of each variable is required. A description of the dependent variables in the hypothetical study would outline how the hot flushes were measured: either a subjective account of the number and severity of episodes using a questionnaire, or an objective measure of sweating using skin conductance recordings. If the latter were used, then details about the equipment, its location on the subject's body, and how it operates must be included in the text. Information concerning whether skin conductance was measured 24 hours per day or nightly, how often, and at what intervals should also be included.
The intervention is what you administered; it may be a drug treatment, cognitive behavioral therapy, occupational therapy, physiotherapy exercises or some type of surgical procedure. In the soy treatment for menopause example, the intervention would be soy tablets compared with a matching placebo tablet (also known as the control). In that case, the formulation of the tablets, dose, frequency and length of the intervention all need to be described in detail and, where appropriate, justified.
This section of the report tells the reader how the research will be conducted. It is a description of the study design and the procedures used. In some instances, the procedures can be followed more easily if they are presented in a flow chart or table as shown below.
Procedures - Study description for use of soy tablets in menopause trial.
| Time | Procedure |
|---|---|
| Recruitment | 240 subjects give informed consent |
| -2 to 0 weeks | 2-week wash-in phase with placebo |
| Week 0 | 220 subjects randomized to one of 2 groups (soy or placebo), n = 110 per group |
| Day 1 | Soy 25 mg or placebo twice daily for 12 weeks; measure sweating and temperature using conductance meter; symptom checklist with questionnaire |
| Week 6 | Measure sweating and temperature using conductance meter; symptom checklist with questionnaire |
| Week 12 | Measure sweating and temperature using conductance meter; symptom checklist with questionnaire |
| Week 13 | Break randomization code, analyse data |
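The Week 0 allocation step could be sketched in code as below. This is a hypothetical illustration (the subject IDs and seed are invented); real trials use concealed, often stratified, randomization services rather than a simple shuffle.

```python
import random

def randomize(subject_ids, seed=2024):
    """Randomly allocate subjects to two equal groups (soy vs placebo).
    A fixed seed makes the allocation reproducible for auditing."""
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return {"soy": ids[:half], "placebo": ids[half:]}

groups = randomize(range(1, 221))  # 220 subjects, as in the schedule above
print(len(groups["soy"]), len(groups["placebo"]))  # 110 110
```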
When describing the procedures you must provide enough information for other researchers to repeat your study. If a well-established methodology has been strictly followed, you do not need to include all of the details; simply provide a reference to the original method so that others can refer to it. For example: Body temperature was measured by radiotelemetry using the method developed by Bloggs (1980).

However, if you have adapted someone else's method, then any modifications must be fully described and justified. For example: Body temperature was measured by radiotelemetry using a similar method to that developed by Bloggs (1980). In the present study the radiotransmitter was placed on the skin and not in the peritoneal cavity. This modification was considered necessary because skin temperature decreases as core temperature increases (Hall 2001).

If a totally new method has been devised, for example a new method of measuring body temperature, then it has to be described, justified and validated. That is, you would need to provide evidence that the new method measures body temperature accurately and reliably by comparing it with an accredited method or instrument, for example a National Association of Testing Authorities (NATA) certified thermometer.
For a double-blind, placebo-controlled, randomized clinical trial, such as the hypothetical soy trial in post-menopausal women, information regarding randomization procedures, masking of treatments or interventions, blinding of assessors, and expected treatment differences is also included in the methodology. Finally, the tests used for establishing statistical significance need to be described, including the name(s) of any software packages used. The statistical tests must be appropriate comparisons, not just those that favour the results. For example, if you wish to show the size of an effect of soy treatment on hot flushes, the correct comparison is between soy and placebo, giving the mean difference with confidence intervals. Other comparisons, such as improvements from baseline, may look impressive, but if people taking placebo also improved then these analyses do not tell the reader anything meaningful.
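The soy-versus-placebo comparison described above can be sketched as a mean difference with a confidence interval. The data below are invented for illustration, and the interval uses a simple normal approximation (a t-interval would be preferable for small samples).

```python
from math import sqrt
from statistics import mean, variance

def mean_difference_ci(a, b, z=1.96):
    """Mean difference between two groups with an approximate 95% CI
    (normal approximation with unpooled sample variances)."""
    diff = mean(a) - mean(b)
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return diff, (diff - z * se, diff + z * se)

# Hypothetical weekly hot-flush counts (invented, not trial data):
soy = [5, 6, 4, 7, 5, 4, 6, 5]
placebo = [8, 9, 7, 10, 8, 9, 7, 8]
diff, (low, high) = mean_difference_ci(soy, placebo)
print(f"mean difference {diff:.1f}, 95% CI ({low:.2f}, {high:.2f})")
# Here the interval excludes zero, so the difference is unlikely to be chance.
```

Reporting the interval, rather than only a p-value, tells the reader both the size of the treatment effect and the precision with which it was estimated.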
This is the section where you justify what you did and your choice of methods. This justification is necessary because some methods are better than others. That is, good methods are valid and reliable.
Validity refers to the relevance and accuracy of what is measured. There are two broad types of validity: internal and external. Internal validity refers to whether the tests accurately measure what they were designed to measure. External validity refers to the extent to which your findings can be generalised to other people, situations, or times (Trochim 2000).
Reliability or precision refers to the ability to repeatedly get the same result with the same instrument, regardless of the assessor. This result may not necessarily reflect the true mean, but it is consistent.
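Reliability as precision can be illustrated numerically: the spread of repeated readings from a single instrument can be summarised with a coefficient of variation. The readings below are invented for illustration.

```python
from statistics import mean, stdev

def coefficient_of_variation(readings):
    """Relative spread of repeated measurements with one instrument;
    a small value indicates high reliability (precision)."""
    return stdev(readings) / mean(readings)

# Hypothetical repeated skin-temperature readings from one thermometer:
readings = [36.5, 36.6, 36.4, 36.5, 36.6]
cv = coefficient_of_variation(readings)
print(f"{cv:.2%}")  # well under 1%: consistent, though not necessarily accurate
```

A reliably low spread says nothing about accuracy: a miscalibrated thermometer could return these same tightly clustered readings while being consistently wrong, which is exactly the distinction between reliability and validity.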
Validity and reliability are the goals of any scientific research, but reliability is the limiting factor in determining validity.
Once you have determined that your methods are both reliable and valid then the rationale for your experimental procedures needs to be explained. For example, in the hypothetical soy study we would need to describe how skin conductance is a measure of sweating and why sweating is an indirect measure of hot flushes. Similarly, the choice of questionnaire used to evaluate menopausal symptoms would have to be justified. In scientific research, the usual practice is to use a questionnaire developed and validated by someone prominent in the field. However, if you modify established procedures in any way, then the modifications must be described and justified.
In the field of Health Sciences, it is common practice to use more than one method to assess similar outcomes. For example, objective and subjective measures of signs and symptoms are often used to determine any benefit from a particular intervention. Objective measures (for example sweating, skin temperature, blood tests) are viewed as more rigorous evidence for the efficacy of an intervention, being less likely to be influenced by factors such as the participants' mood or their desire for the intervention to succeed. However, subjective measures, such as the participants' own assessment of the intervention, provide different but equally important information. There is little value in a wonderfully effective drug for the treatment of mouth ulcers if the side effects are worse than the ulcers. Hence, different methods may yield different types of information, or the same information from a different perspective (for example, physician versus subject's assessment). When more than one method is used to assess the outcome of an intervention, each method should be described separately, along with the reason for its inclusion. Each separate method or instrument should provide different information, not merely duplicate, or worse contradict, the findings obtained using another method.
Any methodological limitations of your study should be acknowledged in this section of the report. Most studies have limitations, and it is better to acknowledge them than to ignore them; ignoring them invites the reader to conjure up their own criticisms of the study design. Once you have identified any methodological limitations, you then need to describe the steps you took to minimize any adverse effects these limitations may have on the outcomes of the study.
The following example is taken from a Masters thesis. Note the way that the headings take the reader through the process by which the subjects were selected: the inclusion and exclusion criteria; the ethical procedures that were met; how the blood was collected and what the rejection criteria were; the kinds of control samples prepared; the method of testing used; and how validity and reliability were established.
Writing About Materials and Methods: Experimental Research
Hodge, Sandra J. (n.d.) Studies of Immuno-regulation in Inflammatory Processes, thesis prepared for the Degree of Master of Applied Science, University of South Australia
Section headings from chapters 2 & 3
Chapter 2: Subjects
2.1 Ethical guidelines
2.2 In vitro investigation of leucocyte activation markers and cytokines in whole blood
2.3 Guidelines for inclusion in study of leucocyte markers of inflammation in infected neonates
2.4 Selection of infected infants for inclusion in study
2.5 Selection of specificity controls for inclusion in study
2.6 Obstetric parameters for investigation
Chapter 3: Methods
3.1 Collection specifications
3.2 Controls
3.2.1 Procedure controls
3.2.2 Isotype control
3.2.3 Autofluorescence control
3.3 Stimulation of whole blood
3.4 Staining of leucocyte activation markers and cytokines with monoclonal antibodies
3.5 Determination of apoptotic changes using Annexin V staining
3.6 Comparison of apoptotic changes in stimulated whole blood and PBMCs
3.7 Analysis of stained samples
3.8 Validation experiments:
3.8.1 Confirmation of specificity of intracellular and surface staining
3.8.2 Stability of leucocyte activation markers in stored blood
3.8.3 Expression of activation markers in cord and venous blood
3.9 Reference ranges
3.10 Statistical analyses
Here are some hints for writing up the methods and materials sections of a research project: