Overview
Through both quantitative and qualitative testing, I worked to determine the efficacy of the selected main idea strategies. I hoped to see steady improvements in student test scores as the expository texts became more difficult, as time passed, and as students became more independent.
Methods of Data Collection
Method 1: Online Surveys
Google Forms were used for a pre-survey and a post-survey to mark changes that resulted from the research strategies. In particular, I wanted to identify student attitudes about identifying main idea, their self-perceptions about their abilities, and the strategies that they used to locate main idea. The surveys combined short-answer questions and Likert-scale items.
Method 2: Journal Reflections
An informal, ethnographic journal was used to mark my observations and reflections after each day’s instruction and practice. The journal included fragments of conversation that I heard from students about the day’s task as well as my observations of the strategies that were being employed during practice time.
Method 3: Teacher-Made Assessments
Teacher-made assessments for each of the expository excerpts provided quantitative data about potential student improvement as students practiced identifying main idea. The assessment was also used as a pre- and post-test and focused on three questions:
- How is the text organized?
- What is the main idea of the text?
- What specific facts or opinions are used to clarify or prove the main thought?
Method 4: Standardized Tests
Finally, standardized MAP (Measures of Academic Progress) test scores were compared to see whether students improved at identifying main idea. Sophomores took the adaptive MAP assessment in the Fall of 2017 and took it once again in the Spring of 2018.
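A pre/post comparison like the one above can be summarized as paired score differences. The sketch below is purely illustrative, using hypothetical RIT-style scores rather than the study's actual data, to show the kind of growth calculation such a comparison involves.

```python
# Illustrative sketch with hypothetical scores (not the study's data):
# summarizing paired Fall-to-Spring MAP score growth per student.
fall = [205, 210, 198, 220, 215]    # hypothetical Fall 2017 scores
spring = [212, 214, 205, 223, 221]  # hypothetical Spring 2018 scores

# Paired difference for each student: Spring score minus Fall score.
growth = [s - f for f, s in zip(fall, spring)]
mean_growth = sum(growth) / len(growth)
improved = sum(1 for g in growth if g > 0)

print(f"Mean growth: {mean_growth:.1f} points")
print(f"Students who improved: {improved} of {len(growth)}")
```

Because the same students are tested at both points, paired differences isolate individual growth rather than comparing two unrelated groups.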
Reflection
Triangulation
In my research, I used multiple qualitative and quantitative methods to better ensure that my study was sound and replicable. Using four different data sources helped to validate and verify my data. Whereas one data collection method alone could lead to specious claims, having four different methods allowed me to notice trends in my data and become more confident in my results. Triangulation also helped to minimize bias from elements like my qualitative, informal journal. Harder, data-driven methods such as MAP testing and the teacher-made assessments provided quantitative evidence to substantiate the comments and insights I recorded.
Validity
When thinking about the credibility of the research, it is important to address internal, external, and construct validity. MAP testing measures what it is intended to measure: its questions are aligned to grade-level content and demonstrate student mastery of standards. Because of the nature of the adaptive testing process, MAP validity is supported at the assessment design, experience, and question levels. My instruments and procedures were created to measure the skills and perceptions that they were meant to measure. For instance, my survey helped to capture student attitudes, while my pre- and post-tests measured students' abilities to identify and summarize the main idea of a text. For the most part, I believe that the results can be generalized beyond this study.
Reliability
Overall, most elements of this study would be replicable in other classrooms. The explicit teaching of the strategies, the proctoring of pre- and post-surveys, and the use of the three-question exit tickets could all be replicated elsewhere. As for the MAP Growth tests, the NWEA has developed these tests over time, and they have been tested for reliability. Additionally, the same student's data is evaluated at multiple points over several months, which increases reliability. Certain elements, like the informal journal, relied on my own discretion; their reliability could be affected by my mood, biases, or perceptions of students. Additionally, the use of certain nonfiction, expository articles relied upon the particular curricular objectives of my district. Other studies could use the same pieces but would probably benefit from using texts related to their own content area.
Instructional Decisions
Recognizing Diverse Learning Needs
In the initial survey, students selected their preferred methods of instruction and indicated their prior exposure to main idea identification strategies. I was able to use this information to guide my instruction and focus on the gaps in my students' learning. Additionally, through my record-keeping, I was able to monitor the ways that students were interacting with and talking about the instruction taking place.
Appropriate Goals and Instruction
Based on student responses to the initial survey and their preliminary scores on the pre-test, I was able to focus on the specific skills that students found challenging during the main idea identification process. For instance, while students were already adroit at finding essential details, I spent less time on that skill and more time on summarizing and organization.