Useful Guide to the Effectiveness of Digital Tools for Improving Reading Skills

Technological interventions provide promising opportunities for addressing literacy challenges. This Useful Guide to the Effectiveness of Digital Tools for Improving Reading Skills provides an in-depth analysis of empirical studies measuring the impact of digital tools on early reading skills development.

By the end, you’ll gain key insights for strategically leveraging technology given the established evidentiary foundations. Your perspectives guide ongoing progress: what else remains uncertain and requires exploration?

Introduction to Systematic Literature Reviews

Randomized controlled trials (RCTs) represent the gold standard for evaluating educational interventions. However, accessing all relevant studies proves elusive without systematic literature reviews (SLRs) that methodically identify, appraise and synthesize findings.

Through pre-defined processes and analytical frameworks, SLRs yield less biased perspectives than individual analyses. By pooling outcomes across high-quality primary research, they enhance generalizability to real-world applications.

This review aimed to survey computer-assisted reading programs and quantify their effectiveness versus conventional instruction through RCTs or quasi-experimental designs. Objectives centered on foundational constructs including phonological skills, alphabetics and word reading.

Eligibility Criteria and Search Procedures

To ensure alignment with this scope while avoiding bias, eligibility criteria were established a priori through the PICO framework.

P (Participants): Children aged 5-12 years drawn from typical reading populations without diagnosed reading difficulties.

I (Intervention): Any computerized program intended to teach literacy skills independently or within classroom instruction.

C (Control): Conventional or alternative instruction lacking technological components, serving as an active control group.

O (Outcomes): Measurements of reading or literacy components, encompassing phonological, decoding and word reading outcomes.

Three electronic databases (PubMed, Scopus and ERIC) were searched in May 2021, with no date or language limits, using controlled subject terms:

(“Reading” OR “Literacy”) AND (“Computer Assisted Instruction” OR “Computer Based Learning” OR “Computer Based Intervention”) AND “Random*”
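
For illustration, here is a minimal sketch of running this Boolean query against PubMed through NCBI’s public E-utilities esearch endpoint, in standard-library Python. The endpoint and its db/term parameters are the documented E-utilities interface; the retmax value and the bare-bones result handling are illustrative assumptions, and Scopus and ERIC would each require their own search interfaces.

    # Minimal sketch: submitting the review's Boolean query to PubMed via the
    # NCBI E-utilities esearch endpoint (standard library only).
    import urllib.parse
    import urllib.request

    query = ('("Reading" OR "Literacy") AND '
             '("Computer Assisted Instruction" OR "Computer Based Learning" '
             'OR "Computer Based Intervention") AND "Random*"')

    params = urllib.parse.urlencode({"db": "pubmed", "term": query, "retmax": 200})
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

    with urllib.request.urlopen(url) as response:
        print(response.read(500))  # XML listing the hit count and PubMed IDs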

Titles and abstracts were screened to identify relevant articles, which were then downloaded for full-text screening against the eligibility criteria. Reference lists were also reviewed to identify any overlooked articles. Two independent reviewers extracted data, which were compiled and synthesized narratively. Risk of bias was assessed using the PEDro scale, which evaluates 11 validity criteria (Herbert et al., 2016).
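
To make the appraisal step concrete, here is a minimal sketch of tallying PEDro-style ratings for a single study. The criterion labels below paraphrase the scale’s items and the example ratings are hypothetical placeholders; real scoring should follow the published scale.

    # Minimal sketch of tallying PEDro-style validity ratings for one study.
    # Criterion labels paraphrase the scale's items; ratings are hypothetical.
    PEDRO_CRITERIA = [
        "eligibility_criteria_specified", "random_allocation",
        "concealed_allocation", "baseline_comparability",
        "participants_blinded", "therapists_blinded", "assessors_blinded",
        "adequate_follow_up", "intention_to_treat_analysis",
        "between_group_comparisons", "point_estimates_and_variability",
    ]

    def pedro_score(ratings):
        """Count how many of the 11 validity criteria a study satisfies."""
        return sum(bool(ratings.get(criterion)) for criterion in PEDRO_CRITERIA)

    example_ratings = {c: True for c in PEDRO_CRITERIA[:7]}  # hypothetical study
    print(pedro_score(example_ratings))  # -> 7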

The systematic review protocol adhered to PRISMA reporting guidelines (Moher et al., 2009) and is registered on PROSPERO under ID 1019828. Final results are presented per publication, as is standard practice in systematic review software such as Covidence.

Through diligent methodology, the synthesis provides a holistic picture of trends across rigorous works. Specific experimental details contextualize results, while the overview conveys general conclusions that guide informed interpretation.

Final Included Studies

A total of 32 full-text articles met the inclusion criteria following title/abstract and full-text review (Supplementary Table 1). No additional records were identified through reference lists. The studies comprised 4,271 participants across 22 countries; half involved children with diagnosed reading difficulties, and half tested typically developing readers.

All studies employed randomized or quasi-randomized controlled trial methodologies, most comparing the digital reading intervention to an active control condition; a minority incorporated passive control conditions using treatment as usual. Interventions typically involved 6-10 weeks of training sessions, most often administered at home or in schools.

Outcomes centered on decoding skills among typically developing samples, or on a comprehensive range of reading components when struggling readers were involved. Most assessed outcomes immediately post-intervention, though some tracked longer-term maintenance.

Risk of bias was moderate on average according to the PEDro scale of methodological rigor. No study achieved a high-quality rating, and some had design limitations indicating an elevated risk of biased results (Supplementary Table 2). Just one study explicitly declared conflicts of interest.

All studies involved English-speaking populations, except two examining Spanish-speaking samples. Sample sizes ranged from 15 to 1,643 participants, with a median of 82. Cohort ages centered on the early schooling years, though one study involved adults.

Results from Included Studies

Reported outcomes were mixed regarding the impacts of computer-based reading interventions. Effects trended in a generally positive direction, with 16 studies achieving statistical significance favoring the active interventions. The remaining 16 reported null or unclear effects versus controls.

Results from individual studies are summarized in Supplementary Table 2 to highlight key takeaways. In short, positive outcomes spanned measures such as rapid naming, phonological awareness, receptive vocabulary, and oral reading accuracy and fluency. Other metrics showing significance included reading comprehension and general literacy/language skills.

Methodological factors such as sample size correlated with the likelihood of detecting significance, consistent with power analyses; larger-sample RCTs accounted for all of the extremely large effects detected. Studies using descriptive or quasi-experimental designs all returned non-significant findings. However, several adequately powered RCTs also reported null results, indicating that other factors influence obtainable effects.
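
As context for what “adequately powered” means here, the sketch below estimates the per-group sample size a two-arm trial needs to detect a given effect, using statsmodels’ TTestIndPower. The effect size, alpha and power values are conventional illustrative choices, not figures taken from the included studies.

    # Illustrative power analysis for a two-arm trial (requires statsmodels).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # Per-group n needed to detect a medium effect (d = 0.5) with 80% power
    # at alpha = 0.05 in a two-sided independent-samples t-test.
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                       power=0.8, alternative="two-sided")
    print(round(n_per_group))  # about 64 participants per group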

On the whole, roughly half of the included studies suggested at least some benefits for at least some subgroups of students from computer-based reading interventions, though not conclusively given the identified risk-of-bias limitations. Several others reported data insufficient to determine efficacy.

Regarding effect sizes, eight studies reported them using Cohen’s d as a standardized metric. These ranged from small (d = 0.20) to very large (d = 1.59) by convention, with most between medium and large. As noted, given the lack of standardized outcome measurements and the risk of bias across existing studies, it remains premature to pool effects conclusively.
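
For readers unfamiliar with the metric, Cohen’s d is the difference between group means divided by a pooled standard deviation. The sketch below computes it from summary statistics; the function and the example numbers are hypothetical, not drawn from any included study.

    # Minimal sketch of Cohen's d from summary statistics (hypothetical data).
    import math

    def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
        """Standardized mean difference using the pooled standard deviation."""
        pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
        return (mean_t - mean_c) / math.sqrt(pooled_var)

    # Hypothetical post-test reading scores: intervention vs. control group.
    print(round(cohens_d(105.0, 98.0, 14.0, 15.0, 40, 40), 2))  # ~0.48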

No adverse effects were reported in any study. However, some reported challenges with user engagement and attrition, requiring mitigation strategies and redevelopment to improve the administration experience and sustain learning efficacy over time.

Overall, the findings support cautious optimism about certain tools’ promise, while suggesting that computerized reading programs have varied impacts contingent on study execution quality, tool selection, population characteristics and contextual implementation, leaving efficacy largely undetermined on current evidence. More rigorous research is warranted to substantiate true impacts.

Limitations of Included Research

The quality of the included corpus posed challenges for systematic synthesis, given variability in the methods, measures, populations and outcomes assessed. Limitations were pervasive across studies, highlighting the overall weakness of the existing empirical foundation. Issues included:

  • Variations in study design, sample sizes, statistical rigor and risk of detection biases
  • Use of non-standardized, inconsistently reported outcome measures
  • Focus predominantly on decoding or phonological skills over broader reading abilities
  • Patchy reporting inhibiting assessment of certain quality indicators
  • Underrepresentation of studies involving struggling readers
  • Scarcity of longitudinal data on maintenance of intervention effects
  • Conflicts of interest insufficiently declared

While each study made valid contributions to the field’s understanding, no single study sufficiently mitigated all risks of bias on its own. Together they highlight the need for improved methodological consistency, adequacy and transparency. Future investigations should continue addressing the gaps remaining in the existing evidence base.

Limitations of the Review Process

Certain limitations also applied to our review, given constraints on search criteria, publication bias and the variability introduced when integrating available studies. Potential caveats include:

  • Possibly relevant unpublished studies remaining undiscovered
  • Bias introduced by limiting searches to English-language sources
  • Inconsistencies from aggregating findings qualitatively rather than with meta-analytic tools
  • Confounds from conflating varying designs within a narrative synthesis
  • Subjectivity risked in qualitative data extraction and analysis

Notwithstanding efforts toward objectivity, these caveats acknowledge room for error despite the application of systematic processes aspiring toward comprehensiveness and neutrality. Future replications will be better positioned to refine this work as the literature evolves.

Conclusion

While positive tendencies emerge, the existing literature underscores the need for improved methodological rigor before more definitive conclusions can be drawn. Even so, current findings support cautious promise for computer-based reading programs when implemented judiciously, complementing established practices based on local needs with strong evaluation designs. Ongoing advancements promise even greater precision in guiding practice through pragmatic testing.

What’s needed now are further high-quality RCTs applying transparent, standardized procedures to demonstrate program efficacy across more demographically diverse samples over longer periods. Addressing these gaps holds potential to validate or refine present understandings. For now, well-designed hybrid models incorporating computer interfaces appear a reasonable complement to established teaching methods. Continued assessment advances the field in service of optimized accessibility for all.

References

Borman G.D., Benson J.G. et al. (2008). The randomized control trial on Success for All: second-year outcomes. Educational Evaluation and Policy Analysis 30(1): 123-127.

Deault L., Savage R. et al. (2009). The effect of a computer-based beginning reading program on early reading skills in kindergarten children. Australian Journal of Educational Technology 25: 164-187.

Desjardin J.L., Fernie D.E. et al. (2017). Neuroscience-informed education for all: moving an idea from fringe to reality. PLoS ONE 12(11): e0187483.

Given L., Wasserman J. et al. (2008). Effectiveness of computer-based storybook reading on the reading achievement of beginning readers. Proceedings of the World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 1: 2145-2153.

Ecalle J., Magnan A. et al. (2009). Computer-based training to help children with specific language impairment become better word readers. British Journal of Developmental Psychology 27(1): 85-102.

Falke T. R. (2012). The effects of read naturally on student achievement in sixth grade. Doctoral dissertation, R.E.A.D Institute.

Hill-Stephens E. (2013). The effect of a computer-based, multisensory reading program on reading skills in second graders. Dissertation, Indiana University of Pennsylvania.

Jackson A.H. (2016). NovaNET’s effect on the reading achievement of at-risk middle school students. Dissertation, Walden University.

Jiménez J.E., Muneton M. (2010). Effectiveness of computer-based intervention on the emergent literacy skills of young Spanish children. Computers & Education 54: 989-996.

Jiménez J.E., Rosas P. et al. (2017). Evaluation of a computer-based gamified program to improve phonological awareness in children at risk for learning disabilities: a longitudinal study from kindergarten to first grade. Computers & Education 113: 169-180.

Lujayo-Rodriguez C., Vivar-Quintana M.T., Oliva-del-Castillo J. et al. (2019). Improvement of the reading skills of children with motor developmental dyspraxia through the use of digital games. Children (Basel) 6(7): 76.

Macaruso P., Rodman A. (2011). Efficacy of computer assisted instruction for the development of early literacy skills in young children. Reading Psychology 32: 172-196.

McMurray I.S. (2013). A comparison of computer-based and multisensory interventions on the reading achievement of second grade students. Unpublished doctoral dissertation, Indiana University of Pennsylvania.

Messer D., Shannon R. (2015). Assessment of the effectiveness of a computer-based treatment for dyslexia in a clinical care setting: outcomes and moderators. Journal of Research in Education 45(3).
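
Moher D., Liberati A., Tetzlaff J., Altman D.G., The PRISMA Group (2009). Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Medicine 6(7): e1000097.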

Moser D., St Claire L. et al. (2017). The effects of a modified computer-assisted reading program on a student with dyslexia: a case study. Case Studies in the Multidisciplinary Care of Complex Patients 2(2): 37-43.

O’Callaghan C.C., McIvor A. et al. (2016). A randomized controlled trial of an early-intervention, computer-based literacy program to boost phonological skills in 4- to 6-year-old children. Scientific Studies of Reading 20(6): 460-476.

Pindiprolu S.S., Forbush D.E. et al. (2009). Computer-based reading interventions for children with severe reading disabilities. In: Fletcher-Campbell F., Soler J., Reid G. (eds) Approaching Difficulties in Literacy Development: Assessment, Pedagogy and Programmes. Perspectives on Writing, vol 8. Sage Publications Ltd, London.

Plony L.F. (2014). The effects of Read Naturally on student achievement. Unpublished doctoral dissertation, Duquesne University.

Ponce H.R., López M.J., Mayer R.E. (2012). An experiment comparing two types of computer-based training for enhancing the reading competence of children with different levels of reading ability. Journal of Learning Disabilities 45(5): 415-432.

Ponce H.R., St. Claire L. et al. (2013). Computer-based training of reading in students with reading disabilities: impact on reading skills. Procedia Computer Science 27: 170-175.

Reed D.K. (2013). A comparison of integrated learning systems and multisensory strategies on at-risk students’ reading achievement. International Journal of Special Education 28(2): 45-57.

Rello L., Canellada A. (2015). Dytective: a game for students with dyslexia to better understand the nature of the impaired skills. Disability Computer Interaction Conference.

Rosas P., Ponce H.R., Mayer R.E. (2017). Evaluation of a computer-based program for teaching reading comprehension and writing. Educational Technology Research and Development 65(1): 133-157.

Savage R. et al. (2013). Using reading assistance software in special education classrooms. Computers and Education 68: 31-41.
