QuickScreen Dyslexia Research

RESEARCH UPDATE

Research study results for the year 2015-2016 (conducted with several universities and participants via the BDA website) found ‘strong statistical evidence of an association between independent dyslexia diagnosis and the QuickScreen test indication.’ See below for the full statistical analysis.

These early research results indicate that where a candidate receives a mild, moderate or strong dyslexia indication, they are very likely to have been correctly identified as dyslexic.

Non-dyslexic control group samples show no candidates receiving mild, moderate or strong indicators.

A number of candidates receive a borderline result that either requires further investigation or is not considered significant because all their other results are within normal limits. There is also an overlap here with candidates who are dyslexic but are performing at normal levels of literacy and speed of processing, i.e. they are largely compensated; in such cases, where required, further diagnosis may be sought.

Signed: Dr D. Walker
Dated 26.11.2016

Dr. D. Walker – Dyslexia Consultant for Pico Education
B.A. (Hons) PGCE Dip. SpLd. PhD – Dyslexia in Higher Education
17 Wellington Sq.
Hastings,
East Sussex
TN34 1PB

Email: Picoeducation@aol.com / info@qsdyslexiatest.com
Tel: 01424-719693

 

RESEARCH UPDATE

During the last year we conducted trials at university and also with members of the public via the BDA website. Over 2,000 candidates have now completed the test, but for this research we selected only those who had a previous dyslexia diagnosis. We also invited participation from a control group of candidates not known to have any difficulties.

The results from the University study and the larger independent study have both come up with similar positive results. All of the research is being posted on our website, together with testimonials and user feedback, at qsdyslexiatest.com.

In brief, the QS test has shown ‘strong statistical evidence of an association between the independent dyslexia diagnosis and the QuickScreen test indication’.

Both studies have indicated that anyone who gets a ‘mild, moderate or strong result on QS is highly likely to be dyslexic and their result could in time be a sufficient substitute for a full dyslexia test’ when arranging financial and study support for individual students or making reasonable adjustments for individuals within their place of work.

There is also strong statistical evidence of ‘a clear association between the speed of processing and dyslexia’ and in QS this is one of the key aspects of the report conclusion.

We are continuing with another university-based research trial in the coming academic year 2016-2017 and anticipate that by this time next year we will have implemented minor adjustments to further improve overall accuracy. We plan to adjust the borderline category so as to reduce the number of candidates in this section. We will also continue our research into other variables that will enhance the model for the probability of dyslexia.
Pico Educational Systems

November 2016
Email – picoeducation@aol.com
Tel: 01424-719693

 

QuickScreen Dyslexia Test

Initial Analysis
Introduction

QuickScreen is a computerised screening tool for adults, developed with the aim of providing a reasonably in-depth assessment of dyslexia.

There have been many models for assessment, from the early medical approach through to phonological skills testing and the social models of dyslexia that place less emphasis on the value of testing. Each of these models has its strengths and weaknesses and can be seen as more or less applicable to the individual being assessed depending upon the aims of the assessment.

The traditional model used by educational psychologists and dyslexia specialists, which aims to establish a discrepancy between literacy acquisition and underlying ability, continues to be required by most educational establishments. Higher education institutions in England and Wales offer support to students with a formal diagnosis of dyslexia, including extra time in examinations and coaching in study skills (Zdzienski, 2001).1

One of the problems with many existing online tests is that they do not provide the detailed underlying skills and literacy levels that would be useful at university level. QuickScreen, which is specifically aimed at pre-university students through to postgraduate level, also seems to work effectively for adults who have not had the opportunities provided by formal academic training. It assumes neither high nor low levels of performance, but it does challenge individuals to test themselves in a relaxed environment.

Since most universities require a full assessment by an Educational Psychologist before students are granted any concessions or support (Singleton, 1999a),2 at present the QuickScreen report can best be used to inform the next stage and to establish the need for study support.

QuickScreen is intended not only to identify dyslexia but also to provide a comprehensive, targeted cognitive profile of learning strengths and weaknesses. It can furthermore be used to plan appropriate support in the learning competencies required, as it tests adult speed of reading, writing, typing, comprehension, listening skills, spelling and punctuation. Six subtests produce a profile of verbal, visual and vocabulary skills, together with the well-established underlying skills relating to dyslexia: memory, sequencing and processing.

QuickScreen uses data from the battery of subtests to arrive at three conclusions. The first is to establish whether there are indicators of dyslexia, the second to assess levels of literacy and the third to highlight any difficulties with speed of processing. By detailed cross-referencing of results data, QuickScreen has made it possible to produce a computer-generated report covering all three areas.

Additionally it provides a comprehensive literacy and attainment profile. This enables tutors to compile a statement of individual needs for study support. They can either use the report to confirm the need for a full dyslexia assessment or add their comments and use them as relevant background evidence in establishing a case for support.

Early Indications

An essential step in the evaluation of any diagnostic/screening test is to assess its accuracy via diagnostic accuracy measures. These measures for QuickScreen are based on observational data compiled over a number of years by Pico Educational Systems Ltd.

These data were collected from participants completing the online assessment via three sources: a link offered on the British Dyslexia Association (BDA) website, personally sent links to individual email addresses, and some university trials.

Initial results suggested tentatively that a positive QS result of Mild or above could potentially be used to make reasonable adjustments without obtaining a full dyslexia assessment from an Educational Psychologist. A dyslexic student would be about 4.4 times as likely as a non-dyslexic student to obtain this result. Alternatively, if a negative QS result of None were used to advise students against obtaining an assessment from an Educational Psychologist, a dyslexic student would be about 0.4 times as likely as a non-dyslexic student to obtain this result (initial university trial).3

Early indications from the trial suggested strongly that QuickScreen does differentiate between dyslexic and non-dyslexic adults. Furthermore, there appears to be a clear association between speed of processing and a dyslexia diagnosis.

Speed of Processing is a measure of the ability to assimilate, process and record written data under prescribed conditions. These are important factors in the formalities of learning and study, since the measure replicates many of the skills needed for efficiency in written literacy tasks. Research in recent years has highlighted the link between slow processing speeds and the various elements involved in the word decoding process experienced by people with dyslexia (Breznitz, 2008).4,5

It has now been found that there is strong statistical evidence of an association between the independent dyslexia diagnosis and the QuickScreen test indicator. Where a candidate gets a mild, moderate or strong dyslexia indication, then they are very likely to have been correctly identified with dyslexia. Non-dyslexic control group samples show that there are no candidates getting mild, moderate or strong indicators.

It is anticipated that, in time, this would serve as a potential substitute for a full assessment when arranging study support and applying for financial allowances at university level.

Data

The QuickScreen dyslexia test results were provided in comma separated value (csv) format in a number of separate files. These csv files all had a consistent layout and were combined prior to analysis to create a single dataset.

Test results were available for 245 participants with an independent dyslexia diagnosis; 193 (78.8%) had a positive diagnosis and 52 (21.2%) a negative diagnosis. The QuickScreen test reports the possibility of dyslexia in terms of one of five possible indications: None, Borderline, Mild, Moderate, or Strong. Of the 245 participants included in the analysis, 40 (16.3%) received an indication of None; 71 (29.0%) an indication of Borderline; 65 (26.5%) Mild; 62 (25.3%) Moderate; and 7 (2.9%) Strong (as shown in the cross-tabulation in Table 1).
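The file-combination step described above can be sketched with the standard library. The filenames, column names and rows below are hypothetical stand-ins for the separate results exports; the point is only that a consistent layout lets the files be stacked row-wise into one dataset.

```python
import csv
import io

# Hypothetical miniature exports sharing one consistent column layout.
csv_a = "participant,diagnosis,qs_indication\n1,positive,Moderate\n2,negative,None\n"
csv_b = "participant,diagnosis,qs_indication\n3,positive,Mild\n"

rows = []
for text in (csv_a, csv_b):
    # Identical headers across files, so rows can simply be appended.
    rows.extend(csv.DictReader(io.StringIO(text)))

print(len(rows))  # -> 3
```

In practice each `io.StringIO` would be an open file handle for one of the csv files; the same stacking logic applies.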

Information was also available to indicate where some participants were known university students. One hundred and eighteen participants (48.2%) were identified as known university students and 127 (51.8%) were unknown with regard to their university status. In order to provide greater clarity on how well the QuickScreen test is performing for potentially better-compensated dyslexics, the analysis (i.e., calculation of the diagnostic accuracy measures) was repeated splitting the results by this university grouping.

Methods

The sensitivity of a diagnostic test indicates how good it is at finding people with the condition in question. It is the probability that someone who has the condition is identified as such by the test.

The specificity of a diagnostic test, by contrast, indicates how good it is at identifying people who do not have the condition: it is the probability that someone who does not have the condition is identified as such by the test. In this case, the QuickScreen test has five possible outcome indications. We can therefore calculate the sensitivity of each category in identifying people with dyslexia (treating each test category as a “test positive”) and also the specificity of each category in identifying people without dyslexia (treating each category as a “test negative”).
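The two definitions above reduce to simple ratios of counts. A minimal sketch, using made-up counts rather than the study data, treating “Mild, Moderate or Strong” as the test-positive indication:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """P(test positive | condition present)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """P(test negative | condition absent)."""
    return true_neg / (true_neg + false_pos)

# Illustrative counts only: 120 of 193 dyslexic participants test positive,
# 50 of 52 non-dyslexic participants test negative.
sens = sensitivity(true_pos=120, false_neg=73)
spec = specificity(true_neg=50, false_pos=2)
```

With five categories, the same two functions are applied once per choice of cut-point on the indication scale.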

Another important set of accuracy measures are the predictive values of the test. These are also termed the “post-test probabilities” and provide the probability of a positive or negative diagnosis given the test result. The predictive values therefore provide important information on the diagnostic accuracy of the test for a particular participant, answering the question “How likely is it that I have or don’t have dyslexia given the test result that I have received?”

The predictive values depend on the prevalence of the condition in question in the population, i.e., the proportion of individuals who have dyslexia, as well as the sensitivity and specificity of the test. As the sample of data available is a selection of “cases” with a positive dyslexia diagnosis and “controls” with a negative dyslexia diagnosis from observational data, rather than a random sample from the population, the true prevalence is unknown.

Based on previous research studies and the figures quoted by dyslexia organisations, it was agreed that an estimated prevalence of 10% would be used when calculating the predictive values. The observed prevalence in the data available was considerably higher than this (78.8%), indicating an oversampling of dyslexic participants. In screening situations, the prevalence is almost always small and the positive predictive value low, even for a fairly sensitive and specific test.
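The dependence of the predictive values on prevalence follows from Bayes’ theorem. A sketch with illustrative sensitivity and specificity values (not the study’s estimates) shows why the PPV at the assumed 10% prevalence is far lower than it would be at the 78.8% prevalence observed in the oversampled data:

```python
def predictive_values(sens: float, spec: float, prevalence: float):
    """Post-test probabilities from sensitivity, specificity and an
    assumed population prevalence (standard Bayes' theorem formulae)."""
    ppv = (sens * prevalence) / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = (spec * (1 - prevalence)) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Same hypothetical test accuracy, two different prevalences:
ppv_low, _ = predictive_values(sens=0.62, spec=0.97, prevalence=0.10)
ppv_high, _ = predictive_values(sens=0.62, spec=0.97, prevalence=0.788)
```

For any fixed sensitivity and specificity, lowering the prevalence lowers the positive predictive value, which is the point made in the paragraph above.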

For each QuickScreen test category, the sensitivity, specificity, and positive and negative predictive values are therefore estimated. Ninety-five percent confidence intervals are also provided for each, to capture the uncertainty in the estimates.

The standard estimation of binomial proportions, such as the sensitivity and specificity of a diagnostic test (i.e., taking the observed sample proportion), has been shown to be less than adequate, particularly when the sample size is relatively low. Applying a continuity correction can provide a better estimate and allow more accurate confidence intervals to be developed. Diagnostic accuracy measure values are therefore calculated using continuity-adjusted estimates and continuity-adjusted logit intervals (for further information and the formulae applied see Mercaldo, Zhou and Lau, 2005).6
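One common form of this adjustment (adding 0.5 to each cell, then building the interval on the logit scale) can be sketched as below. This is an assumption about the exact variant; the report follows the formulae of Mercaldo, Zhou and Lau (2005), which should be consulted for the precise definitions.

```python
import math

def adjusted_logit_interval(successes: int, n: int, z: float = 1.96):
    """Continuity-adjusted proportion with a logit-scale confidence
    interval (one common variant of the adjustment; illustrative only)."""
    x = successes + 0.5                      # continuity adjustment
    m = n + 1.0
    p = x / m                                # adjusted proportion estimate
    se = math.sqrt(1.0 / x + 1.0 / (m - x))  # standard error on logit scale
    logit = math.log(p / (1.0 - p))
    expit = lambda t: 1.0 / (1.0 + math.exp(-t))
    return p, expit(logit - z * se), expit(logit + z * se)

# e.g. 29 "None" results among 52 non-dyslexic participants:
p, lo, hi = adjusted_logit_interval(29, 52)
```

Building the interval on the logit scale and transforming back keeps both limits strictly inside (0, 1), which the naive normal-approximation interval does not guarantee; the 0.5 adjustment also keeps the interval defined when a cell count is zero.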

Alongside these diagnostic accuracy measures, we have carried out a statistical test to assess whether there is evidence of an association between the QuickScreen test outcome and the independent dyslexia diagnosis. This would be expected if the test is useful in discriminating between dyslexic and non-dyslexic individuals. Fisher’s exact test7 is applied (rather than a large-sample test such as the Chi-square test) to account for the fact that we have relatively low sample sizes, which can bias the results of asymptotic tests (as the normal approximation of the multinomial distribution can fail).
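For the 2x2 case, Fisher’s exact test can be written out in a few lines: sum the hypergeometric probabilities of every table (with the observed margins fixed) that is no more likely than the observed table. The report applies the test to the full 2x5 cross-tabulation; as an illustration, the sketch below uses a hypothetical 2x2 collapse of the indication scale (None/Borderline vs Mild-or-above, by diagnosis) with made-up counts.

```python
from math import comb

def fisher_exact_2x2(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x: int) -> float:  # P(top-left cell = x) under the null
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    support = range(max(0, col1 - row2), min(row1, col1) + 1)
    # Sum over all tables as extreme as (no more probable than) the observed.
    return sum(prob(x) for x in support if prob(x) <= p_obs * (1 + 1e-9))

# Illustrative counts only (not the study's Table 1):
p_value = fisher_exact_2x2(a=120, b=73, c=2, d=50)
```

Because every admissible table is enumerated, the p-value is exact for any cell counts, which is precisely why the test is preferred over the Chi-square approximation at low sample sizes.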

Validity

It should be noted when interpreting the results of this analysis that their validity depends on the applicability of the sample participants to the population of interest. This includes the spectrum of severity of dyslexia in the sample. Where this might not reflect the target population, a study is sometimes said to suffer from “spectrum bias”.

The potential for other biases such as classification bias, where misclassification of participants in their independent dyslexia diagnosis may have occurred, should also be considered.

A more formal, prospective cohort study may provide a more reliable assessment of the diagnostic test accuracy, by helping to eliminate potential sources of bias.

 

Results

The results of the analysis outlined in the Methods section are presented below, first for the full 245 participants and then for the known university student group (n=118) and finally the unknown university status group (n=127).

All Participants

A Fisher’s exact test (on the data in Table 1) finds strong statistical evidence (p-value < 0.0001) of an association between the independent dyslexia diagnosis and the QuickScreen test indication.

results table 1

The proportion of participants without dyslexia who received each QuickScreen test result (i.e., sample specificity) and the proportion of participants with dyslexia who received each QuickScreen test result (i.e., sample sensitivity) are shown in Table 2.

Results table 2

For example, 55.8% of participants without dyslexia received a QuickScreen indication of “None”, and 32.1% of participants with dyslexia received a QuickScreen indication of “Moderate”.

The proportion of participants with and without dyslexia in each QuickScreen test category are shown in Table 3. These are the raw sample predictive values, based on the observed sample prevalence, and do not reflect estimates for the population.

results table 3

results table 4

For example, 72.5% of those participants with a QuickScreen test result of “None” were non-dyslexics, and 100% of those participants with a QuickScreen test result of “Strong” were dyslexic.

The diagnostic accuracy measures for each QuickScreen test category, estimated using the adjusted method (with adjusted logit confidence intervals) and assuming a 10% prevalence of dyslexia are shown in Table 4.

In addition to considering each category in isolation, the measures for some combinations of the QuickScreen test result are also provided. For example, we estimate that 96.6% (95% Confidence Interval [CI] = 86.9%, 99.2%) of non-dyslexic individuals will receive a QuickScreen indication of “None or Borderline”. An individual receiving a QuickScreen indication of “Mild, Moderate or Strong” is estimated to have a 69.0% (95% CI = 35.7%, 90.0%) probability of a positive dyslexia diagnosis.

University Group

The results of the analysis outlined in the Methods section for the known university student group are presented below.

Test results were available for 118 known university students with an independent dyslexia diagnosis; 77 (65.3%) had a positive diagnosis and 41 (34.7%) a negative diagnosis. Of these 118 participants, 28 (23.7%) received a QuickScreen indication of None; 41 (34.7%) an indication of Borderline; 33 (28.0%) Mild; 15 (12.7%) Moderate; and 1 (0.8%) Strong (as shown in the cross-tabulation in Table 5).

results table 5

A Fisher’s exact test on these data finds strong statistical evidence (p-value <0.0001) of an association between the independent dyslexia diagnosis and the QuickScreen test indication.

The proportion of known university students without dyslexia who received each QuickScreen test result (i.e., sample specificity) and the proportion of known university students with dyslexia who received each QuickScreen test result (i.e., sample sensitivity) are shown in Table 6.

results table 6

The proportion of known university students with and without dyslexia in each QuickScreen test category are shown in Table 7. These are the raw sample predictive values, based on the observed sample prevalence, and do not reflect estimates for the population.

results table 7

The diagnostic accuracy measures for each QuickScreen test category for the known university students are shown in Table 8. These are estimated using the adjusted method (with adjusted logit confidence intervals) and assuming a 10% prevalence of dyslexia.

 

results table 8

Unknown University Status Group

The results of the analysis outlined in the Methods section for the unknown university status group are presented below.

Test results were available for 127 participants with unknown university status with an independent dyslexia diagnosis; 116 (91.3%) had a positive diagnosis and 11 (8.7%) a negative diagnosis. Of these 127 participants, 12 (9.4%) received a QuickScreen indication of None; 30 (23.6%) an indication of Borderline; 32 (25.2%) Mild; 47 (37.0%) Moderate; and 6 (4.7%) Strong (as shown in the cross-tabulation in Table 9).

results table 9

A Fisher’s exact test on these data finds strong statistical evidence (p-value <0.0001) of an association between the independent dyslexia diagnosis and the QuickScreen test indication.

The proportion of participants with unknown university status without dyslexia who received each QuickScreen test result (i.e., sample specificity) and the proportion of participants with unknown university status with dyslexia who received each QuickScreen test result (i.e., sample sensitivity) are shown in Table 10.

results table 10

The proportion of participants with unknown university status with and without dyslexia in each QuickScreen test category is shown in Table 11. These are the raw sample predictive values, based on the observed sample prevalence, and do not reflect estimates for the population.

results table 11

Notably, the proportion of participants in the Borderline group with a positive diagnosis is somewhat higher in the unknown university status group compared with the known university student group (83.3% compared with 56.1%).

 

results table 12

The diagnostic accuracy measures for each QuickScreen test category for the participants with unknown university status are shown in Table 12. These are estimated using the adjusted method (with adjusted logit confidence intervals) and assuming a 10% prevalence of dyslexia.

Speed of Processing

Another area of potential further research is to explore how the QuickScreen speed of processing results vary between participants with and without dyslexia.

Table 13 below shows a cross-tabulation of the dyslexia diagnosis versus the speed of processing results available from the QuickScreen data.

results table 13

Of those 52 participants with a negative dyslexia diagnosis, 27 (51.9%), 23 (44.2%) and 2 (3.8%) have No Difficulties, Average and Difficulties speed of processing results, respectively. Of those 192 with a positive dyslexia diagnosis, 12 (6.3%), 103 (53.6%) and 77 (40.1%) have No Difficulties, Average and Difficulties speed of processing results, respectively.

Hence, there appears to be a clear association between speed of processing and dyslexia diagnosis. This supports the case for considering including speed of processing as an explanatory variable in a model for the probability of dyslexia.
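As a quick arithmetic check, the quoted proportions follow directly from the Table 13 counts (note that 2 of 52 is 3.8% to one decimal place):

```python
# Counts from Table 13, by independent diagnosis.
negative = {"No Difficulties": 27, "Average": 23, "Difficulties": 2}   # n = 52
positive = {"No Difficulties": 12, "Average": 103, "Difficulties": 77} # n = 192

def row_percentages(counts: dict) -> dict:
    """Each cell as a percentage of its row total, to one decimal place."""
    total = sum(counts.values())
    return {k: round(100.0 * v / total, 1) for k, v in counts.items()}

print(row_percentages(negative))
# -> {'No Difficulties': 51.9, 'Average': 44.2, 'Difficulties': 3.8}
```

The near-reversal of the "No Difficulties" and "Difficulties" proportions between the two rows is the association described above.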

Potential Further Work

The analysis presented in this report provides an initial assessment of the diagnostic accuracy of the QuickScreen dyslexia test. Further work could potentially be undertaken to expand on this initial analysis and to develop the test further.

In addition to the overall QuickScreen test indications, individual scores are available for various processes such as visual, verbal, memory, reading, comprehension, etc. By using the individual test scores and additional participant demographics we could potentially build a model to predict the probability of dyslexia. This model could then be used to possibly adjust the current QuickScreen indication category boundaries to optimise the resulting diagnostic accuracy measures. This further study may be particularly useful in helping to distinguish between individuals currently in the Borderline group by accounting for participants’ university status.

 

References
1. Hunter-Carsch, M. & Herrington, M. (Eds.) (2001) Dyslexia and Effective Learning.

2. Singleton, C. (1999) Dyslexia in Higher Education: Policy, Provision and Practice. University of Hull.

3. Cardiff University initial trial indications, Oct. 2016: Sarah Howey (Cardiff University), Todd M. Bailey (Cardiff University), and ORS TBD.

4. Reid, G. The Sage Dyslexia Handbook. Sage Publications. http://www.drgavinreid.com/free-resources/dyslexia-an-overview-of-recentresearch/

5. Breznitz, Z. (2008) The Origin of Dyslexia: The Asynchrony Phenomenon. In Reid, G., Fawcett, A., Manis, F. & Siegel, L. (Eds.) (2008).

6. Mercaldo, N. D., Zhou, X-H. & Lau, K. F. (2005) Confidence Intervals for Predictive Values Using Data from a Case Control Study. UW Biostatistics Working Paper Series, Working Paper 271. http://biostats.bepress.com/uwbiostat/paper271

7. Fisher’s exact test: https://en.wikipedia.org/wiki/Fisher's_exact_test

Statistical Report Author: Sarah Marley, Select Statistical Services, Oxygen House, Grenadier Road, Exeter Business Park, Exeter, Devon, EX1 3LH

Produced by
Pico Educational Systems Ltd
17 Wellington Square
Hastings,
East Sussex
TN34 1PB


Pico Educational Systems Ltd is a supporting Corporate Member of the British Dyslexia Association


Diagnostic Accuracy of Quick Screen

Sarah Howey (Cardiff University), Todd M. Bailey (Cardiff University), and ORS TBD

Introduction

Dyslexia is characterised by a significant difficulty with reading despite normal intelligence (Stein, 2001) and it is thought to affect between 5-17% of school-aged children (Shaywitz & Shaywitz, 2001), with phonological deficits persisting into adulthood (Hatcher, Snowling, & Griffiths, 2010). Difficulties with phonemic processing and phonemic awareness are present even in students in higher education, with notable differences between dyslexic undergraduates and non-dyslexic undergraduates (Snowling, Nation, Moxham, Gallagher, & Frith, 1997).

Higher education institutions in England and Wales offer support to students with a formal diagnosis of dyslexia, including extra time in examinations and coaching in study skills (Zdzienski, 2001). All students with a formal diagnosis of dyslexia can also apply for a financial grant through Student Finance that will provide equipment and software to aid students with SpLDs through their studies. In order to receive any of these special provisions, according to Student Finance and most university guidelines, students must have a formal Educational Psychologist report confirming they suffer from a SpLD such as dyslexia (Singleton, 1999a).

To obtain a formal diagnosis, students in higher education must independently fund a report from an Educational Psychologist. Those who have been identified as dyslexic during their schooling can commit to this expense with confidence. However, not all dyslexic students are identified prior to attending higher education, and particular difficulties may only first be noticed when beginning their university studies. Many students in this position may be reluctant to spend money on an assessment by an Educational Psychologist without some additional confirmation that they may suffer from a SpLD. In order to determine whether a student should be referred to an Educational Psychologist, some universities have their own screening processes.

Students with Specific Learning Difficulties (SpLDs) are offered special provisions at Cardiff University in the form of extra time in examinations and study support. Cardiff University offers a screening process for those who are concerned they may have a SpLD. Students who present to Student Support are interviewed to assess their specific difficulties. Those experiencing difficulties associated with dyslexia complete an online questionnaire, Quick Scan, to confirm what the student finds particularly difficult. Quick Scan compiles a report based on the student’s answers, commenting on the student’s learning style and specifying the number of indicators of dyslexia the student shows. Student Support then makes a recommendation based on the findings from Quick Scan and the initial interview. Student Support at Cardiff University have been trialling a new screening tool, Quick Screen (QS), that aims to assess a student’s difficulties in more depth by using tests similar to the ones used by Educational Psychologists in their formal diagnosis. Quick Screen consists of 9 tests that assess a student’s processing skills, visual skills, verbal skills, vocabulary, memory and sequencing skills.

Quick Screen takes approximately one hour to complete and can be completed at home on a student’s computer. Quick Screen uses a student’s sub-test scores to calculate how many indicators of dyslexia a particular student displays. Student Support can then use this report to make a recommendation about whether a student should seek an Educational Psychologist report.

QS classifies each participant along a scale of indicators of dyslexia, as None, Borderline, Mild, Moderate, or Strong. For screening purposes, the question of greatest interest is which students could be classified with reasonable certainty on the basis of QS outcomes, and which participants would require further evaluation, perhaps by an Educational Psychologist. Conceivably, classifications at the lower end of the QS scale might reliably indicate the absence of dyslexia, or these classifications could be non-diagnostic. Similarly, classifications at the upper end of the QS scale might reliably indicate the presence of dyslexia, or these classifications could be non-diagnostic. If QS is to be useful, either for determining which students should or should not receive reasonable adjustments or for determining which students should or should not be assessed by an Educational Psychologist, at least one end of the QS scale must be diagnostic, and reliably classify some participants.

If Quick Screen can successfully distinguish between dyslexic and non-dyslexic students, Student Support hope to be able to use it to determine whether a student would benefit from extra provisions. Students would have free access to Quick Screen, and there is a vision of using it as a substitute for the Educational Psychologist report required to qualify for university-granted special provisions, such as extra time in examinations. Students unable or unwilling to independently fund an Educational Psychologist report would then be able to receive extra help from their university. If Quick Screen were made available to all students online as a result of this study, more students who are concerned they may have a SpLD might be encouraged to seek help from Student Support.

This study aims to assess the diagnostic accuracy of Quick Screen with a case-control experiment, comparing the QS results for a sample of dyslexic students to results of control students who are assumed to be nondyslexic. If QS has some sensitivity to dyslexia, then we would expect the dyslexic participants to show a lower proportion of participants classified as having None of the indicators of dyslexia, and the non-dyslexic control group to show a higher proportion of participants with this QS classification. [also, we might check for monotonic increase in (%dys | QSinds) as QSinds increases]

Method

Participants

Participants were students studying at Cardiff University. Twenty-three dyslexic participants were recruited via an email from Student Support to students who were receiving support for dyslexia, asking those students if they would be interested in taking part in the study. Thirty-six nondyslexic control participants were recruited by making the study available to a panel of participants maintained by the School of Psychology; members of this panel are students or staff at Cardiff University. The control participants did not have a formal diagnosis of dyslexia, and were not known to Student Support Services. The majority of nondyslexic participants were students of Psychology, and their ages ranged from 19-26. The dyslexic participants studied a wide range of subjects and their ages ranged from 18-25. The participants in both groups were predominantly female.

Participants were given a £15 Amazon voucher for their participation.

Materials

The study required participants to have access to a computer with a working mouse. The consent form was completed online using Qualtrics survey software. Quick Screen is completed online, so participants were required to have internet access; some aspects of Quick Screen also required working speakers. Participants completed an online demographic questionnaire asking their age, sex and whether their vision or hearing is impaired. Quick Screen consists of 9 tests, including a short passage of text for participants to read, about 10 true/false comprehension questions on the text, a visual skills task (a series of trials choosing the appropriate shape to complete a grid), a multiple-choice synonym judgment vocabulary test, a working memory test involving sequences of numbers of increasing length, and a speeded symbol-to-number lookup task.

Design

This was a case-control study, with participants assigned to the dyslexia group or the nondyslexia group depending on whether or not they had a formal diagnosis of dyslexia.

Procedure

Dyslexic participants were invited to email the research assistant expressing an interest in taking part. These participants were then emailed a link to follow to complete the online consent form. Non-dyslexic participants signed up to the study via EMS and followed the link listed online to complete the consent form. Once the consent forms had been completed, participants’ details were recorded by the research assistant and forwarded to Student Support, who then assigned each participant to Quick Screen. All participants then received an email containing a personal link to Quick Screen. Once Quick Screen had been completed, Student Support forwarded each participant’s results to the research assistant, and the participant was then given their £15 Amazon voucher.

Results and discussion

Across the range of QS Classifications (None, Borderline, Mild, Moderate, Strong), participants in the nondyslexic control group were slightly more likely to be classified as None (50%) than Borderline (44%), as shown in Figure 1. Just two (6%) were classified as Mild. In contrast, dyslexic participants were more likely to be classified as Borderline (50%) than None (29%); some were also classified as Mild (17%), and one as Moderate (4%).

results graph 15

 

Figure 1. Distribution of participants by QS result, showing proportion of nondyslexic participants (shaded) and dyslexic participants (small circles) obtaining each QS Classification.

To assess the potential of QS results to be interpreted as definitively positive or negative for dyslexia, diagnostic odds ratios (DOR), true-positive fractions (TPF), and false-positive fractions (FPF) were calculated for each potential diagnostic threshold (None vs. Borderline or above, Borderline or below vs. Mild or above, Mild or below vs. Moderate). These measures are shown in Table Y, with one-tailed confidence intervals focusing on the tail of interest for a screening tool. Because there were in principle four potential diagnostic thresholds, the significance level for confidence intervals was set at . Diagnostic accuracy was not assessed for Moderate or below versus Strong because no participants in either group received a QS classification of Strong.

Table Y. Measures of diagnostic accuracy for ruling-out or ruling-in diagnoses as a function of potential QS diagnostic thresholds. Shading on QS outcomes indicates potential ruling-out, inconclusive, or ruling-in interpretations for various diagnostic thresholds. Measures include diagnostic odds ratio (DOR), true-positive fraction (TPF), and false-positive fraction (FPF).

results table 16

As shown in Figure 1, a QS classification of None was seen in a greater proportion of nondyslexic than dyslexic participants, so this outcome could potentially provide a ruling-out diagnosis. As shown in the TPF column of Table Y, this diagnostic criterion would correctly rule out dyslexia for 50% of the nondyslexic sample. However, as shown in the FPF column, it would incorrectly rule out dyslexia for 29% of the dyslexic sample. In any event, these figures must be interpreted with caution, since the confidence intervals are large. As shown in the DOR column, the odds of a QS classification of None rather than Borderline or above were somewhat lower for dyslexics than nondyslexics, but the confidence interval did not rule out the complete absence of an effect (dyslexia odds ratio DOR=0.36, CI < 1.44).

A QS classification of Mild was seen in a greater proportion of dyslexic than nondyslexic participants, and this outcome or higher could potentially provide a ruling-in diagnosis. This diagnostic criterion would correctly identify dyslexia for 22% of the dyslexic sample. However, it would also identify dyslexia for 6% of the nondyslexic sample; although these are counted here as false positives, it must be noted that the “nondyslexic” sample had not been assessed formally, and may well include some people with undiagnosed dyslexia. As above, these figures must be interpreted with caution since the confidence intervals are large. The overall diagnostic odds of a QS classification of Mild or above rather than Borderline or below were somewhat higher for dyslexics than nondyslexics, but the confidence interval did not rule out the complete absence of an effect (DOR=4.59, CI > 0.54).
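The measures above follow directly from a 2×2 table of QS outcome (at or above a threshold versus below it) against diagnostic group. A minimal sketch in Python of how DOR, TPF and FPF are computed, with a Wald interval on the log-odds scale; the counts used here are hypothetical, not the study's actual data:

```python
import math

def diagnostic_measures(tp, fn, fp, tn, z=1.645):
    """Diagnostic accuracy for one threshold of a screening tool.

    tp: dyslexic participants at or above the threshold
    fn: dyslexic participants below the threshold
    fp: nondyslexic participants at or above the threshold
    tn: nondyslexic participants below the threshold
    z:  normal quantile for the confidence level of interest
    """
    dor = (tp * tn) / (fn * fp)                # diagnostic odds ratio
    se = math.sqrt(1/tp + 1/fn + 1/fp + 1/tn)  # SE of log(DOR)
    ci = (math.exp(math.log(dor) - z * se),    # Wald interval computed
          math.exp(math.log(dor) + z * se))    # on the log-odds scale
    tpf = tp / (tp + fn)                       # true-positive fraction
    fpf = fp / (fp + tn)                       # false-positive fraction
    return dor, ci, tpf, fpf

# Hypothetical counts for a "Mild or above" threshold
dor, ci, tpf, fpf = diagnostic_measures(tp=5, fn=19, fp=2, tn=32)
```

The report quotes one-tailed intervals, so only the limit in the tail of interest would be reported; the published DOR and CI values come from the actual study counts, which differ from the illustrative numbers above.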

Only one participant received a QS classification of Moderate or above, so these data did not afford meaningful analyses of the diagnostic potential for this outcome to provide a threshold for ruling-in diagnoses.

Discussion

No potential diagnostic threshold achieved statistical significance in this small pilot study, as indicated by confidence intervals compatible with equal odds for dyslexic and nondyslexic participants. Bearing that in mind, the results tentatively suggest that QS did not provide a reliable ruling-out diagnosis, because a large fraction of dyslexic participants received a QS classification of “None”. In contrast, a QS score of Mild or above could potentially provide a ruling-in diagnosis. If the TPF in this sample generalises to larger samples, then about 1 in 5 dyslexics could be identified by QS, and might therefore not need to be referred to an Educational Psychologist for formal testing. This diagnostic threshold is associated with an apparent false-positive rate of about 1 in 20, which may be an acceptable level, particularly since the FPF obtained here is likely to be inflated by a number of undiagnosed cases of dyslexia, which are really true positives rather than false ones. The rate of false positives could be reduced by employing a higher threshold for a ruling-in diagnosis, but then only about 1 in 25 dyslexics would be identified by the screening tool.

These preliminary data suggest that QS could perhaps provide a useful level of diagnosticity for Cardiff University undergraduates. However, the number of participants was small, and a larger study would be necessary to estimate the diagnostic accuracy of QS in this population with greater precision.

Power analysis

To obtain a diagnostic odds ratio with a one-tailed margin of error within a factor of two of the actual odds ratio, the minimum number of dyslexic participants required is given by the formula:

where n is the number of dyslexic participants, c is the ratio of nondyslexic to dyslexic participants, and z is the quantile from the cumulative normal distribution corresponding to the significance level of interest.

Focusing on a priori tests of specific diagnostic thresholds, so that family-wise error can be ignored, the desired margin of error could be attained at the 95% confidence level with equal samples of at least 33 dyslexic and 33 nondyslexic participants. Alternatively, if it is easier to recruit nondyslexic than dyslexic participants, sufficient participants at a two-to-one ratio would require at least 27 dyslexic participants and 54 nondyslexic participants.

Adjusted QS scores

Across the range of QS Classifications (None, Borderline, Mild, Moderate, Strong), participants in the nondyslexic control group were somewhat more likely to be classified as None (58%) than Borderline (39%), as shown in Figure XX. Just one (3%) was classified higher than this, as Moderate. In contrast, dyslexic participants were somewhat more likely to be classified as either Borderline (35%) or Mild (35%) rather than None (26%), and one (4%) was also classified as Strong.

results table 17

Figure XX. Distribution of participants by adjusted QS result, showing proportion of nondyslexic participants (shaded) and dyslexic participants (small circles) obtaining each QS Classification.

To assess the potential of QS results to be interpreted as definitively positive or negative for dyslexia, diagnostic odds ratios (DOR), true-positive fractions (TPF), and false-positive fractions (FPF) were calculated for each potential diagnostic threshold (None vs. Borderline or above, Borderline or below vs. Mild or above, Mild or below vs. Moderate). These measures are shown in Table YY, with one-tailed confidence intervals focusing on the tail of interest for a screening tool. Because there were in principle four potential diagnostic thresholds, the significance level for confidence intervals was set at .

Table YY. Measures of diagnostic accuracy for ruling-out or ruling-in diagnoses as a function of potential adjusted QS diagnostic thresholds. Shading on QS outcomes indicates potential ruling-out, inconclusive, or ruling-in interpretations for various diagnostic thresholds. Measures include diagnostic odds ratio (DOR), true-positive fraction (TPF), and false-positive fraction (FPF).

As shown in Figure XX, a QS classification of None was seen in a greater proportion of nondyslexic than dyslexic participants, so this outcome could potentially provide a ruling-out diagnosis. As shown in the TPF column of Table YY, this diagnostic criterion would correctly rule out dyslexia for 58% of the nondyslexic sample. However, as shown in the FPF column, it would incorrectly rule out dyslexia for 26% of the dyslexic sample. In any event, these figures must be interpreted with caution, since the confidence intervals are large. As shown in the DOR column, the odds of a QS classification of None rather than Borderline or above were somewhat lower for dyslexics than nondyslexics, but the confidence interval did not rule out the complete absence of an effect (dyslexia odds ratio DOR=0.26, CI < 1.03).

A QS classification of Mild was seen in a greater proportion of dyslexic than nondyslexic participants, and this outcome or higher could potentially provide a ruling-in diagnosis. This diagnostic criterion would correctly identify dyslexia for 39% of the dyslexic sample. It would also identify dyslexia for about 3% of the nondyslexic sample; although these are counted here as false positives, it must be noted that the “nondyslexic” sample had not been assessed formally, and may well include some people with undiagnosed dyslexia. As above, these figures must be interpreted with caution since the confidence intervals are large. However, the overall diagnostic odds of a QS classification of Mild or above rather than Borderline or below were significantly higher for dyslexics than nondyslexics (DOR=21, CI > 2.1).

Very few participants in either group received an adjusted QS classification of Moderate or above, so these data did not afford meaningful analyses of the diagnostic potential for this outcome to provide a ruling-in diagnosis.

QS Dyslexia – Processing and Speed of Processing

In the course of assessment activities over the past ten years with students of all ability levels at university, it repeatedly came to light that there was a strong correlation between dyslexia, difficulties with aspects of literacy and poor processing speed.

In fact the distinguishing marker between those dyslexics who could cope quite well and those who needed more substantial support was undue slowness in processing.

This clearly had an adverse impact on a whole range of study activities including completing written assignments, coping with reading lists, taking intelligible lecture notes and coping with written examinations to name but a few.

Individual aspects such as spelling and literacy did not appear to be the main defining limitations to their studies. Most of the practical aspects could be compensated for by the use of technology by, for example, using a spellchecker, or by employing strategies such as the use of “hooks” to improve the effectiveness of memory.

Given the complexities of a dyslexic adult’s learning difficulties, where many areas of weakness will have been compensated to varying degrees, the one element that consistently appears to limit performance is slow processing. If the time you need to assimilate information is too great, then inevitably there must be an impact on study.

Where the definition of dyslexia itself is so disputed and open to question, the one constant is that dyslexics have difficulty processing information at speed or when under pressure.

Sequencing is also a good indicator of processing speed, and is the one test that cannot be completed any faster than one's maximum capacity.

There have been many models for assessment, from the early medical approach through to phonological skills testing and the social models of dyslexia that place less emphasis on the value of testing. The traditional model used by educational psychologists and dyslexia specialists, which aims to establish a discrepancy between literacy acquisition and underlying ability, continues to be required by most educational establishments and forms the basis of the Quickscreen test.

Each of these models has its strengths and weaknesses and can be seen as more or less applicable to the individual being assessed depending on what the aims of the assessment may be.
One of the problems with many existing tests is that they fail to discriminate at university level, and a further aim in producing QuickScreen was to produce a program which would work as effectively for a person with few academic qualifications from the general adult population as for a person at postgraduate level.

Where many phonological tests have been successful with younger people, a better indicator of phonological performance at adult and university level has proved to be a reading and dictation exercise, which measures the ability to assimilate, process and record written data under timed conditions. These are important and relevant factors in the formalities of learning and study, replicating, as they do, many of the skills needed for efficiency in written literacy.

Much of the recent research in this area has highlighted the importance of processing language on a number of different levels.

There has been much debate over the years as to whether assessment should simply seek to “label” the condition, and in doing so help the individual to account for their perhaps disappointing academic performance and to take a different approach to what had previously been seen in a purely negative light.

The aim might also be to help an individual integrate better socially and in their workplace, or to improve academic performance by providing access to support and funding for necessary adjustments and technology to allow them to achieve their full potential.

In producing QuickScreen it was essential to establish a clear view of what the program was aiming to achieve. Perhaps the longest-standing aim has been to reduce the time and effort taken to complete an assessment, while having it provide detailed information on the individual's abilities and performance.

QuickScreen uses data from a battery of subtests to arrive at three conclusions: first, to establish whether there are indicators of dyslexia; second, to assess levels of literacy; and third, to highlight any difficulties with speed of processing. By detailed cross-referencing of results data, QuickScreen has made it possible to produce a computer-generated report covering all three areas.

Six subtests produce a profile of verbal, visual and vocabulary skills, together with the well-established underlying skills relating to dyslexia: memory, sequencing and processing.

Development of the program is now complete, as is the collation of the first results from trials carried out at university and through the BDA website, which provided a sample of university students, together with a sample from the general population, to compare with adults who are not dyslexic.

Early indications from the trial strongly suggest that QuickScreen does differentiate between dyslexics and non-dyslexics.

Additionally, it provides a comprehensive literacy and attainment profile. This enables tutors to compile a statement of individual needs for study support. They can either use the report to confirm the need for a full dyslexia assessment or add their comments and use it as relevant background evidence in establishing a case for support.

Tutors can carry out remote testing and can elect to withhold reports or send them to the student after adding their own comments.

The background colour bands reflect the “traffic light” markings used throughout the report to indicate levels of performance and highlight areas of concern.
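The traffic-light idea can be sketched as a simple mapping from a standardised score to a report colour band. The thresholds below are purely hypothetical, chosen only to illustrate the scheme, not QuickScreen's actual cut-offs:

```python
def traffic_light(score, amber=85, green=100):
    """Map a standardised subtest score to a report colour band.

    Thresholds are illustrative only:
      below `amber` -> "Red"   (area of concern)
      below `green` -> "Amber" (borderline)
      otherwise     -> "Green" (within normal limits)
    """
    if score < amber:
        return "Red"
    if score < green:
        return "Amber"
    return "Green"
```

In a real report each subtest result would be banded this way, so that areas of concern stand out at a glance.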
The data used in the production of these indicative graphs are sorted in ascending order to provide a clearer picture of the differentiation between groups. The green line represents the non-dyslexic control data, the blue line represents student dyslexics, and the red line dyslexics drawn from the general adult population.

These preliminary data are drawn from a total of 105 individual participants split into three groups of 35: 70 previously diagnosed as dyslexic and a non-dyslexic control group of 35.

Note that in all cases, as might be expected, there is a noticeable gap between the performance of dyslexics and non-dyslexics, which indicates that the program is capable of discriminating effectively to identify those with dyslexia in both the student and general adult populations.

Dyslexia Quotient Graph – Students

The following graph compares results for the dyslexia component of the diagnosis provided by the program from non-dyslexics, with results of dyslexics from universities.

results table 19

Dyslexia Quotient Graph – Adults from the general population

The following graph compares results for the dyslexia component of the diagnosis provided by the program from non-dyslexics, with results of dyslexic adults from the general population.

results table 20

Processing Graph – University Students

The following graph compares results for the processing component of the diagnosis provided by the program from non-dyslexics, with results of dyslexics from universities.

results table 21

Processing Graph – General Adult Population

The following graph compares results for the processing component of the diagnosis provided by the program from non-dyslexics, with results from dyslexics from the general adult population.

results table 22

Processing Speed Graph – University Students

The following graph compares results for the overall processing speed component of the diagnosis provided by the program from non-dyslexics, with results of dyslexics from universities.

results table 23

Processing Speed Graph – General Adult Population

The following graph compares results for the overall processing speed component of the diagnosis provided by the program from non-dyslexics, with results from dyslexics from the general adult population.

results table 24

Compiled by Pico Educational Systems Ltd in June 2016

Further research links

The background research for the original program gives a good deal of explanation of the rationale behind the questions in QuickScan. It is available in the original PhD thesis by Dr Dorota Zdzienski, now available online through the University of Leicester, entitled:

Dyslexia in Higher Education: An exploratory study of learning support, screening and diagnostic assessment

https://lra.le.ac.uk/handle/2381/9806

https://lra.le.ac.uk/bitstream/2381/9806/1/1998zdzienskidphd.pdf

There is also a book edited by Morag Hunter-Carsch who supervised the above study which is entitled:
‘Dyslexia & Effective learning in Secondary & Tertiary Education’ where there is a chapter about the program. This is still available and below is a link to Amazon’s listing for it:

http://www.amazon.co.uk/Dyslexia-Effective-Learning-Secondary-Education/dp/1861560168

With regard to QuickScan, in various research projects carried out over the years, users have quoted 95% accuracy, and this figure was also noted in Gavin Reid’s research paper on his study of young offenders, which makes a similar claim in a recognised research forum.

An Examination of the Relationship between
Dyslexia and Offending in Young People and
the Implications for the Training System

Jane Kirk and Gavin Reid
University of Edinburgh, UK

A screening study was undertaken which involved 50 young offenders, serving sentences of various lengths, all from the largest young offenders’ institution in Scotland. All 50 were screened for dyslexia and a number received a more detailed follow-up assessment. The results of the screening showed that 25 of the young offenders (50%) were dyslexic to some degree. This finding has implications for professionals, particularly in respect of follow-up assessment and support, and for politicians in relation to issues such as school experience, prison education and staff training. These issues are discussed here in relation to the background and results of the study.

INTRODUCTION

Although nearly a quarter of a century has passed since Critchley and Critchley (1978) highlighted the issue of dyslexia and crime, it is only very recently that some attempts have been made to identify the real extent of the problem. Today the relationship between dyslexia and anti-social or criminal behaviour is arguably one of the most controversial in the field of dyslexia. Some studies (see below) which have attracted significant media attention have claimed to detect a significantly higher incidence of dyslexia amongst those in custody compared to the general population. If this claim is valid, it is remarkable and worrying since it might be interpreted as meaning that there is a causal connection between dyslexia and social deviance. Since it is now acknowledged that dyslexia is, in some cases, partially influenced by heredity, it would be extremely serious if, in unfavourable environments, it predisposed people to criminal or anti-social behaviour.

In the STOP project (Davies and Byatt, 1998) there was an investigation in some depth of the possibilities of screening, assessment and training in relation to dyslexia and crime. Their study revealed that 31% (160 out of 517) had positive indicators of dyslexia. Similarly, the Dyspel project (Klein, 1998) designed a screening tool for dyslexia in the form of a questionnaire and also used other established screening tests such as the Bangor Dyslexia Test (Miles, 1997). It was found that 38% of the custodial sample showed indicators of dyslexia. In addition, a study by Morgan (1996) using the Dyspel procedures found 52% of those screened had strong indicators of dyslexia. All three of these UK studies are consistent with other studies in Sweden (Alm and Andersson, 1995) and the United States (Haigler et al., 1994), but have still generated some criticism. Rice (1998) suggests that there is no support for the claim that dyslexia is more prevalent among prisoners than among the general population and asserts that the prison studies which argue to the contrary are fundamentally flawed in terms of sample bias, inappropriate screening methods, and lack of clarity regarding the concept of dyslexia.

At first glance, dyslexia may seem to induce anti-social behaviour. The able school pupil, whose dyslexic condition is not diagnosed, or, having been diagnosed, receives insufficient or inappropriate support, might very well begin to feel devalued at school and turn to forms of deviant behaviour as a way of responding to the sense of low self-esteem induced by school and as a way of achieving recognition by peers. A study carried out at the University of Sunderland (Riddick et al., 1999) found that there was a significant difference in the perceived self-esteem within two groups of students in higher education. The first group, consisting of students with dyslexia, all demonstrated low self-esteem and comparatively high levels of anxiety. The control group, in contrast, were consistently more positive about their academic abilities (cf. also Reid and Kirk, 2000). If the difference is marked at this level of education, where the students with dyslexia have achieved a degree of success in gaining entry to higher education, how much more marked would the difference be if it were measured in a young offenders’ institution? Low self-esteem may lead to a pattern of anti-social or maladjusted behaviour, which could lead to more serious forms of deviant behaviour and ultimately to imprisonment. In that case dyslexia may be related, albeit indirectly, to offending behaviour.

The purpose of this study is two-fold: to conduct an investigation to identify the potential numbers in a young offenders’ institution who might display positive dyslexia indicators in a screening test and to examine the implications of the results for the training of relevant staff in the prison education system.

CHOICE OF MEASURE

It was decided to use QuickScan, a computerized self-assessment screening test for dyslexia in which the subjects are required to reply ‘yes’ or ‘no’ to the questions asked (Zdzienski, 1997). This test had been piloted with 2000 students across many subject areas from the universities of Kingston and Surrey. However, some of the vocabulary used to screen students in the south of England was judged to be inappropriate for young offenders in central Scotland. In preparation for the work to be carried out in the young offenders’ institution, the vocabulary in the questions was amended: changes were made and carefully checked so as to ensure that the sense of the question remained unaltered. One example of the linguistic difference is that the word task has different connotations in England and was replaced by the more familiar word job. All the changes were approved by the author of QuickScan.

An additional reason for selecting a computerized test was that we judged that the young offenders might respond more positively to this method of testing than to paper and pencil tests, with which they may have had negative experiences at school. The QuickScan Screening Test was thought to be non-threatening in that the questions do not focus on basic language, but rather on the processing of information. Examples of some of the questions are given below.

It was recognized that QuickScan could not offer an exact diagnosis of dyslexia. However, given the time restrictions imposed by the prison management and by the fact that the project was being televised, it was considered the most effective available tool for a study whose purpose was to find out how many young men in a sample of 50 manifested indicators of dyslexia.

The QuickScan screening test reports on 24 different performance categories, eight of which have been selected by the present authors as being particularly informative. The labels used in QuickScan to summarize the results of the different tests are open to question (e.g. ‘sequencing problems’, ‘laterality problems’), but it is the questions themselves, not the theory allegedly attached to the answers given, which is important. These questions are based on many years’ work with dyslexic adults, and this gives them a face validity that would be hard to dispute. For illustration purposes, we present one sample question from each category.

(sequencing) When making phone calls do you sometimes forget or confuse the numbers?
(memory) Do you often find it difficult to learn facts?
(family history) Do you know of anyone in your family who has dyslexia?
(general language) Is it usually easy for you to find the key points in an article or a piece of text?
(self-esteem) Are you usually a fairly confident person?
(concentration difficulties) Do you usually find it difficult to concentrate?
(organizational difficulties) Do you have difficulties organizing your ideas into an essay or report format?
(laterality difficulties) Do you sometimes confuse left and right?

CHOICE OF SAMPLE

The choice of the size of the sample group was largely determined by the prison management. They stipulated the amount of time they felt was sufficient to allow the screening to take place without completely disrupting the training and discipline within the institution. Given this time constraint, it was decided that it would be possible to have nine sessions of 30 min. The numbers taking the test were limited by the prison procedures: only six young people were allowed to take the test at any one time. This stipulation determined that our sample could at most have been 54 (nine sessions with six individuals present at each) and was in fact 50. Half an hour allowed time for group discussion about matters connected with anonymity, their exclusive entitlement to the results, and their right to stop participating at any time during the screening. A brief description was given of the test and of what it would measure. At this point, the young offenders were given the choice of whether or not to proceed: none of them refused to continue. Those taking part came from all sections of the prison: some were short-term prisoners while others were being detained for more serious crimes. Although the time-scale did not allow for individual interviews, informal discussion with the prisoners revealed histories of school-refusal, exclusions for disciplinary matters and, in many cases, a bitter dislike of school education.

Although the study was primarily a screening one, it was decided to select at random six of the young men who had demonstrated indicators of dyslexia for further testing. The aim of these full assessments, carried out by a chartered educational psychologist, was to determine whether the results of the screening tests correlated with the results from the full assessment. The tests used for this stage were the WAIS-R (Wechsler, 1981) and the WRAT-3 (Wilkinson, 1993).

RESULTS

Table 1 shows the score for each subject expressed as a percentage of the ‘dyslexia positive’ items in a given category, together with an overall figure (degree of dyslexia) and a classification in terms of ‘MM’ (most indicators), ‘M’ (many indicators), ‘S’ (some indicators), ‘BL’ (borderline) and ‘no indicators’ (symbolized by ‘0’). The final two columns report on the level of indicators of dyslexia as calculated by the programme, QuickScan. The programme makes its calculations in a somewhat complex way, with the result that it is possible for a person with a high score to have fewer indicators. The results recorded in the final two columns are not deducible from the eight columns that precede them in the present table.
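The per-category percentages in Table 1 can be understood as the proportion of “dyslexia positive” answers among the questions in each category. A minimal sketch of that scoring, with hypothetical answer keys (as the text notes, QuickScan's actual final classification is computed in a more complex way):

```python
def category_score(answers, positive_key):
    """Percentage of responses matching the 'dyslexia positive' key.

    answers:      list of 'yes'/'no' responses for one category
    positive_key: the response counted as a dyslexia indicator for each
                  question (hypothetical; real keys vary by item)
    """
    hits = sum(a == k for a, k in zip(answers, positive_key))
    return 100 * hits / len(answers)

# e.g. a two-question family-history category where 'yes' indicates risk
score = category_score(['yes', 'no'], ['yes', 'yes'])  # 50.0
```

Note that for some categories (e.g. self-esteem or general language) the “positive” response is ‘no’, which is why a per-question key is needed rather than simply counting ‘yes’ answers.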

The results may be summarized as follows:

Three of the subjects displayed most indicators
Three displayed many indicators
Seventeen displayed some indicators
Two displayed borderline indicators

Table 1: Percentage results of eight categories of the
QuickScan Screening Test together with final analysis

table

This gives a total of 25 young offenders out of 50 (50%) who showed at least borderline indicators of dyslexia.

It is, of course, no surprise that there are problems over the exact boundary between those who are and are not dyslexic, and for this reason two cases (nos. 29 and 30) have been entered as ‘0?’ The entry ‘skills’ opposite nos. 27, 28, 31, 32 and 33 indicates poor literacy skills without other ‘classic’ signs of dyslexia.

It can be seen from Table 1 that the change in response levels is identified from about row 25 to row 33. The results from the other 16 categories, not included in the table, demonstrate a similar pattern.

Detailed statistical analysis of the data was not considered appropriate, but simple inspection suggests that if we draw a boundary after case no. 25, then on all items the scores of subjects 1-25 are higher than those of subjects 26-50. This is particularly noticeable in the case of the ‘sequencing’ and ‘memory’ items, which are widely agreed to be indicators of dyslexia. Although there were only two questions relating to family history, the difference between the two groups is clear: four out of 25 from nos. 26 to 50 reported that they were aware of some history of dyslexia in their families compared with 19 out of 25 among nos. 1-25. Self-esteem was low in all who were found to have indicators of dyslexia.
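Although the authors judged detailed statistics inappropriate here, the family-history contrast (19 of 25 versus 4 of 25) is large enough that even a simple one-sided Fisher exact test would show it is very unlikely to arise by chance. A sketch using only Python's standard library:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    probability of observing `a` or more positives in the first group,
    given the table margins (hypergeometric tail sum)."""
    n = a + b + c + d          # total participants
    k = a + c                  # total positives across both groups
    row1 = a + b               # size of the first group
    denom = comb(n, row1)
    p = 0.0
    for x in range(a, min(row1, k) + 1):
        p += comb(k, x) * comb(n - k, row1 - x) / denom
    return p

# family history: 19/25 among cases 1-25 vs. 4/25 among cases 26-50
p = fisher_one_sided(19, 6, 4, 21)
```

With these counts the p-value is far below conventional significance levels, consistent with the authors' informal observation that the difference between the two groups is clear.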

Three of the 50 had been tested previously and found to be dyslexic. In each of these cases, QuickScan showed strong indicators of dyslexia.

In the follow-up diagnostic assessment, all six young offenders who were selected for full assessment revealed discrepant scores in processing speed and short term memory compared to verbal comprehension and verbal expression.

The findings of the present study are in broad agreement with those of the two larger projects, the STOP project (Davies and Byatt, 1998), where 31% of a near-random sample of probationers were found to be dyslexic, and the Dyspel project (Klein, 1998), where the figure was 38%. It is possible, however, that the higher percentage in the present study can in part be accounted for by the fact that the subjects were volunteers – since arguably dyslexics would be more likely than other prisoners to select themselves. However that may be, the percentage of dyslexics in all three studies is massively higher than even the highest estimates of dyslexia (say 10%) in the general population.

IMPLICATIONS

This study has three main implications. First, there is a need for much more decisive intervention in the early stages of education to identify and support those with dyslexia. If the condition goes unrecognized, the result is likely to be a low sense of self-worth, which in turn predisposes young people to offend. We suggest that the community has an obligation to mobilize resources and expertise to prevent, or at least slow, that drift towards criminal behaviour. That much is owed to the young people themselves, not to mention the financial saving to the community if dyslexia is recognized and treated.

Secondly, the study suggests a need for appropriate provision to support young people with dyslexia while they are in custodial care. Even if the incidence of dyslexia amongst offenders is considerably lower than the present study suggests, offenders should still be screened for dyslexia and proper support prescribed. In addition, when they return to the community they need help in making the necessary adjustments and in learning ways of responding to the many pressures to which they may be exposed.

Thirdly, there is a need for more detailed work on the most appropriate way of screening for dyslexia. The present study confirms earlier studies. However, it runs counter to the claims of Rice (1998), and this suggests there is a need to refine the ways in which we screen and attempt to diagnose dyslexia (cf. also Sanderson, 2000). Moreover, we need to devise a measure or measures that have the support of the whole of the research community.

It is encouraging that the Scottish Dyslexia Trust has agreed to fund a study which will seek to identify a suitable assessment measure from tools currently available and to quantify the extent of dyslexia in the prison population. It is hoped that this study will benefit both the prison authorities and the academic community.

REFERENCES

Alm, J. and Andersson, J. (1995) Reading and Writing Difficulties in Prisons in the County of Uppsala. The Dyslexia project, National Labour Market Board of Sweden at the Employability Institute of Uppsala.

Critchley, M. and Critchley, E.A. (1978) Dyslexia Defined. Heinemann: London.

Davies, K. and Byatt, J. (1998) Something Can Be Done! Shropshire STOP Project: Shrewsbury.

Haigler, K.O., Harlow, C., O’Connor, O. and Campbell, A. (1994) Literacy Behind Prison Walls: Profiles of the Prison Population from the National Adult Literacy Survey. U.S. Department of Education: Washington, DC.

Klein, C. (1998) Dyslexia and Offending. Dyspel: London.

Miles, T.R. (1997) The Bangor Dyslexia Test. Learning Development Aids: Cambridge.

Morgan, W. (1996) London Offender Study: Creating Criminals – Why Are So Many Criminals Dyslexic? University of London: unpublished dissertation.

Rice, M. (1998) Dyslexia and Crime: Some Notes on the Dyspel Claim. Institute of Criminology, University of Cambridge: unpublished.

Riddick, B., Sterling, C., Farmer, M. and Morgan, S. (1999) Self-esteem and anxiety in the educational histories of adult dyslexic students. Dyslexia: An International Journal of Research and Practice, 5(4), 227-248.

Reid, G. and Kirk, J. (2000) Dyslexia in Adults: Education and Employment. Wiley: Chichester.

Sanderson, A. (2000) Reflections on StudyScan. Dyslexia: An International Journal of Research and Practice, 6(4), 284-290.

Wechsler, D. (1981) Wechsler Adult Intelligence Scale-Revised (WAIS-R). Psychological Corporation: New York.

Wilkinson, G.S. (1993) Wide Range Achievement Test (WRAT-3). Wide Range Inc.: Delaware.

Zdzienski, D. (1997) QuickScan. Interactive Services Limited: Dublin.

Copyright 2001 John Wiley & Sons Ltd.
Originally Published in Dyslexia Journal 2001
Correspondence to: Jane Kirk, Disability Office, University of Edinburgh, 3 South College Street, Edinburgh EH8 9AA, UK. E-mail: jane.kirk@ed.ac.uk
