RESEARCH UPDATE

In Brief

We have just completed independent statistical research for the QuickScreen dyslexia test for 2018 – see full report details below. We again found strong statistical evidence (p-value < 0.0001) of an association between the dyslexia group (previously diagnosed vs control) and the current QuickScreen test indication.

This follows our previous research study results for the year 2016-2017 (conducted with several universities and participants via the BDA website), which also found ‘strong statistical evidence of an association between independent dyslexia diagnosis and the QuickScreen test indication.’ Scroll down for the full statistical analysis.

QuickScreen has been shown to identify those who are dyslexic, dyspraxic or have problems with processing. We are confident in the pragmatic value of this test and the insights into learning that it offers.

These research results indicate that where a candidate receives a mild, moderately high or strong dyslexia indication, they are very likely to have been correctly identified with dyslexia, and their report can therefore provide relevant background information when seeking appropriate support at university or in the workplace.

Non-dyslexic control group samples from the university studies in 2016-2017 show that no candidates receive moderately high or strong indicators, and under 5% receive results just over the borderline into the mild category. This is not surprising, given that there may well be some students within any control group who have slipped through the net, and it is in keeping with the generally accepted proportion of candidates in higher education who have dyslexia. A number of candidates receive a borderline result that requires further investigation, and the report will highlight any areas requiring attention.

Furthermore, there will also be a natural overlap with candidates who are dyslexic but perform at normal levels of literacy and speed of processing. Being largely compensated, their dyslexia may not be identified; where there is a previous history of difficulties, however, further diagnosis may be sought.

We are continuously working to improve the test experience and reporting system.

Signed: Dr Walker

Dated 07.01.2019

Dr. D. Walker – Dyslexia Consultant for Pico Education
B.A. (Hons) PGCE, Dip. SpLd. Dyslexia Institute PhD – Dyslexia in Higher Education – Leicester University
2, Carlton Court, Knole Rd. East Sussex TN40 1LG
Tel: 01424-254658
Email: Picoeducation@aol.com / info@qsdyslexiatest.com

Statistical Support for QuickScreen Dyslexia Test

Download the 2018 research report here: Dyslexia Report

Further Analysis

Executive Summary

As in the previous study, we again find strong statistical evidence (p-value < 0.0001) of an association between the dyslexia group (previously diagnosed vs control) and the current QuickScreen test indication.

Exploring the QuickScreen test’s diagnostic accuracy, we find a high specificity, with 92.4% (95% confidence interval [CI] = 84.0%, 96.6%) of those in the control group estimated to receive an indication of “None” or “Borderline”. Furthermore, we find evidence of a high Negative Predictive Value (NPV) for these indications, with 95.8% (95% CI = 94.7%, 96.7%) of control participants estimated to be predicted as “None” or “Borderline”.

We also note that whilst there may be subjects in the control group who show some symptoms linked with dyslexia (perhaps leading to a “Borderline” indication), when presenting the results of the test to participants, QuickScreen provides a caveat/explanation that in the absence of other key indicators (e.g., deficiencies in literacy levels) a dyslexia diagnosis is unlikely. Furthermore, it is recognised that though participants in the control group may not have previously received a formal dyslexia diagnosis, it is possible that this group may contain a small number of previously undiagnosed dyslexics. It is also acknowledged that those in the dyslexia diagnosed group may have received their diagnosis a number of years previously, and may now be well-compensated and therefore asymptomatic despite having a positive diagnosis.

For these reasons, and as dyslexia is a condition with a spectrum of symptoms and severities, we recognise that it may not necessarily be possible to achieve perfect diagnostic accuracy in this context. The graduated indications provided by QuickScreen reflect the non-binary nature of dyslexia, which is on a continuum of symptoms/severities, and provide a means of communicating this uncertainty to participants.

Considering the individual components of the QuickScreen test: We find strong statistical evidence (p-value <0.0001) of a difference in the distribution of the Dyslexia Quotient scores between the dyslexia diagnosed and control groups (with a median score of 5.5 vs 0.2 and a mean score of 5.4 vs 0.9, respectively). Similarly, the data provide strong statistical evidence of a difference in the distributions between the dyslexia diagnosed and control participants for the majority of the other QuickScreen test components (p-values between <0.0001 and 0.0007).

Furthermore, for each of the QuickScreen test components, there is statistical evidence (based on a univariate classification tree [CART] approach) of there being cut-off values that are informative in discriminating between the dyslexia diagnosed vs control participants. For example, 142.1 and 185.65 words per minute (wpm) were determined as discriminating cut-offs for reading speed, with 97.3% of participants in the ‘high’ indication group (<142.1 wpm) being dyslexia diagnosed, 73.0% in the ‘middle’ group (142.1 to 185.65 wpm), and 19.1% in the ‘low’ group (>185.65 wpm).

Considering the QuickScreen test components in combination, the data provide statistical evidence (via multiple variable CARTs) of the combination of the Reading Speed (wpm), Spelling Score (%), General Speed of Processing Score minus Literacy Score, and Sequencing Scaled Score QuickScreen test components discriminating between the dyslexia diagnosed and control participants. In the ‘high’ group, for example, all participants with a Reading Speed of less than 185.65 wpm, a Spelling Score of less than 76.25%, and a General Speed of Processing minus Literacy Score of less than 9.25 were in the dyslexia diagnosed group.
The results of these univariate and multiple variable CARTs may be useful in helping to inform the adjustments to the indications that we understand are currently being explored internally by Pico to refine the QuickScreen test.
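The ‘high’ branch described above can be sketched as a simple decision rule. The function and argument names below are ours for illustration; they are not part of QuickScreen or the fitted tree itself.

```python
# Sketch of the example 'high' branch of the multiple-variable tree:
# reading speed < 185.65 wpm, spelling score < 76.25%, and
# speed-of-processing-minus-literacy score < 9.25.
# In the study data, all participants in this branch were in the
# dyslexia diagnosed group.

def in_high_risk_branch(reading_wpm, spelling_pct, sop_minus_literacy):
    """True if a participant falls in the 'high' branch described in the text."""
    return (
        reading_wpm < 185.65
        and spelling_pct < 76.25
        and sop_minus_literacy < 9.25
    )

print(in_high_risk_branch(150.0, 60.0, 4.0))   # True
print(in_high_risk_branch(220.0, 90.0, 12.0))  # False
```

Note that, as with any tree-derived rule, these thresholds reflect the study sample and would need validation before operational use.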

Some areas of possible further exploration for the analysis are also presented in this report.

Introduction

Following an initial study in 2016, Select were again asked to help with the statistical analysis of Pico Educational Systems Ltd’s QuickScreen dyslexia test, on behalf of Dr Dorota Walker.

QuickScreen is an adult computerised screening test, developed with the aim of providing a reasonably in-depth assessment of dyslexia. The test delivers an indication of possible dyslexia without the need for users to undergo a costly professional assessment by an educational or occupational psychologist.

The focus of the previous study was to provide an initial assessment of the diagnostic accuracy of the QuickScreen dyslexia test, based on the test’s banded outcomes (None, Borderline, Mild, Moderate, or Strong). In this study, using new observational data compiled by Pico Educational Systems Ltd, the aim was to support the development of the test by providing evidence that might inform adjustments to the current QuickScreen indication category boundaries. The boundaries are currently defined with respect to a dyslexia quotient score, which is calculated by combining individual scores for various processes examined during the online assessment, such as visual, verbal, memory, reading, comprehension, etc.

In the previous study, we carried out an initial exploration of the speed of processing component results available from the QuickScreen test and found a clear association with dyslexia diagnosis. This initial analysis focussed on the categorical, banded speed of processing results (No Difficulties, Average, or Difficulties). In this study, we take this further, exploring the continuous speed of processing scores (scored from 0 to 20) and identifying the cut-off values that best discriminate between those with and without a previous dyslexia diagnosis, as well as extending this process to the other QuickScreen component assessments.

An essential step in the evaluation process of any diagnostic/screening test is to assess its accuracy via diagnostic accuracy measures. Our latest results are presented below, and it is pleasing to note that they are very positive.

We are, however, continuously working to improve the test experience and reporting system.

Data

The QuickScreen dyslexia test results were provided in two separate spreadsheets. The files had a consistent layout and were combined prior to analysis to create a single dataset.

The data received included one set of results for participants with a previous independent dyslexia diagnosis (a “dyslexia diagnosed” group) and a separate set of results for a group of “control” participants for whom no previous independent dyslexia diagnosis was available. The control group participants were all students from the psychology department of a leading UK university. The dyslexia diagnosed group included all participants who had completed the online QuickScreen test since January 2018 and had indicated that they had a previous positive dyslexia diagnosis. This included a combination of students from various universities, employees of public sector organisations and members of the general public (accessed via the British Dyslexia Association [BDA] website). Note: One participant in the control group spreadsheet was recorded as having previously been diagnosed with dyslexia and was therefore omitted from our analysis.

QuickScreen test results were available for analysis for 185 participants; 111 (60.0%) in the dyslexia diagnosed group and 74 (40.0%) in the control group. The QuickScreen test reports the overall possibility of dyslexia assessment outcome in terms of one of five possible indications: None, Borderline, Mild, Moderate, or Strong. Of the 185 participants included in the analysis, 54 (29.2%) received an indication of None; 56 (30.3%) an indication of Borderline; 31 (16.8%) Mild; 42 (22.7%) Moderate; and 2 (1.1%) Strong (as shown in the cross-tabulation in Table 1 below).

In addition to the test’s banded outcomes (None, Borderline, Mild, Moderate, or Strong), scores for various QuickScreen component assessments, of processes that are thought to be associated with dyslexia, were also provided in the data. For some components, as well as a “raw” mark on the original scale (such as words per minute [wpm]), a scaled score (between 0 and 20; calculated following standard procedures for these tests) and a percentile version with reference to national norms were also supplied. For a number of the component assessments, ‘Disparity’ and ‘Factor’ variables were also provided, capturing unevenness in performance between the various components and additional symptoms over and above a main indication of dyslexia, respectively.

Prior to analysis, we also calculated a combined result (categorical grouping) and score (continuous variable) for the General Speed of Processing and Literacy components. This allowed us to directly explore potential interactions between these two processes which, following discussion with Dr Walker, we understand are expected to be associated with dyslexia. When combining the scores, as high literacy scores appeared to be associated with the dyslexia diagnosed group whereas high speed of processing scores appeared to be associated with the control group, we computed the General Speed of Processing Score minus the Literacy Score to contrast these rather than simply sum them. Note: There were some sparse categories for the Literacy Result that were excluded from the analysis; these were “significantly below expectation”, “significantly less well developed than general ability”, and “somewhat less well developed than general ability”.
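As an illustration, the combined score can be computed as a per-participant difference. The column names below are hypothetical and are not the headings used in the spreadsheets received.

```python
# Sketch of the combined score described above: the General Speed of
# Processing score minus the Literacy score, computed per participant.
# Column names ("gsop_score", "literacy_score") are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gsop_score": [15.0, 9.5, 12.0],
    "literacy_score": [8.0, 14.0, 10.5],
})
# Contrast the two components (rather than summing them), as described above.
df["gsop_minus_literacy"] = df["gsop_score"] - df["literacy_score"]
print(df["gsop_minus_literacy"].tolist())  # [7.0, -4.5, 1.5]
```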

Finally, a dyslexia quotient variable (scored on a scale from 0 to 20), combining the other component assessments, was also provided in the datasets received. The current banded outcomes for the QuickScreen test (None, Borderline, Mild, Moderate, or Strong) are based on the dyslexia quotient score with, prior to September 2018, e.g., quotients of less than 0.5 being associated with an indication of “None”. Note: We understand that further work is also currently being undertaken internally on the QuickScreen indications to refine the cut-offs used in banding these, e.g., adjusting the boundaries so that quotients of less than 0.5 correspond with an indication of “None”. The aim of this work, which we hope the results of this study will feed into, is to help narrow down some of the categories, such as “Borderline” and “Mild”, as these were perceived to previously be too broad.

Table 2, below, provides a complete list of the QuickScreen component variables considered in the analysis.


There were a number of missing values in the data for some of the component variables, as detailed in Table 2 above. These were primarily due to two reasons: timed tests as part of the QuickScreen assessment have a ceiling such that if a participant takes unduly long on an item the result is recorded as missing; and the writing component of the QuickScreen test is optional and so those participants choosing not to complete this component will have missing values for the corresponding variables. These missing values were retained within the analysis where possible.

In the following sections, we describe the statistical methods applied to the data provided followed by the corresponding results of these analyses.

We start by describing the assessments applied to the current QuickScreen test banded outcome.

Diagnostic Accuracy Assessments

Methods

To assess the performance of the current QuickScreen test banded outcome, we produced a number of diagnostic accuracy assessment summaries, including the sensitivities, specificities, and predictive values associated with each outcome indication. A similar approach was applied to that used in our original project for QuickScreen (again assuming an estimated prevalence of dyslexia in the population of 10%, when calculating the predictive values). The method to calculate these values is described in our previous report (ref: PICO001) and therefore not repeated here.

We note that ‘Diagnostic Likelihood Ratios’, whilst not explicitly given in the results of this project, can also be calculated from the Sensitivity and Specificity measures provided as follows.

Likelihood Ratio Positive = Sensitivity / (1 − Specificity)
Likelihood Ratio Negative = (1 − Sensitivity) / Specificity
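The two formulas above can be expressed as a small helper function. The sensitivity and specificity values passed in below are illustrative placeholders, not figures from the report's tables.

```python
# Diagnostic likelihood ratios computed from sensitivity and specificity,
# following the two formulas given above. Input values are illustrative.

def likelihood_ratios(sensitivity, specificity):
    """Return (LR+, LR-) for a binary reading of the test outcome."""
    lr_positive = sensitivity / (1 - specificity)
    lr_negative = (1 - sensitivity) / specificity
    return lr_positive, lr_negative

lr_pos, lr_neg = likelihood_ratios(sensitivity=0.90, specificity=0.80)
print(round(lr_pos, 3), round(lr_neg, 3))  # 4.5 0.125
```

An LR+ well above 1 means a positive indication substantially raises the odds of dyslexia; an LR- well below 1 means a negative indication substantially lowers them.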

Results

A Fisher’s exact test (on the data in Table 1) finds strong statistical evidence (p-value < 0.0001) of an association between the dyslexia group and the current QuickScreen test indication.
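The report's test was run on the full 2×5 group-by-indication table (Table 1). For a Python sketch, note that scipy's `fisher_exact` supports only 2×2 tables, so we use a chi-square test of independence as an approximation here; the counts below are made-up for illustration and are not the report's data.

```python
# Approximate test of association between group (rows) and QuickScreen
# indication (columns) using a chi-square test of independence.
# scipy's fisher_exact handles only 2x2 tables; R's fisher.test (as used
# in the report) handles larger tables exactly. Counts are illustrative.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: control, dyslexia diagnosed.
# Columns: None, Borderline, Mild, Moderate, Strong (made-up counts).
table = np.array([
    [40, 25, 6, 3, 0],
    [12, 30, 25, 40, 4],
])
chi2, p, dof, expected = chi2_contingency(table)
print(dof)  # (2 - 1) * (5 - 1) = 4
```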

The proportion of participants without dyslexia who received each QuickScreen test result (i.e., sample specificity) and the proportion of participants with dyslexia who received each QuickScreen test result (i.e., sample sensitivity) are shown in Table 3.

For example, 59.5% of participants in the control group received a QuickScreen indication of “None”, and 37.8% of participants in the dyslexia diagnosed group received a QuickScreen indication of “Moderate”.

The proportion of participants in the control and dyslexia diagnosed groups in each QuickScreen test category are shown in Table 4. These are the raw sample predictive values, based on the observed sample prevalence, and do not reflect estimates for the population.

For example, 81.5% of those participants with a QuickScreen test result of “None” were in the control group, and 100% of those participants with a QuickScreen test result of “Moderate” or “Strong” were in the dyslexia diagnosed group.

The diagnostic accuracy measures for each QuickScreen test category, estimated using the adjusted method (with adjusted logit confidence intervals) and assuming a 10% prevalence of dyslexia, are shown in Table 5.


So, for example, where the QuickScreen test predicted “None”, we estimate that 98.1% (95% Confidence Interval [CI] = 96.7%, 98.9%) of those candidates will not be in the dyslexia diagnosed group (this is the ‘Negative Predictive Value [NPV]’). Of those in the control group, we estimate that the QuickScreen test will predict 59.0% (95% CI = 47.8%, 69.3%) of these candidates to be in the “None” group (Specificity).
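Prevalence-adjusted predictive values of this kind follow from Bayes' theorem. The sketch below shows the standard calculation, assuming the report's 10% population prevalence; the sensitivity and specificity inputs are illustrative placeholders, not the report's estimates.

```python
# Prevalence-adjusted predictive values via Bayes' theorem, assuming a
# 10% population prevalence of dyslexia as in the report. Here
# 'sensitivity' is P(test positive | dyslexia) and 'specificity' is
# P(test negative | no dyslexia), for a binary reading of the outcome.

def predictive_values(sensitivity, specificity, prevalence=0.10):
    """Return (PPV, NPV) adjusted to the assumed population prevalence."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    )
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence
    )
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.90, specificity=0.92)
print(round(ppv, 3), round(npv, 3))
```

This is why the NPV can be much higher than the raw in-sample predictive values: the study sample is 60% dyslexia diagnosed, whereas the assumed population prevalence is only 10%.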

Note that the figures for the “None” group do not take account of the borderline cases. In addition to considering each category in isolation, the measures for some combinations of the QuickScreen test result are also provided above. The table includes a row, for example, for “None or Borderline” together. In this case (including the borderlines), of those in the control group, we estimate that the QuickScreen test will predict 92.4% (95% CI = 84.0%, 96.6%) of these candidates to be in the “None or Borderline” groups (this is the test Specificity for these groups when considering them in combination). So by including the borderlines with the nones, we expect to detect a much higher proportion of the control group. The Negative Predictive Value (NPV) for the “None or Borderline” group also remains high, with 95.8% (95% CI = 94.7%, 96.7%) of control candidates estimated to be predicted as either in the “None” or “Borderline” group.

Including the borderline cases in this way helps to address the fact that there may be subjects in the control group who show some symptoms linked with dyslexia (perhaps leading to a “Borderline” indication). When presenting the results of the test to participants, QuickScreen provides a caveat/explanation that in the absence of other key indicators (e.g., deficiencies in literacy levels) a dyslexia diagnosis is unlikely. Furthermore, it is recognised that though participants in the control group may not have previously received a formal dyslexia diagnosis, it is possible that this group may contain a small number of previously undiagnosed dyslexics. Please see the Validity section of this report for further discussion of the potential for so-called classification bias, the implication of which is that it may not be possible to achieve perfect diagnostic accuracy in this case. The graduated indications provided by QuickScreen reflect the non-binary nature of dyslexia, which is on a continuum of symptoms/severities, and help communicate this uncertainty to participants.

In the following sections, we move on to describing the methods applied to the QuickScreen test component variables.

Exploratory Data Analysis

Methods

It is standard practice when undertaking a statistical analysis to begin with some exploratory analyses. In this case, we produced a boxplot1 and summary statistics (calculating the mean, standard deviation [SD], median and range) for each continuous QuickScreen test component variable (i.e., excluding the categorical variables: Literacy Result, General Speed of Processing Result, Literacy + General Speed of Processing Result), split by group. These summaries help to provide an indication as to which variables might be most informative in discriminating between those in the dyslexia diagnosed and control groups, by comparing the distributions of the scores observed between these groups. A statistical hypothesis test2 was also performed to assess the evidence available for a difference in the distributions between the groups for each QuickScreen component.

Note: Missing values were excluded from the corresponding summaries for the relevant variables.
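The hypothesis test used here (see footnote 2) is the two-sample Mann-Whitney U test. A minimal sketch with scipy follows; the two samples below are small made-up score vectors, not the study data.

```python
# Two-sample Mann-Whitney U test comparing score distributions between
# groups without assuming normality, as described in the Methods above.
# The samples are illustrative (completely separated on purpose).
from scipy.stats import mannwhitneyu

control = [0.1, 0.2, 0.3, 0.5, 0.8, 1.1]
diagnosed = [3.9, 4.8, 5.2, 5.6, 6.3, 7.0]

u_stat, p_value = mannwhitneyu(control, diagnosed, alternative="two-sided")
print(u_stat, p_value)
```

With complete separation of the two samples, the U statistic for the first sample is 0 and the exact two-sided p-value is small, providing strong evidence of a difference in distributions.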

Results

The boxplots, comparing the distributions of the QuickScreen test component variables between the dyslexia diagnosed and control groups are presented in an Appendix to this report. The summary statistics by dyslexia diagnosed versus control group are presented in Table 6 below.

1 The box includes the upper and lower quartiles and therefore the middle 50% of the data, and the horizontal line within the box is the median (https://select-statistics.co.uk/resources/glossary-page/#median). The whiskers extend to 1.5 times the interquartile range (https://select-statistics.co.uk/resources/glossary-page/#interquartile-range-iqr) and data points outside this range are marked as dots.

2  A non-parametric, two-sample Mann–Whitney U test was applied, which does not rely on the assumption of normally distributed data, as for some variables there was evidence of a deviation from normality in the corresponding boxplots.

From these summaries, we note for example that there appears to be clear evidence of:

Lower General Speed of Processing Scores being associated with the dyslexia diagnosed group compared with the control group.

  • On average (based on the mean values), the dyslexia diagnosed group participants achieve a score of 10.2, whereas the control group participants achieve a score of 15.6.

Lower Memory Score (%) results being associated with the dyslexia diagnosed group compared with the control group.

  • On average (based on the mean values), the dyslexia diagnosed group participants achieve a score of 40.2%, whereas the control group participants achieve a score of 63.7%.

Lower Reading Speed (wpm) results being associated with the dyslexia diagnosed group compared with the control group.

  • On average (based on the mean values), the dyslexia diagnosed group participants achieve 137.2 wpm, whereas the control group participants achieve 232.8 wpm.

Lower Spelling Score (%) results being associated with the dyslexia diagnosed group compared with the control group.

  • On average (based on the mean values), the dyslexia diagnosed group participants achieve a score of 48.0%, whereas the control group participants achieve a score of 82.8%.

Higher Dyslexia Quotient scores being associated with the dyslexia diagnosed group compared with the control group.

  • On average (based on the mean values), the dyslexia diagnosed group participants achieve a score of 5.4, whereas the control group participants achieve a score of 0.9.

Furthermore, based on the Mann-Whitney U test (Table 6), we find strong statistical evidence (p<0.0001) of a difference in the distribution of the Dyslexia Quotient scores between the dyslexia diagnosed and control groups (with a median score of 5.5 versus 0.2, respectively).

Similarly, the data provide strong statistical evidence of a difference in the distributions between the dyslexia diagnosed and control participants for the following QuickScreen test components: Literacy Score, General Speed of Processing Score, General Speed of Processing Score minus Literacy Score, Spelling Score (%), Spelling Scaled Score, Reading Speed (wpm), Reading Speed Scaled Score, Memory Scaled Score, Memory Score (%), Memory Span Scaled Score, Memory Span (words), Sequencing Scaled Score, Sequencing Score (%), Visual Score (%), Visual Scaled Score, Verbal Score (%), Verbal Scaled Score, Vocabulary Score (%), Vocabulary Scaled Score, Processing Scaled Score, Typing Speed (wpm), Typing Speed Scaled Score, Accuracy Score (%), Accuracy Scaled Score, Punctuation Score (%), Punctuation Scaled Score, Ability Score (%), Ability Scaled Score, Memory Disparity, Sequencing Disparity, Processing Disparity, Processing Speed Disparity, Reading Speed Disparity, Spelling Factor, Writing/Typing Speed Factor (p<0.0001, for each of the preceding variables), Visual/Verbal Factor (p=0.0002), Processing Score (%) (p=0.0005), Comprehension Score (%) (p=0.0005), and Comprehension Scaled Score (p=0.0007).

CART Modelling

Following the exploratory analysis described above, we applied some more formal modelling to further explore the association between each QuickScreen test component variable and the participants’ dyslexia group.

Univariate Models

Methods

For each QuickScreen component variable, individually, we applied a tree-based modelling3 approach (also known as “CART” [Classification And Regression Tree]). We fit a classification tree with the dyslexia group (control versus dyslexia diagnosed) as the outcome and each component considered as the only explanatory variable, one-by-one. This helps to identify the thresholds/groups of values for each explanatory variable that are associated with the outcome. A final set of tree groups is produced, corresponding with distinct values of the explanatory variable, each with an associated proportion/probability of being dyslexia diagnosed, which are as different as possible between the groups.

The classification tree is fit using a process called binary recursive partitioning. The algorithm starts with all of the participants at the top of the tree, then as we progress down to the first “branch”, we identify the threshold, i.e., cut-off, in the QuickScreen test component variable under consideration that is the ‘best’ at discriminating between the dyslexia diagnosed and control group participants (splitting these as far as possible into separate groups). The participants are then broken down into two splits based upon the differing values of the QuickScreen variable (compared with the threshold identified), with one group going down the left-hand branch and the other the right-hand branch. The classification tree algorithm checks to see that the difference in the proportion of dyslexia diagnosed vs control participants between these groups is sufficiently discriminatory (based on a stopping rule with given tuning parameters) and, if it is, we retain these new branches. At the next step, for each of the new branches, we then consider whether they can be further split into subgroups so that there is a difference in the proportion of dyslexia diagnosed vs control participants, with the most discriminating split chosen as the next branch, and so on.

The algorithm measures the value of a potential split in terms of the ‘deviance’. This is based on viewing the tree as a probability model and considering the likelihood of observing the data given the model that we are proposing. Introducing an additional split will reduce the deviance, and the split that results in the greatest reduction in the deviance is considered the optimal choice.
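The first partitioning step described above can be sketched directly: scan candidate cut-offs for one component variable and keep the split that most reduces the binomial deviance. This is a toy illustration with made-up data, not the report's implementation (which used R's tree package).

```python
# Minimal sketch of one step of binary recursive partitioning: find the
# single cut-off in one explanatory variable that gives the greatest
# reduction in binomial deviance, as described in the text.
import math

def deviance(labels):
    """Binomial deviance of a node: -2 * log-likelihood of its class proportion."""
    n = len(labels)
    if n == 0:
        return 0.0
    ones = sum(labels)
    p = ones / n
    dev = 0.0
    if ones > 0:
        dev -= 2 * ones * math.log(p)
    if n - ones > 0:
        dev -= 2 * (n - ones) * math.log(1 - p)
    return dev

def best_split(x, y):
    """Return (cut_off, deviance_reduction) for the best single split on x."""
    parent = deviance(y)
    best = (None, 0.0)
    for cut in sorted(set(x))[:-1]:  # candidate thresholds
        left = [yi for xi, yi in zip(x, y) if xi <= cut]
        right = [yi for xi, yi in zip(x, y) if xi > cut]
        reduction = parent - (deviance(left) + deviance(right))
        if reduction > best[1]:
            best = (cut, reduction)
    return best

# Illustrative reading speeds (wpm) and labels (1 = dyslexia diagnosed).
x = [120, 130, 150, 160, 190, 210, 230, 250]
y = [1, 1, 1, 1, 0, 0, 0, 0]
print(best_split(x, y))  # best cut is 160 here (a perfect split)
```

A real CART implementation also applies a stopping rule and minimum group sizes (the analysis used a minimum of 20 participants per final group), and recurses on each new branch.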

To help avoid spurious results that may occur due to small numbers of participants showing an apparent effect by chance, we included a condition in the tree algorithm so that the smallest number of participants that could contribute to a final group (at the bottom of the tree) was 20.

For each univariate classification tree, we produced a table summarising the results of the corresponding tree, giving details of the splits and the associated proportions of dyslexia diagnosed vs control candidates in the final groups formed. The tables also detail the number (and proportion) of participants in each final group out of all of the subjects included in the study.

3 T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd Edition). Springer, 2009. Downloadable from http://web.stanford.edu/~hastie/ElemStatLearn. Cited on page 305; Section 9.2.

Furthermore, we provide summaries of the “Residual Mean Deviance” and “Misclassification Rate” associated with each tree. These can be used to compare the predictive performance of the trees to understand which may be ‘best’ at discriminating between the dyslexia diagnosed and control participants. The misclassification rate is calculated by predicting/classifying all participants within a tree group with a dyslexia diagnosed proportion higher than 50% as being in the dyslexia group, and those with a proportion less than 50% as being in the control group. The proportion of candidates misclassified (as being in the wrong group according to the observed data) then gives the misclassification rate. The residual mean deviance is simply the average deviance across the final groups in the tree; the lower the value, the better the model performance.
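The two tree summaries just described can be sketched as follows; the final-group counts below are made-up for illustration, not figures from Table 7.

```python
# Sketch of the two tree summaries described above: majority-vote
# misclassification rate and the average (residual mean) deviance over
# the final tree groups. Group counts are illustrative.
import math

def tree_summaries(groups):
    """groups: list of (n_diagnosed, n_control) tuples, one per final group.
    Returns (misclassification_rate, residual_mean_deviance)."""
    total = sum(d + c for d, c in groups)
    misclassified = 0
    total_deviance = 0.0
    for d, c in groups:
        n = d + c
        # Predict the majority class; the minority count is misclassified.
        misclassified += c if d > c else d
        p = d / n
        if 0 < p < 1:
            total_deviance -= 2 * (d * math.log(p) + c * math.log(1 - p))
    return misclassified / total, total_deviance / len(groups)

rate, rmd = tree_summaries([(71, 2), (27, 10), (13, 62)])
print(round(rate, 3), round(rmd, 2))  # 0.135 43.56
```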

This tree-based approach helps to identify the cut-off values, looking at each variable in isolation, that best discriminate between those with and without a previous dyslexia diagnosis. The thresholds indicated, which have been identified in an objective manner, could then help to potentially redefine the current QuickScreen bandings, with a view to narrowing down the classifications and hopefully improving the predictive accuracy of the QuickScreen test.

Note: Missing values were excluded from the corresponding univariate tree for the relevant variables. All analyses were performed in the statistical software package R version 3.4.3 (2017-11-30)4. The tree package5 was used to implement the classification tree models.

Results

Summaries of the results of the univariate trees for each of the QuickScreen test component variables are shown in Table 7 overleaf. The table is ordered by the tree misclassification rates, where the lowest rate (closest to zero) indicates the ‘best’ performance. Based on this metric, we find that the following variables are most informative:

i. Reading Speed (wpm),
ii. Literacy + General Speed of Processing Result,
iii. General Speed of Processing Result,
iv. Processing Scaled Score,
v. General Speed of Processing Score – Literacy Score,
vi. Reading Speed Scaled Score,
etc.

4 R Core Team (2017). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.
5 Brian Ripley (2016). tree: Classification and Regression Trees. R package version 1.0-37. https://CRAN.R-project.org/package=tree

A tree diagram, visualising these results for the reading speed univariate CART model (as presented in Table 7), is shown in Figure 1 below, for illustration.

We see, for example, that the participants are split into those with a reading speed of:

  • less than 142.1 words per minute (wpm), for which 97.26% of the candidates in this group are dyslexia diagnosed (labelled ‘high’);
  • between 142.1 and 185.65 wpm, for which 72.97% of the candidates in this group are dyslexia diagnosed (labelled ‘middle’); and
  • greater than 185.65 wpm, for which 19.12% of the candidates in this group are dyslexia diagnosed (labelled ‘low’).

So, the data provide statistical evidence of reading speed cut-offs of 142.1 and 185.65 wpm providing discrimination between participants in the dyslexia diagnosed vs control groups.
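These univariate reading-speed cut-offs can be expressed as a simple banding function; the band labels follow the text, while the function name is ours for illustration.

```python
# Banding a reading speed using the univariate CART cut-offs of
# 142.1 and 185.65 wpm reported above. The percentages in the comments
# are the in-sample dyslexia diagnosed proportions from the study.

def reading_speed_band(wpm):
    """Return the CART-derived band for a reading speed in words per minute."""
    if wpm < 142.1:
        return "high"    # 97.26% dyslexia diagnosed in this group
    if wpm <= 185.65:
        return "middle"  # 72.97%
    return "low"         # 19.12%

print([reading_speed_band(w) for w in (120, 150, 200)])  # ['high', 'middle', 'low']
```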

For the processing scaled score QuickScreen component, there is statistical evidence that cut-offs of 10.5, 11.5 and 13 provide a means of discriminating between participants in the dyslexia groups (diagnosed vs control).

Similar results were found for the other QuickScreen components, so there was statistical evidence that cut-offs (as shown in Table 7) in each of these variables were informative in discriminating between the dyslexia diagnosed and control participants.

This analysis should be helpful in indicating which scores, associated with each variable, might be useful potential cut-off points for informing a dyslexia diagnosis prediction. For example, we find evidence that candidates with a reading speed of less than 142.1 wpm are very likely to be in the dyslexia diagnosis group (as described above). These results may be useful in informing the refinements to the QuickScreen test indications that we understand are currently being explored by Pico.

We note that for the dyslexia quotient score (which is currently used to define the QuickScreen test indication boundaries), cut-offs of less than 0.75, 0.75-2.25, 2.25-4.25, and greater than 4.25 are identified, corresponding with 18.5%, 44.7%, 70.0% and 100% of the participants in each group being in the dyslexia diagnosed group (as opposed to the control group), respectively (as shown in Table 7). So, there is statistical evidence of dyslexia quotient score cut-offs of 0.75, 2.25 and 4.25 discriminating between the dyslexia diagnosed vs control groups.

Multiple Variable Models

Methods

CART

In the univariate trees described above, each variable is considered individually in isolation, whereas we recognise that some of the variables will likely be capturing/explaining similar information and that combinations of variables may interact, i.e., their combined effects may be greater or less than the sum of their individual effects. Therefore, to understand the combined effects of the QuickScreen test component variables, we explored fitting multiple variable classification tree models to the data, including combinations of the QuickScreen components as explanatory variables in one tree. This multiple variable approach also helps to identify which variables are the most important/discriminatory (out of the large number of QuickScreen test components recorded). Those variables that are chosen by the tree algorithm as the preferred ones to split on (and those that are split on higher up the tree) are those found to be the most informative.
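The core step of the tree-fitting described above can be illustrated with a minimal single-split sketch. Gini impurity is used here as the splitting criterion purely for simplicity (a common CART choice; the fitted models in this report are summarised by deviance), and the data are hypothetical.

```python
# Minimal sketch of the core CART step: choose the cut-off on one
# variable that best separates two classes, scored by Gini impurity.
# This illustrates the method only; the data below are hypothetical.

def gini(labels):
    """Gini impurity of a list of 0/1 class labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(values, labels):
    """Find the cut-off on one variable minimising weighted Gini impurity."""
    best = (None, float("inf"))
    for cut in sorted(set(values)):
        left = [y for v, y in zip(values, labels) if v <= cut]
        right = [y for v, y in zip(values, labels) if v > cut]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best[1]:
            best = (cut, score)
    return best

# Hypothetical reading speeds (wpm) and outcomes (1 = dyslexia diagnosed).
wpm = [120, 130, 150, 180, 200, 220, 240, 260]
diagnosed = [1, 1, 1, 1, 0, 0, 0, 0]
cut, score = best_split(wpm, diagnosed)  # cut == 180 here (perfect separation)
```

A multiple variable tree repeats this search across all offered variables at each node, which is how the algorithm reveals which variables are most discriminatory.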

This statistical framework allows us to consider different ways of combining the individual scores to create an overall assessment of the likelihood of being in the dyslexia diagnosed group. The aim is to improve upon the performance of the current approach to combining the components, corresponding with the current QuickScreen test indication bandings.

Following discussion with Dr Walker, we agreed to consider three different multiple variable trees, where the following subsets of variables were offered as explanatory variables to each.
Tree 1: Speed of processing, spelling and reading speed (all versions)
Tree 2: Speed of processing, spelling and reading speed (scaled scores only)
Tree 3: All QuickScreen test component variables (corresponding with the 46 variables listed in Table 2)

The first two trees above focus on three of the QuickScreen component processes understood to be linked with dyslexia, with the former considering all versions of these variables and the latter only the scaled scores (for comparability). The final model is more flexible, offering all of the QuickScreen test component variables to the tree.

These results should hopefully be useful in actively improving how the scores provided by the test can be best used to generate a dyslexia assessment, as part of the process of refining the QuickScreen test indications that we understand is already underway internally at Pico.

Note: Participants with missing values are included where possible in the multiple variable CART models (i.e., up to the point at which the variable that is split upon in the tree is the one that contains the missing values). This way, these participants still contribute to the evidence of which variables are most discriminating (between the dyslexia diagnosed and control groups) where possible, so we retain as much data as possible to inform the analysis. These participants with missing values are also then included in the overall CART summaries, i.e., Residual Mean Deviance and Misclassification Rate results.

Out-of-sample Model Performance

To further explore the predictive performance of the multiple variable classification trees, we produced a similar set of diagnostic accuracy summaries as described above (for the QuickScreen test banded indications) for each model (again using an estimated prevalence of dyslexia of 10% in calculating the predictive values). These summaries help to demonstrate the ability of the combination of these QuickScreen test components to distinguish between the dyslexia diagnosed and control groups (over and above the misclassification and mean residual deviance summaries already noted).

The summaries are calculated by labelling the CART model predictions, associated with the final groups at the bottom of each tree, as ‘low’, ‘midlow’, ‘middle’, ‘midhigh’, and ‘high’, in order of the lowest to highest predicted probability of being in the dyslexia group, respectively. These groups are then used in place of the QuickScreen test banded indications as the predicted diagnosis for comparing with the observed dyslexia diagnosed vs control group outcome.
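The labelling step described above can be sketched as follows; the leaf probabilities are hypothetical, and the sketch assumes five final groups, as in the trees here.

```python
# Sketch of the labelling step: order a tree's final (leaf) groups by
# their predicted probability of dyslexia and assign the five band names
# in that order. The leaf probabilities below are hypothetical.

LABELS = ["low", "midlow", "middle", "midhigh", "high"]

def label_leaves(leaf_probs):
    """Map each leaf id to a band label, lowest to highest probability."""
    ordered = sorted(leaf_probs, key=leaf_probs.get)
    return {leaf: LABELS[i] for i, leaf in enumerate(ordered)}

leaves = {"A": 0.05, "B": 0.30, "C": 0.55, "D": 0.80, "E": 0.98}
bands = label_leaves(leaves)  # "A" -> "low", ..., "E" -> "high"
```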

However, if we were to calculate these performance assessments using the complete set of data available, which were the observations used to construct the CART trees, these figures may potentially overestimate how well the model will perform in practice, i.e., for subsequent participants. This is because there is the potential for the CART models to be “overfitted” to the data used to train them.

Therefore, to get a more reliable estimate of the performance of the CART models for use in practice, we can explore what’s called the “out-of-sample” model performance. This means that we produce the performance assessments for data not used to train the model. Ideally, we would use a completely new and independent dataset to do this. However, given the available data, we can alternatively use an approach called ‘cross-validation’ to similar effect.
We randomly sample a subset of the data to obtain a new training dataset which can be used to refit the CART model. The remaining data not included in the training dataset is then our “hold-out” (a.k.a. out-of-sample) test dataset which we can use to independently assess the performance of the refitted model. We repeat this process lots of times (1,000 times for each model) for different random training (and corresponding test) samples, and then explore the distribution/average of the performance results to derive a more accurate estimate of the model performance. Note: We use a consistent method to fit the training trees, but they are pruned to ensure that they have the same number of final groups as the final models, for consistency.

To help ensure that the trees fit to the training datasets reflect the final model as closely as possible, we use bootstrapping, i.e., sampling with replacement. This allows us to sample a training dataset of the same size as the complete dataset used to fit the final tree (where some observations will be included in the training data multiple times). We recognise however that these bootstrapped, training data samples will not have the same coverage of the QuickScreen test component variables as the full dataset, as not all of the original participants (with varying characteristics) will be included. Therefore the out-of-sample model performance summaries may be somewhat conservative, i.e., underestimating the performance of the final trees in practice.

Results

Tree 1: Speed of processing, spelling and reading speed (all versions)

The results of the multiple variable CART model for the speed of processing, spelling and reading speed

(all versions) QuickScreen component variables are presented in Table 8 and Figure 2 below.

We find that both the reading speed and spelling score components add to one another in discriminating between those with a dyslexia diagnosis and those in the control group, so they are not necessarily each explaining the same underlying effect (as both are retained within the model).

Having offered both versions of each of these variables (i.e., score and result for General Speed of Processing, percentile and scaled score for Spelling Score, and wpm and scaled score for Reading Speed), we find that the reading speed wpm and spelling score percentile versions (and corresponding splits as shown in the table above) are most informative. The top two (labelled ‘High’ and ‘MidHigh’) and bottom (labelled ‘Low’) final tree groups, for example, are defined as:
‘High’: A reading speed slower than 185.65 wpm and a spelling score percentile less than 61.25%; for which 100% of the participants in this group are dyslexia diagnosed (as opposed to being in the control group).
‘MidHigh’: A reading speed slower than 185.65 wpm and a spelling score percentile between 61.25% and 76.25%; for which 90.5% of the participants in this group are dyslexia diagnosed (as opposed to being in the control group).
‘Low’: A reading speed faster than 185.65 wpm and a spelling score percentile greater than 83.75%; for which 92.1% of the participants in this group are in the control group (i.e., not dyslexia diagnosed).

The out-of-sample performance assessments for the CART model for speed of processing, spelling and reading speed (all versions) are shown in Table 9.


So, where the CART model predicts a ‘Low’ or ‘MidLow’ probability of being in the dyslexia diagnosed group (corresponding with a reading speed of > 185.65 wpm), we estimate that 97.6% of those candidates will not be in the dyslexia diagnosed group (this is the “Negative Predictive Value [NPV]”). Of those in the control group, we estimate that the model will predict 65.9% of these candidates to be in the ‘Low’ or ‘MidLow’ group (this is the Specificity).
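The NPV quoted above follows from applying Bayes' theorem to the sensitivity, specificity and assumed 10% prevalence. As a sketch of that calculation (the sensitivity value below is hypothetical, chosen only for illustration; the specificity is the 65.9% quoted above):

```python
# How a predictive value is obtained from sensitivity, specificity and
# an assumed prevalence (here 10%, as used in the report).

def negative_predictive_value(sensitivity, specificity, prevalence):
    """P(no dyslexia | negative prediction), via Bayes' theorem."""
    true_negatives = specificity * (1 - prevalence)
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

npv = negative_predictive_value(sensitivity=0.85,  # hypothetical value
                                specificity=0.659,
                                prevalence=0.10)
# npv is roughly 0.975 with these inputs
```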

Tree 2: Speed of processing, spelling and reading speed (scaled scores only)

The results of the multiple variable CART model for the speed of processing, spelling and reading speed (scaled scores only) QuickScreen component variables are presented in Table 10 and Figure 3 below.

Here, for example, the first group (labelled ‘High’) is defined by participants with:

A general speed of processing score of less than 11.5 and a spelling scaled score of less than 13.

In this group, 98.5% of the participants are in the dyslexia diagnosed group (as opposed to in the control group).

The corresponding out-of-sample performance assessments for the CART model for speed of processing, spelling and reading speed (scaled scores only) are shown in Table 11.

So, where the CART model predicts a ‘Low’ probability of being in the dyslexia diagnosed group (corresponding with a general speed of processing score of greater than 14.5 and a spelling scaled score of greater than 13), we estimate that 97.6% of those candidates will not be in the dyslexia diagnosed group (this is the “Negative Predictive Value [NPV]”). Of those in the control group, we estimate that the model will predict 41.3% of these candidates to be in the ‘Low’ group (this is the Specificity).

Tree 3: All QuickScreen test component variables

The results of the multiple variable CART model for all of the QuickScreen test component variables are presented in Table 12 and Figure 4 below.


In this case, we find that the General Speed of Processing Score minus the Literacy Score variable, and the Sequencing Scaled Score variable, are also informative (in addition to the Reading Speed and Spelling Score variables) in discriminating between the dyslexia diagnosed and control groups.

For example, the first group (labelled ‘High’) is defined by participants with:

A reading speed of less than 185.65 wpm, a spelling score of less than 76.25% and a general speed of processing minus literacy score of less than 9.25.

In this group, 100% of the candidates are dyslexia diagnosed (as opposed to in the control group).

This tree also appears to be promising in picking out the control candidates. For example, the ‘Low’ group, defined by a reading speed of greater than 185.65 wpm and a sequencing scaled score of greater than 10.5, is made up of 96.9% control candidates.
So, there is statistical evidence that the combination of the Reading Speed (wpm), Spelling Score (%), General Speed of Processing Score minus Literacy Score, and Sequencing Scaled Score QuickScreen test components are informative in discriminating between the dyslexia diagnosed and control participants.

The out-of-sample performance assessments for the CART model for all QuickScreen test component variables are shown in Table 13.

So, where the CART model predicts a ‘Low’ probability of being in the dyslexia diagnosed group (corresponding with a reading speed of > 185.65 wpm and a Sequencing Scaled Score of > 10.5), we estimate that 97.8% of those candidates will not be in the dyslexia diagnosed group (this is the “Negative Predictive Value [NPV]”). Of those in the control group, we estimate that the model will predict 43.9% of these candidates to be in the ‘Low’ group (this is the Specificity).

Validity

Similar to our original study, it should be noted when interpreting the results of this analysis that their validity depends on the applicability of the sample participants to the population of interest. This includes the spectrum of severity of dyslexia in the sample. Where this might not reflect the target population, a study is sometimes said to suffer from “spectrum bias”. We note, for example, that the ‘control’ group are all students from a leading university, whereas the ‘test’ group (with a previous dyslexia diagnosis) are a mixture of students and members of the public.

The potential for other biases such as classification bias, where misclassification of participants in their dyslexia diagnosed group may have occurred, should also be considered. We note that this is particularly relevant in this study where it is recognised that the control group participants may include a small number of undiagnosed dyslexics. Therefore, where QuickScreen may report a positive albeit weak indication of dyslexia (not “None”, for example) for a participant in the control group, it is understood that this subject could in fact have undiagnosed dyslexia. It is also acknowledged that those in the dyslexia diagnosed group may have received their diagnosis a number of years previously, and may now potentially be well-compensated and therefore asymptomatic despite having a positive diagnosis. The graduated indications provided by QuickScreen reflect this non-binary nature of dyslexia which is on a continuum of symptoms/severities.

A more formal, prospective cohort study may provide a more reliable assessment of the diagnostic test accuracy, by helping to eliminate potential sources of bias such as those described above. Though, we recognise that due to the challenges of obtaining a reliable, independent diagnosis and as dyslexia is a condition with a spectrum of severities, it may not necessarily be possible to achieve perfect diagnostic accuracy in this context.

Potential Further Work

In this section, we note some possible extensions that could be made to the analyses conducted to date to further support the ongoing refinement of the QuickScreen test indications that we understand is being undertaken internally at Pico.

An alternative approach to the multiple variable CART modelling could be explored. For example, a logistic regression model could be applied to predict the dyslexia group based on the QuickScreen test component variables.

Similar to the classification tree models already explored, the logistic regression model would give us a formula that could then be applied going forwards to obtain the predicted probability for new participants. This would provide a different way of combining the individual scores to create an overall assessment of the likelihood of dyslexia. A logistic regression model would estimate effects on the probability of being a dyslexia diagnosed vs control participant for linear changes in the QuickScreen test component variables. So, rather than grouping scores into splits with different effects, this assumes that each unit change in a score (increasing it by one), say, has a given effect on the odds of being in the dyslexia diagnosed group.
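As a sketch of the form such a model would take (all coefficients below are hypothetical, chosen only to illustrate the log-odds interpretation):

```python
# Sketch of the logistic regression interpretation described above: each
# unit change in a score multiplies the odds of being in the dyslexia
# diagnosed group by exp(coefficient). All values here are hypothetical.

import math

def predicted_probability(intercept, coefs, values):
    """Logistic model: p = 1 / (1 + exp(-(intercept + sum(b_i * x_i))))."""
    log_odds = intercept + sum(b * x for b, x in zip(coefs, values))
    return 1 / (1 + math.exp(-log_odds))

# Hypothetical model on reading speed (wpm) and spelling percentile.
intercept = 9.0
coefs = [-0.03, -0.04]                   # per-unit effects on the log-odds
odds_ratio_per_wpm = math.exp(coefs[0])  # each extra wpm scales the odds by this

p = predicted_probability(intercept, coefs, [150.0, 50.0])
```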

This is a different approach from the CART modelling, where one method is not necessarily better or worse than the other. Though, the classification trees arguably provide more intuitively interpretable results. We could potentially explore both modelling approaches and compare their performance to see which might work best (i.e., give the most accurate predictions) in this context.

Based on the results of the analyses conducted in this study, consideration could be given to amending the current QuickScreen test indication category boundaries. For example, where the CART models appear to discriminate well between the dyslexia diagnosed and control groups, inclusion of the corresponding QuickScreen component variables (and/or refinements to the thresholds currently used) could be considered, adjusting the current process for calculating the dyslexia quotients and resulting QuickScreen dyslexia indications. The Negative (i.e., Control) Predictive Value for the multiple variable CART model for all QuickScreen test component variables may offer some improvement over the current “None” and “Borderline” bandings (combined), when considering the combination of the ‘Low’ and ‘MidLow’ tree groups (97.6% versus 95.8%); the ‘Low’ and ‘MidLow’ groups correspond with a reading speed of greater than 185.65 wpm. Similarly, the ‘High’ or ‘High’ and ‘MidHigh’ groups for this model appear to offer improved Sensitivity values compared with the current “Strong” or “Strong” and “Moderate” QuickScreen indications (59.4% or 73.9%, versus 3.4% or 40.0%, respectively). These groups correspond with a reading speed of less than 185.65 wpm, a spelling score of less than 76.25% and a general speed of processing minus literacy score of less than 9.25 (for the ‘High’ group), or a reading speed of less than 185.65 wpm and a spelling score of less than 76.25% (for the ‘High’ or ‘MidHigh’ group).

Ultimately, following any updates that might be made to the QuickScreen test indication bandings, additional data would ideally be collected to carry out a further, independent assessment of whether and how the diagnostic performance has improved.

Appendix

Boxplots

Author: Sarah Littler (née Marley)
Reviewed by: Lynsey McColl
Revision Date: 26th November 2018
Prepared for: Pico Educational Systems Ltd
Reference Number: PICO002 v3

Download the 2018 research report here: Dyslexia Report


Previous Research Reports

QuickScreen Dyslexia Test 2016

Download QuickScan Research Document

Initial Analysis
Introduction

QuickScreen is an adult computerised screening tool, developed with the aim of providing a reasonably in-depth assessment of dyslexia.

There have been many models for assessment, from the early medical approach through to phonological skills testing and the social models of dyslexia that place less emphasis on the value of testing. Each of these models has its strengths and weaknesses and can be seen as more or less applicable to the individual being assessed depending upon the aims of the assessment.

The traditional model used by educational psychologists and dyslexia specialists, which aims to establish a discrepancy between literacy acquisition and underlying ability, continues to be required by most educational establishments. Higher education institutions in England and Wales offer support to students with a formal diagnosis of dyslexia, including extra time in examinations and coaching in study skills (Zdzienski, 2001).1

One of the problems with many existing online tests is that they do not provide the detailed underlying skills and literacy levels that would be useful at university level. QuickScreen, which has been specifically aimed at pre-university students through to postgraduate level, also seems to work effectively for adults who have not had the opportunities provided by formal academic training. It does not assume either high or low levels of performance, but it does provide the challenge for individuals to test themselves in a relaxed environment.

Since in most universities a full assessment by an Educational Psychologist is required in order for students to be granted any concessions or support (Singleton, 1999a),2 at present the QuickScreen report can best be used to inform the next stage and to establish the need for study support.

QuickScreen is intended not only to identify dyslexia but also to provide a comprehensive, targeted cognitive profile of learning strengths and weaknesses. It can furthermore be used to plan appropriate support in the learning competencies required, as it tests adult speed of reading, writing, typing, comprehension, listening skills, spelling and punctuation. Six subtests produce a profile of verbal, visual and vocabulary skills, together with the well-established underlying skills relating to dyslexia, which are memory, sequencing and processing.

QuickScreen uses data from the battery of subtests to arrive at three conclusions. The first is to establish whether there are indicators of dyslexia, the second to assess levels of literacy and the third to highlight any difficulties with speed of processing. By detailed cross-referencing of results data, QuickScreen has made it possible to produce a computer-generated report covering all three areas.

Additionally, it provides a comprehensive literacy and attainment profile. This enables tutors to compile a statement of individual needs for study support. They can either use the report to confirm the need for a full dyslexia assessment, or add their comments and use it as relevant background evidence in establishing a case for support.

Early Indications

An essential step in the evaluation process of any diagnostic/screening test is to assess its accuracy via diagnostic accuracy measures. These measures for QuickScreen are based on observational data compiled over a number of years by Pico Educational Systems Ltd.

These data were collected from participants completing the online assessment via three sources: a link offered on the British Dyslexia Association (BDA) website, personally sent links to individual email addresses, and some university trials.

Initial results suggested tentatively that a positive QuickScreen result of Mild or above could potentially be used to make reasonable adjustments without obtaining a full dyslexia assessment from an Educational Psychologist. A dyslexic student would be about 4.4 times as likely as a non-dyslexic student to obtain this result. Alternatively, if a negative QuickScreen result of None was used to advise students against obtaining an assessment from an Educational Psychologist, a dyslexic student would be about 0.4 times as likely as a non-dyslexic student to obtain this result (Initial university trial).3
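The "4.4 times as likely" and "0.4 times as likely" figures are likelihood ratios: the probability of a given test result among dyslexic students divided by the probability of the same result among non-dyslexic students. As a sketch of the calculation (the input proportions below are hypothetical, chosen only to reproduce the quoted values):

```python
# Likelihood ratio sketch: P(result | dyslexic) / P(result | non-dyslexic).
# The proportions below are hypothetical, for illustration only.

def likelihood_ratio(p_result_given_dyslexic, p_result_given_control):
    return p_result_given_dyslexic / p_result_given_control

lr_positive = likelihood_ratio(0.55, 0.125)  # hypothetical inputs -> 4.4
lr_negative = likelihood_ratio(0.10, 0.25)   # hypothetical inputs -> 0.4
```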

Early indications from the trial suggested strongly that QuickScreen does differentiate between dyslexic and non-dyslexic adults. Furthermore, there appears to be a clear association between speed of processing and a dyslexia diagnosis.
Speed of Processing is a measure of the ability to assimilate, process and record written data under prescribed conditions, important factors in the formalities of learning and study, replicating, as it does, many of the skills needed for efficiency in written literacy tasks. Research in recent years has highlighted the link between slow processing speeds and the various elements involved in the word decoding process experienced by people with dyslexia (Breznitz, 2008).4,5

It has now been found that there is strong statistical evidence of an association between the independent dyslexia diagnosis and the QuickScreen test indicator. Where a candidate gets a mild, moderate or strong dyslexia indication, then they are very likely to have been correctly identified with dyslexia. Non-dyslexic control group samples show that there are no candidates getting mild, moderate or strong indicators.

It is anticipated that, in time, this would serve as a potential substitute for a full assessment when arranging study support and applying for financial allowances at university level.

Data

The QuickScreen dyslexia test results were provided in comma separated value (csv) format in a number of separate files. These csv files all had a consistent layout and were combined prior to analysis to create a single dataset.
Test results were available for 245 participants with an independent dyslexia diagnosis; 193 (78.8%) had a positive diagnosis and 52 (21.2%) a negative diagnosis. The QuickScreen test reports the possibility of dyslexia in terms of one of five possible indications: None, Borderline, Mild, Moderate, or Strong. Of the 245 participants included in the analysis, 40 (16.3%) received an indication of None; 71 (29.0%) an indication of Borderline; 65 (26.5%) Mild; 62 (25.3%) Moderate; and 7 (2.9%) Strong (as shown in the cross-tabulation in Table 1).
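The file-combining step can be sketched as follows; real file names and contents are not reproduced, so inlined strings stand in for the separate csv files.

```python
# Sketch of the data preparation step: several csv files with a
# consistent layout combined into a single dataset.

import csv
import io

# Stand-ins for the separate result files (consistent header row).
file_contents = [
    "id,indication\n1,None\n2,Mild\n",
    "id,indication\n3,Moderate\n",
]

combined = []
for content in file_contents:
    reader = csv.DictReader(io.StringIO(content))
    combined.extend(reader)  # header parsed per file, data rows appended

# combined now holds 3 rows with consistent keys ("id", "indication").
```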

Information was also available to indicate where some participants were known university students. One-hundred and eighteen participants (48.2%) were identified as known university students and 127 (51.8%) unknown with regard to their university status. In order to provide greater clarity on how well the QuickScreen test is performing for potentially better compensated dyslexics, the analysis was repeated (i.e., calculation of the diagnostic accuracy measures) splitting the results by this university grouping.

Methods

The sensitivity of a diagnostic test indicates how good it is at finding people with the condition in question. It is the probability that someone who has the condition is identified as such by the test.

The specificity of a diagnostic test, by contrast, indicates how good it is at identifying people who do not have the condition. It is the probability that someone who does not have the condition is identified as such by the test. In this case, the QuickScreen test has five possible outcome indications. Therefore, we can calculate the sensitivity of each category in identifying people with dyslexia (treating each test category as a “test positive”) and also the specificity of each category in identifying people without dyslexia (treating each category as a “test negative”).
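Treating each category in turn as "test positive" or "test negative" can be sketched as follows, using hypothetical counts in place of the study data.

```python
# Sketch of the per-category calculation: each indication band treated
# in turn as "test positive" (sensitivity) or "test negative"
# (specificity). The counts below are hypothetical, not the study data.

from collections import Counter

BANDS = ["None", "Borderline", "Mild", "Moderate", "Strong"]

# Hypothetical counts of participants per band, by diagnosis group.
dyslexic = Counter({"None": 5, "Borderline": 25, "Mild": 40,
                    "Moderate": 45, "Strong": 5})
control = Counter({"None": 30, "Borderline": 15, "Mild": 5})  # absent bands count as 0

n_dyslexic = sum(dyslexic.values())  # 120
n_control = sum(control.values())    # 50

# Sensitivity of a band: P(band | dyslexic); specificity: P(band | control).
sensitivity = {band: dyslexic[band] / n_dyslexic for band in BANDS}
specificity = {band: control[band] / n_control for band in BANDS}
```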

Another important set of accuracy measures are the predictive values of the test. These are also termed the “post-test probabilities” and provide the probability of a positive or negative diagnosis given the test result. The predictive values therefore provide important information on the diagnostic accuracy of the test for a particular participant, answering the question “How likely is it that I have or don’t have dyslexia given the test result that I have received?”

The predictive values depend on the prevalence of the condition in question in the population, i.e., the proportion of individuals who have dyslexia, as well as the sensitivity and specificity of the test. As the sample of data available are a selection of “cases” with a positive dyslexia diagnosis and “controls” with a negative dyslexia diagnosis from observational data, rather than a random sample from the population, the true prevalence is unknown.

Based on previous research studies and the figures quoted by dyslexia organisations, it was agreed that an estimated prevalence of 10% would be used when calculating the predictive values. The observed prevalence in the data available was considerably higher than this (78.8%), indicating an oversampling of dyslexic participants. In screening situations, the prevalence is almost always small and the positive predictive value low, even for a fairly sensitive and specific test.
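This last point can be checked directly with Bayes' theorem: even a hypothetical test with 90% sensitivity and 90% specificity has a positive predictive value of only 50% at 10% prevalence.

```python
# Illustration of the low-prevalence effect described above, using
# hypothetical sensitivity and specificity figures.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(dyslexia | positive result), via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(0.9, 0.9, 0.10)  # a coin flip, despite a "good" test
```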

For each QuickScreen test category, sensitivity, specificity, positive and negative predictive value are therefore estimated. 95% confidence intervals are also provided for each, to capture any uncertainty in the estimates.

The standard estimation of binomial proportions, such as the sensitivity and specificity of a diagnostic test (i.e., taking the observed sample proportion), has been shown to be less than adequate, particularly when the sample size is relatively low. Applying a continuity correction can provide a better estimate and allow more accurate confidence intervals to be developed. Therefore, diagnostic accuracy measure values are calculated using continuity-adjusted estimates and continuity-adjusted logit intervals (for further information and the formulae applied see Mercaldo, Zhou and Lau, 2005).6
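As an illustration of why the adjustment matters, the sketch below uses a standard continuity-adjusted ("empirical") logit interval; the analysis itself follows the exact formulae in Mercaldo et al., which may differ in detail from this form.

```python
# Sketch of a continuity-adjusted proportion estimate with a logit-scale
# confidence interval (a standard adjusted "empirical" logit form, shown
# for illustration only).

import math

def adjusted_logit_interval(successes, n, z=1.96):
    """Continuity-adjusted proportion with a 95% logit-scale interval."""
    a = successes + 0.5       # add 0.5 to each cell so the estimate and
    b = n - successes + 0.5   # interval are defined even at 0% or 100%
    p = a / (a + b)
    se = math.sqrt(1 / a + 1 / b)  # SE of the empirical logit
    lo = math.log(a / b) - z * se
    hi = math.log(a / b) + z * se
    # Back-transform from the logit scale to probabilities.
    inv = lambda t: 1 / (1 + math.exp(-t))
    return p, inv(lo), inv(hi)

# With, say, 7 of 7 participants positive, the raw estimate is 100%,
# but the adjusted estimate and interval stay properly inside (0, 1).
p, lo, hi = adjusted_logit_interval(7, 7)
```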

Alongside these diagnostic accuracy measures, we have carried out a statistical test to assess whether there is evidence of an association between the QuickScreen test outcome and the independent dyslexia diagnosis. This would be expected if the test is useful in discriminating between dyslexic and non-dyslexic individuals. Fisher’s exact test 7 is applied (rather than a large sample test such as the Chi-square test, for example) to account for the fact that we have relatively low sample sizes, which can bias the results in asymptotic tests (as the normal approximation of the multinomial distribution can fail).
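For a 2x2 table, Fisher's exact test can be computed directly from the hypergeometric distribution. The sketch below uses hypothetical counts and a 2x2 collapse of the indication bands (e.g., "None/Borderline" vs "Mild or above"); the analysis itself is applied to the full 2x5 diagnosis-by-indication table.

```python
# Sketch of a two-sided Fisher's exact test on a 2x2 table: sum the
# hypergeometric probabilities of all tables (with the same margins) no
# more probable than the observed one. Counts below are hypothetical.

from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def table_prob(x):
        # Hypergeometric probability of x in the top-left cell,
        # with all margins fixed.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    observed = table_prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(p for x in range(lo, hi + 1)
               if (p := table_prob(x)) <= observed + 1e-12)

p_value = fisher_exact_2x2(30, 10, 8, 40)  # hypothetical counts; small p
```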

Validity

It should be noted when interpreting the results of this analysis that their validity depends on the applicability of the sample participants to the population of interest. This includes the spectrum of severity of dyslexia in the sample. Where this might not reflect the target population, a study is sometimes said to suffer from “spectrum bias”.

The potential for other biases such as classification bias, where misclassification of participants in their independent dyslexia diagnosis may have occurred, should also be considered.

A more formal, prospective cohort study may provide a more reliable assessment of the diagnostic test accuracy, by helping to eliminate potential sources of bias.


Results

The results of the analysis outlined in the Methods section are presented below, first for the full 245 participants and then for the known university student group (n=118) and finally the unknown university status group (n=127).
All Participants

A Fisher’s exact test (on the data in Table 1) finds strong statistical evidence (p-value < 0.0001) of an association between the independent dyslexia diagnosis and the QuickScreen test indication.

Results Table 1

The proportion of participants without dyslexia who received each QuickScreen test result (i.e., sample specificity) and the proportion of participants with dyslexia who received each QuickScreen test result (i.e., sample sensitivity) are shown in Table 2.

Results Table 2

For example, 55.8% of participants without dyslexia received a QuickScreen indication of “None”, and 32.1% of participants with dyslexia received a QuickScreen indication of “Moderate”.

The proportion of participants with and without dyslexia in each QuickScreen test category are shown in Table 3. These are the raw sample predictive values, based on the observed sample prevalence, and do not reflect estimates for the population.

Results Table 3

Results Table 4

For example, 72.5% of those participants with a QuickScreen test result of “None” were non-dyslexics, and 100% of those participants with a QuickScreen test result of “Strong” were dyslexic.

The diagnostic accuracy measures for each QuickScreen test category, estimated using the adjusted method (with adjusted logit confidence intervals) and assuming a 10% prevalence of dyslexia are shown in Table 4.

In addition to considering each category in isolation, the measures for some combinations of the QuickScreen test result are also provided. For example, we estimate that 96.6% (95% Confidence Interval [CI] = 86.9%, 99.2%) of non-dyslexic individuals will receive a QuickScreen indication of “None or Borderline”. An individual receiving a QuickScreen indication of “Mild, Moderate or Strong” is estimated to have a 69.0% (95% CI = 35.7%, 90.0%) probability of a positive dyslexia diagnosis.

University Group

The results of the analysis outlined in the Methods section for the known university student group are presented below.

Test results were available for 118 known university students with an independent dyslexia diagnosis; 77 (65.3%) had a positive diagnosis and 41 (34.7%) a negative diagnosis. Of these 118 participants, 28 (23.7%) received a QuickScreen indication of None; 41 (34.7%) an indication of Borderline; 33 (28.0%) Mild; 15 (12.7%) Moderate; and 1 (0.8%) Strong (as shown in the cross-tabulation in Table 5).

results table 5

A Fisher’s exact test on these data finds strong statistical evidence (p-value <0.0001) of an association between the independent dyslexia diagnosis and the QuickScreen test indication.
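Fisher's exact test treats the cross-tabulation's margins as fixed and computes the exact probability of tables at least as extreme as the one observed. The report applies it to the full 2×5 table (as in R's fisher.test); SciPy's implementation handles 2×2 tables, so the sketch below collapses the indications into None/Borderline versus Mild or above. The cell split is hypothetical, though chosen to be consistent with the row and column totals reported for Table 5:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 collapse of the 2x5 cross-tabulation.
# Rows: without dyslexia, with dyslexia.
# Columns: None/Borderline indication, Mild-or-above indication.
# Row totals (41, 77) and column totals (69, 49) match Table 5's margins.
table = [[35, 6],
         [34, 43]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```

Even with this coarse collapse the association is highly significant, consistent with the p < 0.0001 reported for the full table.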

The proportion of known university students without dyslexia who received each QuickScreen test result (i.e., sample specificity) and the proportion of known university students with dyslexia who received each QuickScreen test result (i.e., sample sensitivity) are shown in Table 6.

results table 6

The proportions of known university students with and without dyslexia in each QuickScreen test category are shown in Table 7. These are the raw sample predictive values, based on the observed sample prevalence, and do not reflect estimates for the population.

results table 7

The diagnostic accuracy measures for each QuickScreen test category for the known university students are shown in Table 8. These are estimated using the adjusted method (with adjusted logit confidence intervals) and assuming a 10% prevalence of dyslexia.

 

results table 8

Unknown University Status Group

The results of the analysis outlined in the Methods section for the unknown university status group are presented below.

Test results were available for 127 participants with unknown university status with an independent dyslexia diagnosis; 116 (91.3%) had a positive diagnosis and 11 (8.7%) a negative diagnosis. Of these 127 participants, 12 (9.4%) received a QuickScreen indication of None; 30 (23.6%) an indication of Borderline; 32 (25.2%) Mild; 47 (37.0%) Moderate; and 6 (4.7%) Strong (as shown in the cross-tabulation in Table 9).

results table 9

A Fisher’s exact test on these data finds strong statistical evidence (p-value <0.0001) of an association between the independent dyslexia diagnosis and the QuickScreen test indication.

The proportion of participants with unknown university status without dyslexia who received each QuickScreen test result (i.e., sample specificity) and the proportion of participants with unknown university status with dyslexia who received each QuickScreen test result (i.e., sample sensitivity) are shown in Table 10.

results table 10

The proportions of participants with unknown university status with and without dyslexia in each QuickScreen test category are shown in Table 11. These are the raw sample predictive values, based on the observed sample prevalence, and do not reflect estimates for the population.

results table 11

Notably, the proportion of participants in the Borderline group with a positive diagnosis is somewhat higher in the unknown university status group compared with the known university student group (83.3% compared with 56.1%).

 

results table 12

The diagnostic accuracy measures for each QuickScreen test category for the participants with unknown university status are shown in Table 12. These are estimated using the adjusted method (with adjusted logit confidence intervals) and assuming a 10% prevalence of dyslexia.

Speed of Processing

Another area of potential further research is to explore how the QuickScreen speed of processing results vary between participants with and without dyslexia.

Table 13 below shows a cross-tabulation of the dyslexia diagnosis versus the speed of processing results available from the QuickScreen data.

results table 13

Of the 52 participants with a negative dyslexia diagnosis, 27 (51.9%), 23 (44.2%) and 2 (3.8%) had No Difficulties, Average and Difficulties speed of processing results, respectively. Of the 192 with a positive diagnosis, the corresponding figures were 12 (6.3%), 103 (53.6%) and 77 (40.1%).

Hence, there appears to be a clear association between speed of processing and dyslexia diagnosis. This supports the case for considering including speed of processing as an explanatory variable in a model for the probability of dyslexia.
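This association can be checked directly from the counts quoted above. The sketch below uses a chi-square test of independence; the report's own analysis used Fisher's exact test, but chi-square is a common alternative for a 2×3 table with these cell sizes:

```python
from scipy.stats import chi2_contingency

# Counts from Table 13: rows = negative / positive dyslexia diagnosis,
# columns = No Difficulties / Average / Difficulties speed of processing.
table = [[27, 23, 2],
         [12, 103, 77]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```

The test statistic is very large for 2 degrees of freedom, so the association between speed of processing and dyslexia diagnosis is highly significant.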

Potential Further Work

The analysis presented in this report provides an initial assessment of the diagnostic accuracy of the QuickScreen dyslexia test. Further work could potentially be undertaken to expand on this initial analysis and to develop the test further.

In addition to the overall QuickScreen test indications, individual scores are available for various processes such as visual, verbal, memory, reading and comprehension. Using the individual test scores and additional participant demographics, we could potentially build a model to predict the probability of dyslexia. This model could then be used to adjust the current QuickScreen indication category boundaries to optimise the resulting diagnostic accuracy measures. Such a study may be particularly useful in helping to distinguish between individuals currently in the Borderline group by accounting for participants’ university status.
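A model of this kind would typically be a logistic regression of diagnosis on the subtest scores and demographics, with the fitted probabilities then thresholded to define new category boundaries. The sketch below runs on synthetic data; the feature names, sample size and values are all hypothetical stand-ins, not QuickScreen data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors standing in for subtest scores and demographics.
X = np.column_stack([
    rng.normal(size=n),           # e.g. a memory subtest score
    rng.normal(size=n),           # e.g. a processing-speed subtest score
    rng.integers(0, 2, size=n),   # e.g. university status (0/1)
])
# Synthetic labels loosely driven by the first two features.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
probs = model.predict_proba(X)[:, 1]  # fitted probability of dyslexia

# Indication boundaries could then be set as cut-points on these
# probabilities to trade off sensitivity against specificity.
print(f"training accuracy: {model.score(X, y):.2f}")
```

In practice the cut-points would be chosen on held-out data, and university status could enter either as a predictor (as here) or by fitting separate models per group.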

 

References

1. Hunter-Carsch, M. & Herrington, M. (Eds.) (2001) Dyslexia and Effective Learning.

2. Singleton, C. (1999) Dyslexia in Higher Education: Policy, Provision and Practice. University of Hull.

3. Cardiff University initial trial indications, Oct. 2016. Sarah Howey (Cardiff University), Todd M. Bailey (Cardiff University), and ORS TBD.

4. The Sage Dyslexia Handbook. Sage Publications. See more at: http://www.drgavinreid.com/free-resources/dyslexia-an-overview-of-recentresearch/#sthash.dSLyXzkw.dpuf

5. Breznitz, Z. (2008) The Origin of Dyslexia: The Asynchrony Phenomenon. In Reid, G., Fawcett, A., Manis, F. & Siegel, L. (Eds.) (2008).

6. Mercaldo, N. D., Zhou, X.-H. & Lau, K. F. (December 2005) Confidence Intervals for Predictive Values Using Data from a Case Control Study. UW Biostatistics Working Paper Series, Working Paper 271. http://biostats.bepress.com/uwbiostat/paper271

7. Fisher’s exact test: https://en.wikipedia.org/wiki/Fisher’s_exact_test

Statistical Report Author: Sarah Marley, Select Statistical Services, Oxygen House, Grenadier Road, Exeter Business Park, Exeter, Devon, EX1 3LH

Produced by
Pico Educational Systems Ltd
17 Wellington Square
East Sussex
TN34 1PB


Pico Educational Systems Ltd is a supporting Corporate Member of the British Dyslexia Association


QuickScreen – Processing and Speed of Processing

In the course of assessment activities over the past ten years with students of all ability levels at university, it repeatedly came to light that there was a strong correlation between dyslexia, difficulties with aspects of literacy and poor processing speed.

In fact, the distinguishing marker between those dyslexics who could cope quite well and those who needed more substantial support was undue slowness in processing.

This clearly had an adverse impact on a whole range of study activities including completing written assignments, coping with reading lists, taking intelligible lecture notes and coping with written examinations to name but a few.

Individual aspects such as spelling and literacy did not appear to be the main defining limitations to their studies. Most of the practical aspects could be compensated for by the use of technology by, for example, using a spellchecker, or by employing strategies such as the use of “hooks” to improve the effectiveness of memory.

Given the complexities of a dyslexic adult’s learning difficulties, where many areas of weakness will have been compensated to varying degrees, the one element that consistently appears to limit performance is slow processing. If the time you need to assimilate information is too great, then inevitably there must be an impact on study.

Where the definition of dyslexia itself is so disputed and open to question, the one constant is that dyslexics have difficulty processing information at speed or when under pressure.

Sequencing is also a good indicator of processing speed, and it is the one test that cannot be completed any faster than one’s maximum capacity allows.

There have been many models for assessment, from the early medical approach through to phonological skills testing and the social models of dyslexia that place less emphasis on the value of testing. The traditional model used by educational psychologists and dyslexia specialists, which aims to establish a discrepancy between literacy acquisition and underlying ability, continues to be required by most educational establishments and forms the basis of the QuickScreen test.

Each of these models has its strengths and weaknesses and can be seen as more or less applicable to the individual being assessed depending on what the aims of the assessment may be.

One of the problems with many existing tests is that they fail to discriminate at university level, and a further aim in producing QuickScreen was to create a program that would work as effectively for a person with few academic qualifications from the general adult population as for a person at postgraduate level.

Where many phonological tests have been successful with younger people, a better indicator of phonological performance at adult and university level has proved to be a reading and dictation exercise, which measures the ability to assimilate, process and record written data under timed conditions. These are important and relevant factors in the formalities of learning and study, replicating, as they do, many of the skills needed for efficiency in written literacy.

Much of the recent research in this area has highlighted the importance of processing language on a number of different levels.

There has been much debate over the years as to whether assessment should simply seek to “label” the condition and, in doing so, help the individual to account for their perhaps disappointing academic performance and accept a different approach to what had previously been seen in a purely negative light.

The aim might also be to help an individual integrate better socially and in their workplace, or to improve academic performance by providing access to support and funding for necessary adjustments and technology to allow them to achieve their full potential.

In producing QuickScreen it was essential to establish a clear view of what you are aiming to achieve, and, perhaps, one of the longest standing aims has been to reduce the time and effort taken to complete an assessment, while having it provide detailed information on the individual’s abilities and performance.

QuickScreen uses data from a battery of subtests to arrive at three conclusions: first, whether there are indicators of dyslexia; second, the candidate’s levels of literacy; and third, any difficulties with speed of processing. By detailed cross-referencing of results data, QuickScreen produces a computer-generated report covering all three areas.

Six subtests produce a profile of verbal, visual and vocabulary skills, together with the well-established underlying skills relating to dyslexia: memory, sequencing and processing.

Development of the program is now complete, as is the collation of the first results from trials carried out at university and through the BDA website, which provided a sample of university students, together with a sample from the general population, for comparison with adults who are not dyslexic.

Early indications from the trial strongly suggest that QuickScreen does differentiate between dyslexics and non-dyslexics.

Additionally, it provides a comprehensive literacy and attainment profile. This enables tutors to compile a statement of individual needs for study support. They can either use the report to confirm the need for a full dyslexia assessment or add their comments and use it as relevant background evidence in establishing a case for support.

Tutors can carry out remote testing and elect to withhold reports or send them to the student after adding their own comments.

The background colour bands reflect the “traffic light” markings used throughout the report to indicate levels of performance and highlight areas of concern.

The data used in the production of these indicative graphs is sorted in ascending order to provide a clearer picture of the differentiation between groups. The green line represents the non-dyslexic control data, the blue line represents dyslexic students, and the red line dyslexics drawn from the general adult population.

This preliminary data is drawn from a total of 105 individual participants split into three groups of 35; 70 had been previously diagnosed as dyslexic, alongside a non-dyslexic control group of 35.

Note that in all cases, as might be expected, there is a noticeable gap between the performance of dyslexics and non-dyslexics, which indicates that the program is capable of discriminating effectively to identify those with dyslexia in both the student and general adult populations.

Dyslexia Quotient Graph – Students

The following graph compares results for the dyslexia component of the diagnosis provided by the program for non-dyslexics with results for dyslexics from universities.

results table 19

Dyslexia Quotient Graph – Adults from the general population

The following graph compares results for the dyslexia component of the diagnosis provided by the program for non-dyslexics with results for dyslexic adults from the general population.

results table 20

Processing Graph – University Students

The following graph compares results for the processing component of the diagnosis provided by the program for non-dyslexics with results for dyslexics from universities.

results table 21

Processing Graph – General Adult Population

The following graph compares results for the processing component of the diagnosis provided by the program for non-dyslexics with results for dyslexics from the general adult population.

results table 22

Processing Speed Graph – University Students

The following graph compares results for the overall processing speed component of the diagnosis provided by the program for non-dyslexics with results for dyslexics from universities.

results table 23

Processing Speed Graph – General Adult Population

The following graph compares results for the overall processing speed component of the diagnosis provided by the program for non-dyslexics with results for dyslexics from the general adult population.

results table 24

Compiled by Pico Educational Systems Ltd in June 2016

Further research links

The background research for the original program gives quite a bit of explanation of the rationale behind the questions in QuickScan. It is available in the original PhD thesis by Dr Dorota Zdzienski, now accessible online through the University of Leicester, entitled:

Dyslexia in Higher Education: An exploratory study of learning support, screening and diagnostic assessment

https://lra.le.ac.uk/handle/2381/9806

https://lra.le.ac.uk/bitstream/2381/9806/1/1998zdzienskidphd.pdf

There is also a book edited by Morag Hunter-Carsch, who supervised the above study, entitled ‘Dyslexia & Effective Learning in Secondary & Tertiary Education’, which contains a chapter about the program. It is still available; below is a link to Amazon’s listing for it:

http://www.amazon.co.uk/Dyslexia-Effective-Learning-Secondary-Education/dp/1861560168

With regard to QuickScan, users in various research projects carried out over the years have quoted 95% accuracy, and this figure was also noted in Gavin Reid’s research paper on his study of young offenders, which makes a similar claim in a recognised research forum.

An Examination of the Relationship between
Dyslexia and Offending in Young People and
the Implications for the Training System

Jane Kirk and Gavin Reid
University of Edinburgh, UK

A screening study was undertaken which involved 50 young offenders, serving sentences of various lengths, all from the largest young offenders’ institution in Scotland. All 50 were screened for dyslexia and a number received a more detailed follow-up assessment. The results of the screening showed that 25 of the young offenders (50%) were dyslexic to some degree. This finding has implications for professionals, particularly in respect of follow-up assessment and support, and for politicians in relation to issues such as school experience, prison education and staff training. These issues are discussed here in relation to the background and results of the study.

INTRODUCTION

Although nearly a quarter of a century has passed since Critchley and Critchley (1978) highlighted the issue of dyslexia and crime, it is only very recently that some attempts have been made to identify the real extent of the problem. Today the relationship between dyslexia and anti-social or criminal behaviour is arguably one of the most controversial in the field of dyslexia. Some studies (see below) which have attracted significant media attention have claimed to detect a significantly higher incidence of dyslexia amongst those in custody compared to the general population. If this claim is valid, it is remarkable and worrying since it might be interpreted as meaning that there is a causal connection between dyslexia and social deviance. Since it is now acknowledged that dyslexia is, in some cases, partially influenced by heredity, it would be extremely serious if, in unfavourable environments, it predisposed people to criminal or anti-social behaviour.

In the STOP project (Davies and Byatt, 1998) there was an investigation in some depth of the possibilities of screening, assessment and training in relation to dyslexia and crime. Their study revealed that 31% (160 out of 517) had positive indicators of dyslexia. Similarly, the Dyspel project (Klein, 1998) designed a screening tool for dyslexia in the form of a questionnaire and also used other established screening tests such as the Bangor Dyslexia Test (Miles, 1997). It was found that 38% of the custodial sample showed indicators of dyslexia. In addition, a study by Morgan (1996) using the Dyspel procedures found 52% of those screened had strong indicators of dyslexia. All three of these UK studies are consistent with other studies in Sweden (Alm and Andersson 1995) and the United States (Haigler et al., 1994), but have still generated some criticism. Rice (1998) suggests that there is no support for the claim that dyslexia is more prevalent among prisoners than among the general population and asserts that the prison studies which argue to the contrary are fundamentally flawed in terms of sample bias, inappropriate screening methods, and lack of clarity regarding the concept of dyslexia.

At first glance, dyslexia may seem to induce anti-social behaviour. The able school pupil, whose dyslexic condition is not diagnosed, or, having been diagnosed, receives insufficient or inappropriate support, might very well begin to feel devalued at school and turn to forms of deviant behaviour as a way of responding to the sense of low self-esteem induced by school and as a way of achieving recognition by peers. A study carried out at the University of Sunderland (Riddick et al., 1999) found that there was a significant difference in the perceived self-esteem within two groups of students in higher education. The first group, consisting of students with dyslexia, all demonstrated low self-esteem and comparatively high levels of anxiety. The control group, in contrast, were consistently more positive about their academic abilities (cf. also Reid and Kirk, 2000). If the difference is marked at this level of education, where the students with dyslexia have achieved a degree of success in gaining entry to higher education, how much more marked would the difference be if it were measured in a young offenders’ institution? Low self-esteem may lead to a pattern of anti-social or maladjusted behaviour, which could lead to more serious forms of deviant behaviour and ultimately to imprisonment. In that case dyslexia may be related, albeit indirectly, to offending behaviour.

The purpose of this study is two-fold: to conduct an investigation to identify the potential numbers in a young offenders’ institution who might display positive dyslexia indicators in a screening test and to examine the implications of the results for the training of relevant staff in the prison education system.

CHOICE OF MEASURE

It was decided to use QuickScan, a computerized self-assessment screening test for dyslexia in which the subjects are required to reply ‘yes’ or ‘no’ to the questions asked (Zdzienski, 1997). This test had been piloted with 2000 students across many subject areas from the universities of Kingston and Surrey. However, some of the vocabulary used to screen students in the south of England was judged to be inappropriate for young offenders in central Scotland. In preparation for the work to be carried out in the young offenders’ institution, the vocabulary in the questions was amended: changes were made and carefully checked so as to ensure that the sense of the question remained unaltered. One example of the linguistic difference is that the word task has different connotations in England and was replaced by the more familiar word job. All the changes were approved by the author of QuickScan.

An additional reason for selecting a computerized test was that we judged that the young offenders might respond more positively to this method of testing than to paper and pencil tests, with which they may have had negative experiences at school. The QuickScan Screening Test was thought to be non-threatening in that the questions do not focus on basic language, but rather on the processing of information. Examples of some of the questions are given below.

It was recognized that QuickScan could not offer an exact diagnosis of dyslexia. However, given the time restrictions imposed by the prison management and by the fact that the project was being televised, it was considered the most effective available tool for a study whose purpose was to find out how many young men in a sample of 50 manifested indicators of dyslexia.

The QuickScan screening test reports on 24 different performance categories, eight of which have been selected by the present authors as being particularly informative. The labels used in QuickScan to summarize the results of the different tests are open to question (e.g. ‘sequencing problems’, ‘laterality problems’), but it is the questions themselves, not the theory allegedly attached to the answers given, which is important. These questions are based on many years’ work with dyslexic adults, and this gives them a face validity that would be hard to dispute. For illustration purposes, we present one sample question from each category.

(sequencing) When making phone calls do you sometimes forget or confuse the numbers?
(memory) Do you often find it difficult to learn facts?
(family history) Do you know of anyone in your family who has dyslexia?
(general language) Is it usually easy for you to find the key points in an article or a piece of text?
(self-esteem) Are you usually a fairly confident person?
(concentration difficulties) Do you usually find it difficult to concentrate?
(organizational difficulties) Do you have difficulties organizing your ideas into an essay or report format?
(laterality difficulties) Do you sometimes confuse left and right?

CHOICE OF SAMPLE

The choice of the size of the sample group was largely determined by the prison management. They stipulated the amount of time they felt was sufficient to allow the screening to take place without completely disrupting the training and discipline within the institution. Given this time constraint, it was decided that it would be possible to have nine sessions of 30 min. The numbers taking the test were limited by the prison procedures: only six young people were allowed to take the test at any one time. This stipulation determined that our sample could at most have been 54 (nine sessions with six individuals present at each) and was in fact 50. Half an hour allowed time for group discussion about matters connected with anonymity, their exclusive entitlement to the results, and their right to stop participating at any time during the screening. A brief description was given of the test and of what it would measure. At this point, the young offenders were given the choice of whether or not to proceed: none of them refused to continue. Those taking part came from all sections of the prison: some were short-term prisoners while others were being detained for more serious crimes. Although the time-scale did not allow for individual interviews, informal discussion with the prisoners revealed histories of school-refusal, exclusions for disciplinary matters and, in many cases, a bitter dislike of school education.

Although the study was primarily a screening one, it was decided to select at random six of the young men who had demonstrated indicators of dyslexia for further testing. The aim of these full assessments, carried out by a chartered educational psychologist, was to determine whether the results of the screening tests correlated with the results from the full assessment. The tests used for this stage were the WAIS-R (Wechsler, 1981) and the WRAT-3 (Wilkinson, 1993).

RESULTS

Table 1 shows the score for each subject expressed as a percentage of the ‘dyslexia positive’ items in a given category, together with an overall figure (degree of dyslexia) and a classification in terms of ‘MM’ (most indicators), ‘M’ (many indicators), ‘S’ (some indicators), ‘BL’ (borderline) and ‘no indicators’ (symbolized by ‘0’). The final two columns report on the level of indicators of dyslexia as calculated by the programme, QuickScan. The programme makes its calculations in a somewhat complex way, with the result that it is possible for a person with a high score to have fewer indicators. The results recorded in the final two columns are not deducible from the eight columns that precede them in the present table.

The results may be summarized as follows:

Three of the subjects displayed most indicators
Three displayed many indicators
Seventeen displayed some indicators
Two displayed borderline indicators

Table 1: Percentage results of eight categories of the
QuickScan Screening Test together with final analysis

table

table

table

 

 

This gives a total of 25 young offenders out of 50 (50%) who showed at least borderline indicators of dyslexia.

It is, of course, no surprise that there are problems over the exact boundary between those who are and are not dyslexic, and for this reason two cases (nos. 29 and 30) have been entered as ‘0?’ The entry ‘skills’ opposite nos. 27, 28, 31, 32 and 33 indicates poor literacy skills without other ‘classic’ signs of dyslexia.

It can be seen from Table 1 that the change in response levels is identified from about row 25 to row 33. The results from the other 16 categories, not included in the table, demonstrate a similar pattern.

Detailed statistical analysis of the data was not considered appropriate, but simple inspection suggests that if we draw a boundary after case no. 25, then on all items the scores of subjects 1-25 are higher than those of subjects 26-50. This is particularly noticeable in the case of the ‘sequencing’ and ‘memory’ items, which are widely agreed to be indicators of dyslexia. Although there were only two questions relating to family history, the difference between the two groups is clear: four out of 25 from nos. 26 to 50 reported that they were aware of some history of dyslexia in their families compared with 19 out of 25 among nos. 1-25. Self-esteem was low in all that were found to have indicators of dyslexia.

Three of the 50 had been tested previously and found to be dyslexic. In each of these cases, QuickScan showed strong indicators of dyslexia.

In the follow-up diagnostic assessment, all six young offenders who were selected for full assessment revealed discrepant scores in processing speed and short term memory compared to verbal comprehension and verbal expression.

The findings of the present study are in broad agreement with those of the two larger projects, the STOP project (Davies and Byatt, 1998), where 31% of a near-random sample of probationers were found to be dyslexic, and the Dyspel project (Klein, 1998), where the figure was 38%. It is possible, however, that the higher percentage in the present study can in part be accounted for by the fact that the subjects were volunteers – since arguably dyslexics would be more likely than other prisoners to select themselves. However that may be, the percentage of dyslexics in all three studies is massively higher than even the highest estimates of dyslexia (say 10%) in the general population.

IMPLICATIONS

This study has three main implications. First, there is a need for a much more decisive intervention in the early stages of education to identify and support those with dyslexia. If the condition goes unrecognized the result is likely to be a low sense of self-worth, which in turn predisposes young people to offend. We suggest that the community has an obligation to mobilize resources and expertise so as to prevent that drift towards criminal behaviour, or at least seek to make it less inevitable. That much is owed to the young people themselves, not to mention the financial saving to the community if dyslexia is recognized and treated.

Secondly, the study suggests that there is a need to make appropriate provision to support young people with dyslexia when they are in custodial care. Even if the incidence of dyslexia amongst offenders is considerably less than the present study suggests, there would be a need to arrange for offenders to be screened for dyslexia and for proper support to be prescribed. In addition, when they return into the community they need help in making the necessary adjustments and in learning to acquire ways of responding to the many pressures to which they may be exposed.

Thirdly, there is a need for more detailed work on the most appropriate way of screening for dyslexia. The present study confirms earlier studies. However, it runs counter to the claims of Rice (1998), and this suggests there is a need to refine the ways in which we screen and attempt to diagnose dyslexia (cf. also Sanderson, 2000). Moreover, we need to devise a measure or measures that have the support of the whole of the research community.

It is encouraging that the Scottish Dyslexia Trust has agreed to fund a study which will seek to identify a suitable assessment measure from tools currently available and to quantify the extent of dyslexia in the prison population. This study will hopefully benefit both the prison authorities and the academic community.
References:

Alm, J. and Andersson, J. (1995) Reading and Writing Difficulties in Prisons in the County of Uppsala. The Dyslexia project, National Labour Market Board of Sweden at the Employability Institute of Uppsala.

Critchley, M. and Critchley, E.A. (1978) Dyslexia Defined. Heinemann: London.

Davies, K. and Byatt, J. (1998) Something Can Be Done! Shropshire STOP Project: Shrewsbury.

Haigler, K.O., Harlow, C., O’Connor, O. and Campbell, A. (1994) Literacy Behind Prison Walls: Profiles of the Prison Population from the National Adult Literacy Survey. U.S. Department of Education: Washington, DC.

Klein, C. (1998) Dyslexia and Offending. Dyspel: London.

Miles, T.R. (1997) The Bangor Dyslexia Test. Learning Development Aids: Cambridge.

Morgan, W. (1996) London Offender Study: Creating Criminals – Why Are So Many Criminals Dyslexic? University of London: unpublished dissertation.

Rice, M. (1998) Dyslexia and Crime: Some Notes on the Dyspel Claim. Institute of Criminology, University of Cambridge: unpublished.

Riddick, B., Sterling, C., Farmer, M. and Morgan, S. (1999) Self-esteem and anxiety in the educational histories of adult dyslexic students. Dyslexia: An International Journal of Research and Practice, 5(4), 227-248.

Reid, G. and Kirk, J. (2000) Dyslexia in Adults: Education and Employment. Wiley: Chichester.

Sanderson, A. (2000) Reflections on StudyScan. Dyslexia: An International Journal of Research and Practice, 6(4), 284-290.

Wechsler, D. (1981) Wechsler Adult Intelligence Scale-Revised (WAIS-R). Psychological Corporation: New York.

Wilkinson, G.S. (1993) Wide Range Achievement Test (WRAT-3). Delaware: Wide Range Inc.

Zdzienski, D. (1997) QuickScan. Interactive Services Limited: Dublin.

Copyright 2001 John Wiley & Sons Ltd.
Originally Published in Dyslexia Journal 2001
Correspondence to: Jane Kirk, Disability Office, University of Edinburgh, 3 South College Street, Edinburgh EH8 9AA, UK. E-mail: jane.kirk@ed.ac.uk

Download QuickScan Research Document
