Just a brief note first: My work is offered free, and I’m unemployed. If you find it valuable, please consider dropping me a little donation at paypal.me/Hume – much obliged! Also note that this is a draft – the analysis won’t change, but the text probably will!
In the UK private housing rental market, advertisements of properties to let often contain the phrase ‘No DSS’, or similar phrases (eg http://tinyurl.com/jcwpf24). This refers to the now-defunct Department of Social Security and, when used in this context, indicates that the landlord will not consider renting the property to anyone in receipt of social security benefits (such as Housing Benefit or Employment & Support Allowance).
This is an issue of contention, particularly in the current economic and sociopolitical climate. Homelessness is rising (Cooper, 2016) while the number of Housing Benefit claims has increased by ~570,000 since 2008 (Department for Work and Pensions, 2016). Property owners claim they need to refuse benefit claimants to maintain financial stability (Landlord Blog) or because such tenants are perceived as a risk to the property (Lawrenson, 2012). Conversely, some argue that ‘No DSS’ clauses are discriminatory and thus fall foul of UK equality legislation (eg Void, 2012).
Under the Equality Act 2010, indirect discrimination on the grounds of disability is illegal. Indirect discrimination occurs when a rule or policy is enforced uniformly but has a negative effect on particular groups, putting them at a particular disadvantage (Citizens Advice, 2015).
As there has been little to no academic or legal work on this topic and all the existing evidence is anecdotal, the purpose of this study is to determine whether disabled people face greater difficulty than non-disabled people in the private rental market due to ‘No DSS’ clauses, and whether this causes harm. It can then be determined whether such policies are indeed discriminatory.
To do this, it must first be established that No DSS policies truly affect benefit claimants, and that No DSS clauses are harmful. Second, it must be established whether disabled people face this difficulty and harm more than non-disabled people.
417 participants completed an online questionnaire. The survey was advertised on social networks (Twitter, Facebook) in multiple ways to encourage responses from both disabled and non-disabled people, and respondents were encouraged to share it onwards; participants were thus recruited by snowball and convenience sampling. Little demographic data was collected, to encourage more responses. Counts of responses are shown below in Table 1, in the Results section. 57.60% of respondents claimed no social security benefits and 27.30% identified as disabled. No differentiation was made between different types of benefit (eg child benefit vs housing benefit).
The full questionnaire is available in the Supplementary Materials. It contains one multiple choice question to determine which benefits the respondent claims (if any), and a series of mutually exclusive yes-no questions on disability, difficulty due to No DSS clauses, and harm from inappropriate accommodation.
The frequencies of the answers of the variables of interest are shown in Table 1. From this point, ‘Benefits’ refers to whether the participant claims any benefits, ‘Disabled’ refers to whether the participant identifies as disabled, ‘Difficulty’ refers to whether the participant experienced difficulty finding accommodation due to No DSS policies and ‘Harm’ refers to whether the participant or their household experienced harm from living in unsuitable accommodation when landlords would not consider renting to them due to their circumstances.
Table 1. Response frequencies with percentages.
Do No DSS rules have an effect?
The first step in establishing whether No DSS rules are discriminatory is to establish that they have a real effect on their target group – if this is the case, ongoing benefit claims will be associated with difficulty finding accommodation. This is supported by the data (Table 2; χ2 (1) = 150.76, p < .001, Φ = .60, OR = 19.36, 95% CI 11.37–32.96).
Second, it is necessary to establish that No DSS rules are harmful. If this is the case, the proportion of participants reporting harm should be related to the proportion affected by No DSS rules. This too was supported by the data (Table 2; χ2 (1) = 135.91, p < .001, Φ = .57, OR = 16.39, 95% CI 9.70–27.70).
Table 2. 2×2 Contingency tables for Difficulty/Benefit and Difficulty/Harm
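The χ2, Φ, and odds-ratio statistics reported above can all be derived directly from the four cells of a 2×2 contingency table. A minimal sketch in Python, using purely hypothetical cell counts rather than the study’s data:

```python
import math

def analyse_2x2(a, b, c, d):
    """Chi-square, phi, and odds ratio (with 95% CI) for a 2x2 table.

    Cells: a = yes/yes, b = yes/no, c = no/yes, d = no/no.
    """
    n = a + b + c + d
    # Chi-square for a 2x2 table (shortcut formula, no continuity correction)
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    phi = math.sqrt(chi2 / n)                      # strength of association
    odds_ratio = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of the log odds ratio
    lower = math.exp(math.log(odds_ratio) - 1.96 * se)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se)
    return chi2, phi, odds_ratio, (lower, upper)

# Hypothetical counts, for illustration only
chi2, phi, odds_ratio, ci = analyse_2x2(50, 10, 10, 50)
print(f"chi2 = {chi2:.2f}, phi = {phi:.2f}, OR = {odds_ratio:.2f}, "
      f"95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

The confidence interval here uses the standard log-odds approximation; a dedicated statistics library would give the same figures.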
Do No DSS rules disproportionately affect disabled people?
It has been established that No DSS clauses do affect benefit claimants, and that difficulty finding accommodation due to them is associated with harm. To answer the question of whether they are discriminatory, it is now necessary to determine whether they affect disabled people at a higher rate than non-disabled people.
To do this, we must first establish that the relationship between benefit claims and both difficulty and harm differs for disabled people compared to non-disabled people. I therefore used the observed frequencies and proportions of the 2×2 contingency tables for Benefit/Difficulty and Benefit/Harm for non-disabled participants (N = 303, Table 3) as the expected frequencies for χ2 analyses of the corresponding contingency tables for disabled participants.
Table 3. 2×2 contingency tables of Benefit/Difficulty and Benefit/Harm for non-disabled participants.
The observed and expected frequencies for the two contingency tables (Benefit/Difficulty and Benefit/Harm) for disabled participants are shown in Table 4.
Table 4. Observed and expected frequencies for Benefit/Difficulty and Benefit/Harm contingency tables for disabled participants.
Both the distribution of counts in the Benefit/Difficulty contingency table (χ2 (3) = 268.49, p <.001) and Benefit/Harm (χ2 (3) = 250.44, p <.001) for disabled participants significantly differed from the distribution for non-disabled participants. Therefore, we can conclude that the relationship between the variables is different for disabled and non-disabled participants. Upon inspection of the standardized residuals (equivalent to z scores; Table 5), the key difference is that disabled people are over-represented in both Yes/Yes cells beyond their over-representation in the benefit claimant cells.
Table 5. Standardised residuals of distribution of counts in Benefit/Difficulty and Benefit/Harm contingency table for disabled participants compared to non-disabled participants (*p <.001).
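The comparison above – taking the non-disabled group’s cell proportions as expected frequencies and inspecting standardised residuals – can be sketched as follows. The counts and proportions below are hypothetical, not the study’s:

```python
import math

def compare_to_expected(observed, expected_props):
    """Chi-square and standardised residuals for observed cell counts,
    with expected cell proportions taken from a reference group."""
    n = sum(observed)
    expected = [p * n for p in expected_props]
    # Standardised residual per cell: (O - E) / sqrt(E), read like a z score
    residuals = [(o - e) / math.sqrt(e) for o, e in zip(observed, expected)]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2, residuals

# Hypothetical: one group's four cell counts vs equal expected proportions
chi2, residuals = compare_to_expected([40, 20, 30, 10], [0.25, 0.25, 0.25, 0.25])
print(chi2)       # 20.0
print(residuals)  # [3.0, -1.0, 1.0, -3.0]
```

Cells with |residual| above about 1.96 are over- or under-represented at the .05 level, which is how the ‘over-represented Yes/Yes cells’ above are identified.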
How disadvantaged are disabled people?
It has been established that No DSS clauses do affect benefit claimants, and that they are associated with harm. Further, disabled participants are affected differently from non-disabled participants, with a significantly higher proportion experiencing difficulty and harm. It is now necessary to establish just how much more disadvantage disabled people face. First, we need to establish whether disabled participants faced more difficulty and harm than non-disabled participants across the sample as a whole.
Table 6. Contingency tables of Disability/Difficulty and Disability/Harm
Disability was significantly associated with difficulty (χ2 (1) = 104.93, p < .001, Φ = .50) and harm (χ2 (1) = 76.00, p < .001, Φ = .43). Disabled participants were 11 times more likely (OR = 11.07, 95% CI 6.71–18.28) to experience difficulty than non-disabled participants, and 7 times more likely (OR = 7.48, 95% CI 4.62–12.11) to experience harm.
Finally, it is necessary to show that disabled people face difficulty and harm beyond that of non-disabled benefit claimants and, to this end, a final two contingency tables were constructed consisting of only benefit claimants (Table 7; Disability/Difficulty, Disability/Harm).
Table 7. Contingency tables of Disability/Difficulty and Disability/Harm for benefit claimants only.
Disability was significantly associated with difficulty (χ2 (1) = 22.55, p < .001, Φ = .357) and harm (χ2 (1) = 13.71, p < .001, Φ = .278). Disabled benefit claimants were 5 times (OR = 5.07, 95% CI 2.52–10.17) more likely to experience difficulty and 3 times (OR = 3.15, 95% CI 1.70–5.82) more likely to experience harm than non-disabled benefit claimants.
The power for the weakest test (Table 7) is 0.96, the strongest (Table 2) has power approaching 1. Using a conservative estimate of the a priori probability of the observed effects being real (0.10), and using the .001 significance level, the potential false discovery rate for the data presented is .009 or less (Colquhoun, 2014).
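The false discovery rate figure can be reproduced from Colquhoun’s (2014) reasoning: of many hypothetical studies, a fraction (the prior) test a real effect, which is detected at the rate given by power, while the rest test a null effect, which is falsely flagged at the significance level. A short sketch:

```python
def false_discovery_rate(prior, alpha, power):
    """Expected proportion of 'significant' results that are false positives,
    following the logic in Colquhoun (2014)."""
    false_positives = (1 - prior) * alpha   # null effects wrongly flagged
    true_positives = prior * power          # real effects correctly detected
    return false_positives / (false_positives + true_positives)

# Conservative assumptions used in the text: 10% prior, alpha = .001, power = 0.96
fdr = false_discovery_rate(prior=0.10, alpha=0.001, power=0.96)
print(round(fdr, 3))  # 0.009
```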
First, we can conclude that No DSS clauses are harmful to all benefit claimants, disabled and non-disabled alike. However, they disproportionately affect disabled people. Among both benefit claimants and non-claimants, disabled people are 11 times more likely to face difficulty and over 7 times more likely to experience harm due to not being able to find accommodation.
This could potentially be explained by the higher prevalence of disabled people among benefit claimants, but this was not supported by the data. Among benefit claimants only, disabled people are still over 5 times more likely to experience difficulty, and 3 times more likely to experience harm, than non-disabled benefit claimants. Disabled benefit claimants therefore face discrimination in housing beyond that experienced by non-disabled benefit claimants.
There are potential flaws in the present study which may limit its generalisability. For example, information was only collected online, so data could not be gathered from people without an internet connection. Further, to keep the questionnaire as short and simple as possible, it was not possible to ask about specific types of harm or their consequences. This limited the depth of the analysis and presents an avenue for future research.
Concerns were raised about the validity of self-report data in this context; it was suggested that participants might try to bias the results. I consider this outcome unlikely. Given the range of participants (eg benefit claimants and non-claimants), any conscious biases would likely cancel out. The questionnaire also contained multiple questions which were not analysed, and the goals and hypotheses of the study were not announced to participants, limiting the deliberate bias that could be introduced. Thus, bias seems unlikely, albeit possible and worthy of consideration.
Ultimately, however, there is “no strong evidence to lead us to conclude that self-report data are inherently flawed” (Chan, 2009, p. 330), and self-report was the only practical method of collecting these data. There is no other way, for example, that we could determine whether participants had experienced subjective harm. Future research could take a prospective approach and measure how long it takes disabled and non-disabled benefit claimants to find accommodation, but such studies may not be practical.
Assuming the veracity of the questionnaire responses, the study’s statistical properties are promising. The study was very highly powered, used a stringent significance threshold and reports moderate to large effect sizes. A conservative estimate of the False Discovery Rate (Colquhoun, 2014) is .009 (0.9%). Thus, it is likely that the observed relationships will be replicable.
Indirect discrimination, as defined by the Equality Act 2010, occurs when (for example) one rule is universally enforced but causes difficulties for members of protected groups, including disabled people. Having established the increased harm and difficulty these rules cause disabled people, there is a strong possibility that No DSS clauses are, in fact, illegal.
Chan, D. (2009). So why ask me? Are self-report data really that bad? In C. E. Lance, & R. J. Vandenberg (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences (p. 309). London: Routledge.
Citizens Advice. (2015). Indirect discrimination. Retrieved 03/04, 2016, from https://www.citizensadvice.org.uk/discrimination/what-are-the-different-types-of-discrimination/indirect-discrimination/
Colquhoun, D. (2014). On the hazards of significance testing. Retrieved 03/04, 2016, from http://www.dcscience.net/2014/03/24/on-the-hazards-of-significance-testing-part-2-the-false-discovery-rate-or-how-not-to-make-a-fool-of-yourself-with-p-values/
Cooper, C. (2016). Homelessness: Number of rough sleepers in England rises at ‘unprecedented’ rate. The Independent.
Department for Work and Pensions. (2016). Housing benefit caseload statistics. Retrieved 03/04, 2016, from https://www.gov.uk/government/statistics/housing-benefit-caseload-statistics
Lance, C. E., & Vandenberg, R. J. (Eds.). (2009). Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences. London: Routledge.
Landlord Blog. Reasons why landlords shouldn’t accept DSS tenants. Retrieved 04/03, 2016, from http://www.propertyinvestmentproject.co.uk/blog/reasons-why-landlords-shouldnt-accept-dss-tenants/
Lawrenson, D. (2012). The seven reasons why landlords won’t let to tenants on benefits. The Guardian.
Void, J. (2012). Are landlords breaking the law when they demand no DSS? Retrieved 03/04, 2016, from https://johnnyvoid.wordpress.com/2012/04/26/are-landlords-breaking-the-law-when-they-demand-no-dss/
Contingency table: table of data for analysis of association
χ2 = chi square statistic, a test of association
p = p value, the probability of a test statistic occurring assuming the null hypothesis is true (eg ‘the probability of this result if there is no association’). p = 1.00 = 100%; p <.001 = less than 0.1%.
Φ = phi coefficient. A measure of strength of association, between -1 and +1.
OR = Odds Ratio. Odds of an event happening in one group divided by Odds of event in another. OR = 1 means event is equally likely in both groups. 2 means event is twice as likely in one group. 0.5 means half as likely in one group.
95% CI = 95% confidence interval. We can be 95% sure the true value of a statistic falls within these two numbers.
‘standardised residual’ = a residual is the error between a prediction and an observation. ‘Standardising’ converts it to a value on the standard normal scale, with a mean of 0 and a standard deviation of 1. Z = 1 means the result is one standard deviation from the mean. Z scores are associated with particular p values (eg z = 1.96 is p = .05).
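For illustration, the z-to-p conversion mentioned above can be done with the standard normal cumulative distribution function; a minimal sketch:

```python
import math

def two_tailed_p(z):
    """Two-tailed p value for a z score, via the standard normal CDF."""
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # P(Z <= z)
    return 2 * (1 - cdf)

print(round(two_tailed_p(1.96), 2))  # 0.05
```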
Power = The probability that a statistical test will correctly reject the null hypothesis when the alternate hypothesis is true.
False Discovery Rate = the proportion of studies, with this power and p value, that will report false discoveries by chance, assuming a given a priori probability that the effect is real (in this case, a conservative estimate of 10% was chosen).