‘No social security claimants’ housing clauses are associated with disproportionate difficulty and harm for disabled people in the UK. [Draft, not reviewed]

Many private landlords stipulate that they will not let their property to social security claimants, for example by advertising ‘No DSS’ in the UK. This is a contentious practice: some claim it is a necessary business practice, while others argue it is discriminatory. Retrospective survey data (N = 521), analysed by loglinear frequency analysis, show significant interactions between disability, social security claims and both difficulty and harm when finding accommodation. Disabled claimants experience difficulty (OR = 4.97, 95% CI 2.59–9.57) and harm (OR = 2.79, 95% CI 1.59–4.90) due to No DSS clauses more frequently than non-disabled claimants.

Keywords: disability; housing; social security; discrimination

Introduction

In the UK private housing rental market, advertisements for properties often contain the phrase ‘No DSS’ or similar (e.g. http://tinyurl.com/jcwpf24). This refers to the now-defunct Department of Social Security and, when used in this context, is well understood to mean that the landlord will not consider letting the property to anyone in receipt of social security benefits. This is an issue of contention, particularly given the current economic climate. Homelessness is rising (Cooper, 2016), while the number of Housing Benefit claims has increased by approximately 570,000 since 2008 (Department for Work and Pensions, 2016).

Property owners claim they need to refuse benefit claimants for financial stability (e.g. Property Investment Project’s Landlord Blog at http://tinyurl.com/RefuseDSS) or because such claimants are perceived as a risk to the property (Lawrenson, 2012). Conversely, some argue that ‘No DSS’ clauses are discriminatory and thus illegal (e.g. blog by Johnny Void, http://tinyurl.com/JVoid). Under the Equality Act (2010), indirect discrimination on the grounds of disability is illegal. Indirect discrimination occurs when a rule or policy is enforced uniformly but has a negative effect on particular groups, putting them at a particular disadvantage.

As there has been little to no academic or legal work on this topic and the existing evidence is anecdotal, the purpose of this study is to establish whether disabled people truly face greater difficulty than non-disabled people in the private rental market due to ‘No DSS’ clauses, and whether this causes harm.

Method

Ethical Considerations

As I am not affiliated with any institution, I was not able to obtain approval from an Ethics Review Board for this project. However, the project was carried out according to the principles of the British Psychological Society, my accrediting body. To ensure security and anonymity for participants, limited data was collected and links to appropriate services (e.g. housing organisations) were provided.

Participants

In total, 521 participants completed one of two online surveys. The surveys were advertised on social networks in multiple ways to encourage responses from disabled and non-disabled people, and respondents were encouraged to share the surveys; participants were thus recruited by snowball and convenience sampling. Of these, 115 participants provided age data. Counts of responses are shown in Table 1.

Of the 115 participants reporting age, the median age for disabled participants was 33 years (N = 38, IQR = 9), while for non-disabled participants the median age was 30.50 years (N = 77, IQR = 16). This difference was not significant (Mann-Whitney U, z = .375, p = .71). There was also no significant difference (z = .272, p = .79) between the median ages of benefit claimants (N = 46, Median = 32.50, IQR = 15) and non-claimants (N = 69, Median = 33.00, IQR = 10).

Table 1.

Response Frequencies with Percentages

Answer | Benefits N (%) | Disabled N (%) | Difficulty N (%) | Harm N (%)
No | 311 (59.70) | 375 (72.00) | 338 (64.90) | 374 (71.80)
Yes | 210 (40.30) | 146 (28.00) | 183 (35.10) | 147 (28.20)

Questionnaire

The questionnaire consisted of 8 questions: one multiple-choice question to determine which benefits the respondent claims (if any), and a series of yes/no questions on disability, difficulty due to No DSS clauses, and harm from inappropriate accommodation. A separate questionnaire asked for age and gender, and whether the respondent had completed the previous questionnaire.

Results

The frequencies of responses are shown above in Table 1. From this point, ‘Benefits’ refers to whether the participant claims any benefits, ‘Disabled’ refers to whether the participant identifies as disabled, ‘Difficulty’ refers to whether the participant experienced difficulty finding accommodation due to No DSS policies and ‘Harm’ refers to whether the participant or their household experienced harm from living in unsuitable accommodation when landlords would not consider renting to them due to their circumstances. The three-way (Difficulty x Benefits x Disability, and Harm x Benefits x Disability) contingency tables are shown in Table 2.

Table 2.

Three-Way Contingency Tables

Difficulty:

Disabled | Benefits | Difficulty: No | Difficulty: Yes
No | No | 257 | 23
No | Yes | 44a | 51a
Yes | No | 20 | 11
Yes | Yes | 17a | 98a

Harm:

Disabled | Benefits | Harm: No | Harm: Yes
No | No | 251 | 29
No | Yes | 61a | 34a
Yes | No | 17 | 14
Yes | Yes | 45a | 70a

Notes. a denotes values used to construct odds ratios.

Loglinear Frequency Analysis

Results of the loglinear frequency analyses (see Tabachnick & Fidell, 2007) of the two contingency tables shown in Table 2 are shown below in Table 3 (for Difficulty) and Table 4 (for Harm). For both Harm and Difficulty, there are significant interactions with disability and ongoing benefit claims. The associations between disability and difficulty, and between disability and harm, remain significant after the effect of benefit claims is removed.
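For readers who want to reproduce this kind of analysis, a loglinear model can be fitted as a Poisson regression on the cell counts, with each effect’s G² obtained as the deviance difference between nested models. Below is a minimal sketch, not the code used for this study, assuming Python with pandas, statsmodels and scipy; the counts are copied from Table 2.

```python
# Loglinear analysis of the Difficulty x Benefits x Disabled table as a
# Poisson GLM on the cell counts (counts taken from Table 2).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

counts = pd.DataFrame({
    "disabled":   ["No", "No", "No", "No", "Yes", "Yes", "Yes", "Yes"],
    "benefits":   ["No", "No", "Yes", "Yes", "No", "No", "Yes", "Yes"],
    "difficulty": ["No", "Yes", "No", "Yes", "No", "Yes", "No", "Yes"],
    "n":          [257, 23, 44, 51, 20, 11, 17, 98],
})

# The saturated model reproduces the observed counts exactly (deviance ~ 0).
full = smf.glm("n ~ disabled * benefits * difficulty",
               data=counts, family=sm.families.Poisson()).fit()

# Drop the Difficulty x Disabled term (and the three-way term containing it)
# to test the partial Difficulty x Disabled association (df = 2).
reduced = smf.glm("n ~ disabled * benefits + benefits * difficulty",
                  data=counts, family=sm.families.Poisson()).fit()

g2 = reduced.deviance - full.deviance   # likelihood-ratio G2
df = reduced.df_resid - full.df_resid
print(f"G2 = {g2:.2f}, df = {df}, p = {stats.chi2.sf(g2, df):.4g}")
```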

Table 3.

Loglinear Analysis Effects for Difficulty x Benefits x Disability Contingency Table

Effect | G² | df | p
Three-Way Interaction | 375.68 | 4 | <.0001
Benefits x Difficulty | 207.70 | 1 | <.0001
Benefits x Disabled | 127.12 | 1 | <.0001
Difficulty x Disabled | 137.64 | 1 | <.0001
Benefits x Difficulty* | 110.92 | 2 | <.0001
Benefits x Disabled* | 30.34 | 2 | <.0001
Difficulty x Disabled* | 40.86 | 2 | <.0001

Notes. G² ≈ χ². * denotes partial relationship with the third variable removed.

Table 4.

Loglinear Analysis Results for Harm x Benefits x Disability Contingency Table

Effect | G² | df | p
Three-Way Interaction | 236.74 | 4 | <.0001
Benefits x Harm | 76.86 | 1 | <.0001
Benefits x Disabled | 124.36 | 1 | <.0001
Harm x Disabled | 82.24 | 1 | <.0001
Benefits x Harm* | 30.14 | 2 | <.0001
Benefits x Disabled* | 77.64 | 2 | <.0001
Harm x Disabled* | 35.52 | 2 | <.0001

Notes. G² ≈ χ². * denotes partial relationship with the third variable removed.

On inspection of the residual plots (Figures 1 and 2) for Difficulty and Harm, it is apparent that this relationship occurs because disabled participants are significantly over-represented in both the Yes-Benefits-Yes-Difficulty and Yes-Benefits-Yes-Harm cells of the contingency tables. Comparing disabled with non-disabled benefit claimants, disabled claimants more frequently experience difficulty due to No DSS clauses (OR = 4.97, 95% CI 2.59–9.57) and harm from not being able to find suitable accommodation (OR = 2.79, 95% CI 1.59–4.90).
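The odds ratios follow directly from the cells marked in Table 2. A minimal sketch of the calculation, assuming the usual Wald interval on the log odds ratio:

```python
# Odds ratio with a 95% Wald confidence interval, from a 2x2 table laid out
# as (a, b) = first group (Yes, No) and (c, d) = comparison group (Yes, No).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Difficulty, claimants only: disabled 98 Yes / 17 No vs non-disabled 51 / 44.
print(odds_ratio_ci(98, 17, 51, 44))   # ~ (4.97, 2.59, 9.57)

# Harm, claimants only: disabled 70 Yes / 45 No vs non-disabled 34 / 61.
print(odds_ratio_ci(70, 45, 34, 61))   # ~ (2.79, 1.59, 4.90)
```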

Figure 1.

Mosaic plot of Difficulty x Disability x Social Security Claim. Shaded areas represent standardised residuals ≥ 4.


Figure 2.

Mosaic plot of Harm x Disability x Social Security Claim. Shaded areas represent standardised residuals ≥ 4.


Discussion

First, we can conclude that No DSS clauses are associated with harm for all benefit claimants, disabled and non-disabled alike. However, they disproportionately affect disabled people: there is a significant interaction between disability and ongoing benefit claims for both experiencing difficulty due to No DSS clauses when trying to find a home and experiencing harm from not finding suitable accommodation.

King (personal communication, 2016) raised the potential confound of age – given the logical association between increasing age and decreasing health, perhaps the effects observed are related to age discrimination rather than disability per se. This is not supported by the present data. In the sample that gave demographic data, there were no significant differences in age between disabled and non-disabled participants, or benefit claimants and non-claimants.

There are flaws in the present study which may limit its generalisability. For example, information was only collected online, and thus data could not be collected from people who, for whatever reason, lack an internet connection. Further, to keep the questionnaire as short and simple as possible, it was not possible to ask about specific types of harm or consequences, which limited the depth of the analysis. This presents a further avenue for research, as does the fact that this analysis considered social security benefits as a homogeneous group. No differentiation was made between, for example, Jobseeker’s Allowance and Child Benefit claims, and ‘No DSS’ clauses, while ostensibly applying to all benefits, may not be applied uniformly across all benefit types.

Some concerns were raised about the validity of self-report data in this context. It was suggested that participants may try to bias the results, given the highly political subject matter and current socio-political climate. The questionnaire was designed to limit this: participants were told the project was looking at issues that can arise when trying to find a home, without explicit reference to disability or social security. Ultimately, there is ‘no strong evidence to lead us to conclude that self-report data are inherently flawed’ (Chan, 2009, p. 330), and self-report was the only practical method of collecting this data; there is no other feasible way to determine whether participants had experienced harm, for example. While it is possible that some participants may have intended to bias the results, the sample size, large effects, and previous research (e.g. Gosling, Vazire, Srivastava & John, 2004) suggest this is unlikely and that any effects would be minimal.

To better understand the relationship between disability, benefit claims and housing discrimination, future research could examine this issue prospectively, observing the relative difficulties of disabled and non-disabled benefit claimants. Approaching the issue from a qualitative perspective, to better understand the particular difficulties and harms disabled people experience and the reasons why landlords use No DSS clauses, would also be valuable and could provide insights that may be applied to reduce this discrimination.

It would also be worthwhile for future work to consider this issue from an international perspective, given the current socio-political climate in the UK, which has undergone significant social security reforms that have been demonstrated to disadvantage poorer, less healthy claimants (Hume, in press). It is possible that other states have implemented discrimination legislation or social security regulations which either protect against this ‘No DSS’ effect or make it irrelevant. For example, landlords cite the insecurity of social security payments due to conditionality as a reason they refuse all social security tenants; in states where payments are not conditional in this way, the continued existence of the ‘No DSS’ effect would provide evidence against this justification. Such insights would be valuable in tackling disability discrimination in housing in the UK and elsewhere.

In the UK, indirect discrimination, as defined by the Equality Act 2010, occurs when one rule is universally enforced but causes particular disadvantage for members of protected groups, including disabled people. While the Act requires that the discriminatory act not be a “proportionate means of achieving a legitimate aim” (a legal question beyond the scope of this study), this study establishes that disabled people are significantly more likely to experience difficulty finding a home due to No DSS clauses, and that this difficulty is associated with harm to the person or their household. Amongst benefit claimants, the odds of a disabled person experiencing difficulty finding accommodation due to No DSS clauses are approximately five times those of a non-disabled person; disabled people thus face a particular disadvantage from these rules.

References 

Chan, D. (2009). “So why ask me? Are self-report data really that bad?” In Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences, edited by Charles E. Lance and Robert J. Vandenberg, 309-336. London: Routledge.

Cooper, C. (2016). Homelessness: Number of rough sleepers in England rises at ‘unprecedented’ rate. Independent, February 25. http://www.independent.co.uk/news/uk/home-news/homelessness-number-of-rough-sleepers-in-england-rises-at-unprecedented-rate-a6895826.html

Department for Work and Pensions (2016). Housing benefit caseload statistics. [cited 3 April 2016]. Available from https://www.gov.uk/government/statistics/housing-benefit-caseload-statistics.

Gosling, S. D., S. Vazire, S. Srivastava and O. P. John (2004). Should we trust web-based studies? A comparative analysis of six preconceptions about internet questionnaires. American Psychologist, 59(2), pp. 93-104.

Hume, J. N. (In press) Bias in the Work Capability Assessment: A human rights issue? Radical Statistics.

Lance, C. E., and R. J. Vandenberg, eds. (2009). Statistical and methodological myths and urban legends: Doctrine, verity and fable in the organizational and social sciences. London: Routledge.

Lawrenson, D. (2012). The seven reasons why landlords won’t let to tenants on benefits. The Guardian, May 2. http://www.theguardian.com/housing-network/2012/may/02/tenants-housing-benefit-private-landlords

Tabachnick, B. G., and L. S. Fidell (2007). Using multivariate statistics. London: Pearson.


The Trite Sciencisms, part 1.

“The amount of energy necessary to refute bullshit is an order of magnitude greater than to produce it.” Alberto Brandolini.

Richard Morey writes here that we are training psychology undergraduates in the art of bullshit, with particular reference to how we teach research methods. I’ve been thinking for some time (I started this post about three weeks before I saw that blog!) that the problem is much more pervasive than psychology undergraduates alone. It affects how our society interacts with research at a more fundamental level.

Ben Goldacre wrote that Gillian McKeith’s “academic” “work” had an ‘air of science’ about it – to the layperson, it seemed ‘sciencey’, and that conferred validity. It had posh words and superscript numbers pointing to footnotes and references. There are a fair few ‘scientific memes’ (to use the term in its original academic sense) that are often repeated by laypeople as if they make a worthwhile scientific or methodological point, but in reality they add nothing of substance and only detract from the debate, given the effort it takes to refute the usually-meaningless claims or to move the debate on from them.

I call these the Trite Sciencisms: overused, unoriginal statements which have been divorced from their proper context and have gained cultural cachet due to that ‘air of science’, but which are often simple truisms. Critiquing research is a balancing act – what effect will X have had on the conclusion? Can it still be trusted? Does Y counteract X? – while the Trite Sciencisms are often used merely to disregard research out of hand, and that’s why they are harmful.

I’m going to start with the one that annoys me the most, as a statistics nerd.

Correlation does not imply causation.

This statement has its origins in formal logic, where ‘implies’ means ‘necessitates’. Properly read, it means that the presence of causation cannot be directly inferred from the presence of correlation [e.g. temporal correlation] alone – a correlation can exist between two variables when, in reality, both are caused by a third.

In natural language, however, the meaning has been warped to be something more like “correlations are not evidence of causation”, which is used to dismiss all non-experimental research.

The main reason this annoys me is that it is, statistically, gibberish. Inference of causality is done purely by logical criteria (e.g. cause precedes effect), not statistical ones. The correlation coefficient r and the standardised mean difference d are directly related, and you can convert one to the other quite easily: r = d/sqrt(d² + a), where a is a correction factor for varying sample sizes. In fact, r is often used as the effect size measure for the t-test, which tests differences in the means of two samples (as in an experimental study). The statistics used in observational and experimental studies are frequently interchangeable (albeit some are easier to understand in specific contexts), and thus neither alone can demonstrate causality.
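For equal group sizes the correction factor a works out at 4; more generally, a = (n1 + n2)² / (n1 × n2). A quick sketch of the conversion (the standard meta-analysis formula, nothing specific to this post):

```python
import math

def d_to_r(d, n1, n2):
    # Correction factor for the group sizes; equals 4 when n1 == n2.
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d ** 2 + a)

print(d_to_r(0.8, 50, 50))   # a 'large' d of 0.8 corresponds to r ~ 0.37
```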

More insidious, however, is that this argument is used to invalidate swathes of research that cannot be done experimentally – usually for ethical reasons, or because the topic is not amenable to experimental manipulation. In its lay use, it is applied as readily to the relationship between socioeconomic deprivation and rates of depression as to the relationship between the rate of assaults and the rate of ice cream sales. For one of these, a logical case can be made to demonstrate causality (because all arguments of causality must be based on logic), but both are invalidated by this, my most hated Trite Sciencism.

There are, thankfully, other things you can say in its place. A lot of the time, people mean that they think an observed relationship is ‘spurious’ – that a causal relationship hasn’t been logically shown. Ask why the researchers are claiming X causes Y, instead of simply asserting that observing a relationship does not allow inference. Of course, most of the time people haven’t read the study – where the issue of causality would be addressed – and rely on media reporting (more on that later).

The study had a small sample size.

This is another one that annoys me as a statistics nerd: a truism deployed against any piece of research that someone simply wants to dismiss, for whatever reason. The criticism has its place in certain contexts – statistical power analysis, various qualitative methodologies – but is a meaningless statement outside them.

Consider two hypothetical populations: one has a population mean of 100 and an SD of 10; the other has a mean of 105 and an SD of 10. We can’t know what the population statistics are, so we sample them. In our first study, we sample 10 people from each population [using randomly generated data], giving us M = 98.46, SD = 13.42 vs M = 106.87, SD = 3.13. There is no significant difference observed in the samples, t(18) = -1.930, p = .07.

Let’s try this again with a sample of 100 in each group. This gives us M = 97.96, SD = 9.91 vs M = 105.26, SD = 9.72. There is a statistically significant difference in this example, t(198) = -5.260, p < .001. This is what larger sample sizes do – they allow us to detect smaller differences, and to be more certain of their existence. In the first example, the chance of observing a difference as large as or larger than we did, assuming both population means were equal, was 7%; in the second example, the probability was 0.1%. The second passes the standard psychological science threshold of 5%, but the smaller sample doesn’t.
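If you want to check this yourself, the whole demonstration is a few lines of Python; a sketch is below (your numbers will differ from mine, since the samples are random):

```python
# Draw samples from N(100, 10) and N(105, 10) and compare the means at two
# sample sizes; larger samples detect the same 5-point difference more easily.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

for n in (10, 100):
    a = rng.normal(100, 10, n)
    b = rng.normal(105, 10, n)
    t, p = stats.ttest_ind(a, b)
    print(f"n = {n}: t({2 * n - 2}) = {t:.3f}, p = {p:.3f}")
```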

What if the second population mean was 150 instead of 105? This gives us M = 98.51, SD = 11.27 vs M = 151.52, SD = 6.96, with 10 samples from each population. In this instance, there is a significant difference, t(18) = -12.66, p < .001.

The issue in the first example was never the small sample size as such, but the power of the analysis. Power refers to the ability of a statistical analysis to detect an effect that truly exists. Increasing the sample size is one method of increasing statistical power, but there are others, such as increasing the effect size – which is essentially what we did in the final example. In the real world, you can do this by, for example, administering higher doses of an experimental drug.

Conversely, large sample sizes can be a cause for concern, but are rarely criticised because they are seen as inherently better. In this example, we have two populations with identical means (100) and standard deviations (10), from which we randomly sample 2,000 cases each. This gives us M = 100.35, SD = 9.90 vs M = 99.95, SD = 10.09. This difference is statistically significant, t(3998) = 2.204, p = .028.

Meanwhile, if we only use 10 cases from each population, the result is not significant, t(18) = 1.088, p = .29. Thus, in this example, the smaller sample gives us the true answer while the larger sample gives us a spurious, chance result.

What larger sample sizes do is let us estimate population parameters (e.g. the population mean) with more precision. In the above example, if we use 20 samples, we can say with 95% confidence that the population mean is between 94.24 and 103.55 (we ‘know’ the true mean is 100 because we generated the population, but in ‘real life’ we wouldn’t). With 4,000 cases, we can say with 95% confidence that the true mean is between 99.69 and 100.31. If you’re testing for a difference (e.g. ‘does drug 1 have fewer side effects than drug 2?’), small sample sizes are often fine, and won’t lead to clinically unimportant differences (e.g. 0.02% fewer side effects) being shown as statistically significant. If you want to estimate the number of side effects precisely, a larger sample is better.
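This, too, is easy to verify by simulation; a sketch (the intervals will vary with the random draws):

```python
# 95% confidence intervals for the mean of draws from N(100, 10):
# the interval shrinks as the sample grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

for n in (20, 4000):
    x = rng.normal(100, 10, n)
    se = x.std(ddof=1) / np.sqrt(n)
    lo, hi = stats.t.interval(0.95, n - 1, loc=x.mean(), scale=se)
    print(f"n = {n}: 95% CI = ({lo:.2f}, {hi:.2f})")
```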

The general advice in statistics is to use as small a sample size as you can get away with, calculated from the effect size you expect, your significance level, and so on. Adding yet more cases is a known method of ‘fudging’ results – minute, irrelevant differences can pass significance thresholds by chance alone, which is harder in smaller samples, because passing the threshold in a small sample requires a larger effect size.
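That calculation is routine. For an independent-samples t-test, for example, statsmodels’ power module will solve for the required n per group given the expected effect size, alpha and desired power:

```python
# A priori sample-size calculation: cases per group needed to detect a
# 'medium' effect (d = 0.5) at alpha = .05 with 80% power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.8, alternative="two-sided")
print(round(n_per_group))   # ~ 64 per group
```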