Fact or Myth: Jobless Medicaid Recipients Just Watch TV and Play Video Games All Day
How to Investigate the Reliability of a Statistical Study
Since the beginning of 2025, I have become increasingly familiar with my Senators’ email addresses. I have been contacting them frequently with my concerns about our current administration. Lately, many of my emails have been about multiple parts of the “big beautiful bill.” In my most recent email, I expressed my concerns about the enormous proposed cuts to Medicaid. I received a response from one of my Senators, Joni Ernst. Yes, the “well, we are all going to die” Joni Ernst.
There are a couple of claims from her latest response that I would like to address. The first is a claim that has been repeated by many Republicans defending the cuts that will take Medicaid away from an estimated 15+ million people:
“There are currently over 1.4 million illegal immigrants receiving Medicaid, which is draining this taxpayer benefit from those citizens who need it the most.”
Now, even if that claim were true, it would not account for the remaining 14+ million citizens who would still be kicked off Medicaid. But the claim has also been debunked: undocumented migrants are not eligible for federally funded Medicaid.
This use of misinformation in a response from a U.S. Senator left me concerned and put me on alert for more as I continued to read the response. This next questionable claim caught my attention and inspired me to write this article:
“For those Medicaid recipients who do not report working, the most common activity other than sleeping is watching TV and playing video games.”
Now, I immediately knew this likely had a purposeful negative slant because, after all, everyone’s most frequent activity is typically sleep; it is recommended that humans spend roughly a third of the day sleeping. But what is going on with this TV and video games claim? No study was cited in the response (a bit of a red flag), so I popped the claim into Google, and it turned out the claim originated from a study completed by Kevin Corinth at the American Enterprise Institute (AEI).
The study exists. The claim matches what the study shows. It even comes with a nice graph and has been repeated by The New York Post, Mike Johnson, and other members of Congress. On top of that, as far as I can tell, no one out there has refuted this evidence. So, it must be reliable…right? Let’s take a look.
What Makes A Study Reliable?
For a study to be reliable, it should reduce the possibility of bias as much as possible. In my previous article, To Trust or Not To Trust a Statistic, I discuss how bias is avoided by verifying that the study is based on a large representative sample with results that can be replicated. This is very important and addresses one type of possible bias known as sampling bias. In this article, I want to also address two other main types of bias: response bias and non-response bias.
Response Bias
Response bias occurs when the answers people give in a survey do not reflect the truth. The main ways this can happen are through leading questions and inaccurate self-reported data.
For example, let’s say we want to study how much time students spend studying for exams. I create a survey with the following question: “How many hours did you spend studying to be successful on the exam?” Is there any response bias we should be worried about? Yes. First of all, we don’t need the “to be successful” part of the question. That is leading. We want to keep the question as simple, clear, and concise as possible. The question also specifically asks for “hours.” This may cause a student to round their response up to 1 hour if, for example, they only spent 35 minutes studying, which would skew our results higher than the truth. Also, we are asking students to self-report this data. When self-reporting their study time, students are likely to report more time than they actually spent, or to misreport it because of poor memory recall.
So, how could we update this example to avoid response bias? We could have a student keep a detailed daily log of their activities in the week leading up to an exam. Then we could use those logs to calculate how much time the student logged studying. Even better, we could reduce the response bias by not telling the student that we were going to use the logs to investigate study time, so that they don’t inflate it.
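To see how much damage the “hours” wording alone can do, here is a minimal simulation. The study times are made-up numbers assumed only for illustration; the point is that rounding every answer up to the next whole hour inflates the average all by itself.

```python
import math
import random

random.seed(0)

# Hypothetical "true" study times, clustered around 35 minutes.
true_minutes = [max(5, random.gauss(35, 15)) for _ in range(500)]

# A question that asks for "hours" nudges respondents to round up
# to the next whole hour.
reported_minutes = [math.ceil(m / 60) * 60 for m in true_minutes]

true_avg = sum(true_minutes) / len(true_minutes)
reported_avg = sum(reported_minutes) / len(reported_minutes)

print(f"True average study time:     {true_avg:.1f} minutes")
print(f"Reported average study time: {reported_avg:.1f} minutes")
# The reported average comes out well above the true average,
# purely because of how the question was worded.
```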
Now, let’s get back to the time-use study completed by AEI. Does it have any potential response bias? We first have to understand that the AEI study used results from two other studies: the American Time Use Survey (ATUS) and the Current Population Survey (CPS). Participants first take the CPS and then, two months later, become eligible to also take the ATUS. As the survey’s name suggests, the goal of the ATUS is to measure how a typical American uses their time. The survey is conducted by the U.S. Bureau of Labor Statistics. The methodology of the study tells us how the time-use diary for each respondent is collected:
“This part of the interview is used to collect a detailed account of the respondent’s activities, starting at 4 a.m. the previous day and ending at 4 a.m. on the interview day. For each activity reported, the interviewer asks how long the activity lasted. For most activities, the interviewer also asks who was in the room or accompanied the respondent during the activity and where the activity took place.”
The results are collected through an open-ended conversation about what the participant did the day before. Do we think there is any opportunity for response bias here? Will this method give us a true, accurate representation of how a typical American uses their time?
This method does have potential response bias for a couple of different reasons. First, the answers could be inaccurate simply because of a person’s poor recall of events that happened the day prior. Second, the question is meant to represent what a person does on a typical day, but the only information collected is about the person’s activities from the previous day. That day could have been a sick day, a vacation day, a day off work or a working day, and so on.
The methodology does indicate that half of the surveys are completed on weekends and half are completed on weekdays. However, it does not ask the same person about a weekday and a weekend day, just about whatever day happened to be before they received the call, even if that day was not a typical day for that person. The study also does not separate a typical working day from a typical non-working day. All days are simply averaged together. Do your working and non-working days look the same?
What could a better survey look like? Have the participants keep a daily log of all of their activities for a longer period of time, say two weeks or a month. Separate the data into a typical working day and a typical non-working day. Be sure to have the participant flag any days with unusual activity they would not normally engage in (sick days, vacation days, etc.). Those days can then be handled appropriately to avoid skewing the data. Finally, calculate averages for each activity over this extended period. Now, this is just an overview of an idea (sketched in code below), but this design would already increase the accuracy of the data and remove a great deal of the possible response bias present in the current ATUS design.
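Here is a rough sketch of how that kind of log could be summarized. This is not the ATUS methodology; the column names and the handful of example rows are hypothetical, purely to show the idea of flagging unusual days and reporting working and non-working days separately.

```python
import pandas as pd

# Hypothetical daily logs for one participant (only a few rows shown).
# "unusual_day" marks sick days, vacation days, etc.
logs = pd.DataFrame({
    "day":            ["Mon", "Tue", "Sat", "Sun"],
    "worked":         [True, True, False, False],
    "unusual_day":    [False, True, False, False],
    "tv_games_hours": [1.0, 5.0, 3.5, 2.0],
})

# Drop flagged days so they don't skew the averages.
typical = logs[~logs["unusual_day"]]

# Report working and non-working days separately instead of blending
# them into one overall average.
summary = typical.groupby("worked")["tv_games_hours"].mean()
print(summary)
```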
All in all, the current design of the ATUS introduces significant opportunity for response bias leading to skewed responses. Given this, the resulting data should already be in question.
But, let’s go ahead and check for other types of bias. The next is non-response bias.
Non-Response Bias
Non-response bias occurs when the individuals selected for a survey who do not respond differ in meaningful ways from those who do. Every non-response introduces an opportunity for confounding variables and bias: Why did the individual not respond?
For a time-use survey, non-response could lead to significant skew if a common activity is itself the reason people don’t respond. Take work, for example: if work keeps an individual too busy to participate in the survey, that will skew the results. The same could be true for other activities, such as volunteering or caring for someone. The list is endless.
So, what was the response rate for the AEI study? This is a little complicated because, remember, there are two steps of surveying involved: the CPS and then the ATUS. We will look at both. The response rate for the CPS (the first survey) is reported to be 75% on average. Then, among the individuals eligible to take the ATUS, the response rate for the years included in the AEI study ranged from 32.4% to 42.0%. This means that as many as 6 to 7 out of every 10 participants did not respond to the time-use survey portion.
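For a rough sense of what that compounds to, here is the back-of-the-envelope arithmetic, using the response rates quoted above and assuming (as a simplification) that the two stages can simply be multiplied together:

```python
cps_rate = 0.75              # average CPS response rate
atus_rates = (0.324, 0.420)  # low and high ATUS response rates for the pooled years

for atus_rate in atus_rates:
    combined = cps_rate * atus_rate
    print(f"ATUS rate {atus_rate:.1%} -> about {combined:.1%} of originally "
          f"selected households end up providing a time-use diary")
# Only roughly a quarter to a third of the intended sample ever
# contributes a diary at all.
```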
Now, of course, you would ideally want 100% of people to respond to a survey, but you cannot always expect this. So, what is a good response rate? Well, there is no magic number here. Non-response can affect surveys differently depending on how many people were surveyed in total, the content of the questions, and much more. But given how easily non-response can be tied to the very activities a time-use survey is trying to measure, response rates this low certainly open the door to biased results.
So, the design of the study done by AEI introduces opportunity for both response and non-response bias. This isn’t a good start (to say the least) but let’s look at sampling bias.
Sampling Bias
Sampling bias occurs when the individuals selected for the sample do not accurately represent the population. I discuss this type of bias in more depth in To Trust or Not to Trust a Statistic, so I won’t get too far into the weeds here, but we will check the AEI study for sampling bias.
In order to check for sampling bias, we need to know how the sample was collected. The methodology of both the CPS and the ATUS indicates that the sample is drawn randomly from households in America. This is great! Random sampling is the best way to make sure every individual in the population has an equally likely chance of being selected.
But we also need to look at the sample size. The CPS reports a sample size of about 60,000. The ATUS sample varies from year to year but was estimated to be about 7,700 in 2024. These sample sizes are reasonably good for estimates about the American population as a whole.
So, sampling bias isn’t looking too bad so far. But we must look specifically at how the AEI study used the data from both of these surveys:
“I pool multiple years of these surveys, due to the relatively small sample that receives both the ATUS and CPS ASEC in a given year.”
This is a major red flag. A reliable study will always provide a sample size, specific methodology and possibly even the raw data. All of these are missing in the AEI study.
No sample size is given and multiple years were pooled together to estimate how all non-working American citizens on Medicaid use their time. This is problematic for many reasons. Pooling different years together introduces many confounding variables into the sample as we change from one year to another. Why were the years 2019, 2021-2023 included? Why not 2018 as well? 2017? Why not just 2022-2023? Were requirements to be on Medicaid the same in all of these years? The possibilities for confounding variables are endless. The author even addresses the concern of pooling multiple years together by leaving out results from 2020. While 2020 is a known year of skew for almost all data because of the pandemic, other years could introduce skew as well.
Moreover, while the CPS and the ATUS use random samples to represent the population accurately, we do not know how these samples were used in the AEI study. We don’t know how many people were selected from each year or how large the final sample is. A study that does not provide this methodology should always be questioned. Given this lack of information, the sample used in the AEI study may not even be random at all.
The unknown sample size also leads to issues in checking the last piece of the sampling puzzle: the margin of error. This is an important one, but a little more complicated. Any survey that uses a sample to approximate a population should always include a margin of error based on the sample size and the statistic being estimated. For example, the AEI study states that non-working Medicaid recipients spend 4.2 hours a day, on average, watching TV or playing video games. That 4.2 hours is a statistic that is only true for the sample. It does not mean we can assume this exact average holds for all non-working Medicaid recipients; we did not ask all of them. So, we need to include a margin of error. If we only sampled 10 people, the margin of error would be larger than if we sampled 100.
What is the margin of error for the AEI study? The AEI study does not report one. This, again, is a major red flag; a reliable statistical study always reports a margin of error. Why does it matter? Well, what if the margin of error were 4 hours? That would mean the estimated time that Medicaid recipients who do not report working spend watching TV or playing video games is anywhere between 0.2 hours and 8.2 hours, which would drastically change how we interpret this information. Now, the margin of error is not likely to be that large, but because no sample size and no margin of error are given, we cannot say.
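To make that concrete, here is a hedged illustration of how the margin of error around the 4.2-hour average would depend on the unreported sample size. The standard deviation used here is a made-up placeholder (the study does not report one either), so treat the numbers as an illustration of the relationship, not as the study’s actual uncertainty.

```python
import math

point_estimate = 4.2   # reported average daily hours of TV / video games
assumed_sd = 3.0       # hypothetical standard deviation of daily hours
z = 1.96               # multiplier for a 95% confidence interval

for n in (50, 500, 5000):  # hypothetical sample sizes
    moe = z * assumed_sd / math.sqrt(n)
    low, high = point_estimate - moe, point_estimate + moe
    print(f"n = {n:>5}: 4.2 ± {moe:.2f} hours ({low:.1f} to {high:.1f})")
# Without knowing n (or the spread of the data), a reader cannot tell
# whether the reported average is a tight estimate or a very loose one.
```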
Repetition
The last critical piece needed when evaluating and trusting a study’s results is repetition. Repetition means that another surveyor can collect a different random sample and come up with similar results. This will further verify and strengthen the study’s results. I have not been able to find any other study that has replicated these results.
Conclusion
The survey completed by AEI, and referenced by many others to verify that jobless Medicaid recipients spend a significant portion of their time watching TV and playing video games, does not pass the qualifications for a non-biased, statistically sound, replicable survey. Response bias is introduced by the open-ended recall of a single previous day’s activities, which can be skewed both by poor memory and by whether that day was typical for the individual. Non-response bias is possible given the low response rates. Sampling bias is introduced by arbitrarily pooling multiple years of data together and by failing to report a sample size and margin of error. And the study’s results have not been replicated.
Based on this study, the claim is not verified.
Educators
This study makes for a great example when teaching survey reliability, specifically response and non-response bias. The ATUS is conducted by the U.S. Bureau of Labor Statistics, so it is also a good example of how just checking the source is not always enough. Check out the sample lesson below and let me know what you think!
Sample Lesson Plan
Objective: Students will evaluate a statistical study for response and non-response bias.
Materials Needed: Projector or screen, Studies linked in article
Lesson Structure:
Have students read the methodology of the ATUS study. Specifically the setup: “This part of the interview is used to collect a detailed account of the respondent’s activities, starting at 4 a.m. the previous day and ending at 4 a.m. on the interview day. For each activity reported, the interviewer asks how long the activity lasted. For most activities, the interviewer also asks who was in the room or accompanied the respondent during the activity and where the activity took place.”
Ask students: What do you notice? Are there any issues with this study design?
Discussion: What conclusions can we make from this study?
Lecture: Define response and non-response bias; Explain the issues that arise if these types of bias are present in a study; Give simple examples of response and non-response bias.
Activity: Have students work in groups to discuss any response or non-response bias they can find in the ATUS study.
Conclusion: Ask students to reflect on one question they will ask the next time they see a statistical claim.
Please always feel free to share your thoughts on this article and any questions you were left with. I have a great passion for spreading the power of knowledge, and I strive to do this in an effective and accessible way.