The Times Mirror Survey of Technology results are based on telephone interviews conducted under the direction of Princeton Survey Research Associates among a nationwide sample of 3,603 adults, 18 years of age or older, and an oversample of 402 adult online users, during May and June of 1995. For results based on the total adult sample (N=3603), one can say with 95% confidence that the error attributable to sampling and other random effects is plus or minus 2 percentage points. For results based on online users (N=997), the margin of error is plus or minus 3 percentage points.
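The reported margins can be approximated from the standard 95% confidence formula for a proportion; the published figures also fold in "other random effects" such as weighting and clustering, which the simple formula below omits. A minimal sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96, deff=1.0):
    """95% margin of error for an estimated proportion p from n interviews.

    deff is an optional design-effect multiplier (1.0 = simple random
    sampling); the published figures also reflect weighting and other
    random effects beyond pure sampling error.
    """
    return z * math.sqrt(deff * p * (1 - p) / n)

# Simple-random-sampling margins, in percentage points:
print(round(100 * margin_of_error(3603), 1))  # 1.6 (published: +/- 2, with design effects)
print(round(100 * margin_of_error(997), 1))   # 3.1 (published: +/- 3)
```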
SURVEY SAMPLE DESIGN
The survey instruments for this survey were developed by Times Mirror in consultation with a wide range of specialists in emerging technologies, the mass media, and consumer behavior. An extensive review of past surveys on technology was also conducted. The questionnaire from Times Mirror’s 1994 Technology study served as a benchmark in the design of this questionnaire. Since the 1994 questionnaire had been given an extensive multi-stage pretest with over 100 respondents, pretesting for this 1995 study concentrated on the new sections dealing with the World Wide Web, e-mail and other online topics, and on the screener questions used to determine, for oversampling purposes, whether the respondent was an online user.
The designed sample is a random digit sample. The random digit aspect of the sample is used to avoid “listing” bias. According to the most recent estimates from the Bureau of the Census, there are approximately 96 million households in the United States, and just over 95% of them contain one or more telephones. Telephone directories list only about 73% of such “telephone households,” and numerous studies have shown that households with unlisted telephone numbers differ in several important ways from listed households. Moreover, nearly 15% of listed telephone numbers are “discontinued” due to household mobility and directory publishing lag, and it is reasonable to assume that a roughly equal number are working residential numbers too new to appear in published directories.
In order to avoid these various sources of bias, a random digit procedure designed to provide representation of both listed and unlisted (including not-yet-listed) numbers is used. The design of the sample ensures this representation by random generation of the last two digits of telephone numbers selected on the basis of their area code, telephone exchange (the first three digits of a seven digit telephone number), and bank number (the fourth and fifth digits).
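The last-two-digit randomization can be sketched as follows (the eight-digit bank prefixes below are hypothetical):

```python
import random

def rdd_sample(bank_prefixes, k, seed=1):
    """Random-digit dialing within banks: each sampled number is a working
    bank's first eight digits (area code + exchange + bank) followed by two
    randomly generated digits, so listed and unlisted numbers within a bank
    have the same chance of selection."""
    rng = random.Random(seed)
    return [rng.choice(bank_prefixes) + f"{rng.randrange(100):02d}"
            for _ in range(k)]

numbers = rdd_sample(["21255501", "31255512"], 5)  # hypothetical banks
```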
The selection procedure produces a sample superior to one drawn at random from a frame of listed telephone households, and its superiority is greater to the degree that telephone numbers are assigned to households independently of their publication status in the directory. That is, if unlisted numbers tend to be found in the same telephone banks as listed numbers, and if banks containing relatively few listed numbers generally also contain relatively few unlisted numbers, then the sample produced by the procedure described below will represent unlisted telephone households fully as well as it represents listed households. Random number selection within banks ensures that all numbers within a particular bank, whether listed or unlisted, have the same likelihood of inclusion in the sample, and that the resulting sample will represent listed and unlisted telephone households in the appropriate proportions.
The first eight digits of the sample telephone numbers (area code, telephone exchange, and bank number) are selected so that they are proportionately stratified by state, county, and telephone exchange within county. That is, the number of telephone numbers randomly sampled from within a given exchange is proportional to that exchange’s share of listed telephone households in the set of exchanges from which the sample is drawn.
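The proportional stratification step can be sketched as a largest-remainder allocation (the exchange names and household counts below are hypothetical):

```python
def allocate_by_exchange(listed_households, total_numbers):
    """Allocate sampled numbers to exchanges in proportion to each exchange's
    share of listed telephone households, using largest-remainder rounding so
    the allocations sum exactly to total_numbers."""
    total = sum(listed_households.values())
    exact = {ex: total_numbers * n / total for ex, n in listed_households.items()}
    alloc = {ex: int(q) for ex, q in exact.items()}
    shortfall = total_numbers - sum(alloc.values())
    by_remainder = sorted(exact, key=lambda ex: exact[ex] - alloc[ex], reverse=True)
    for ex in by_remainder[:shortfall]:
        alloc[ex] += 1
    return alloc

plan = allocate_by_exchange({"212-555": 600, "212-556": 250, "212-557": 150}, 100)
# -> {'212-555': 60, '212-556': 25, '212-557': 15}
```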
Only working banks of telephone numbers are selected. A working bank is defined as 100 contiguous telephone numbers containing three or more residential listings. Eliminating non-working banks of numbers from the sample raises the likelihood that any sampled telephone number will be associated with a residence from only 20% (if all banks of numbers were sampled) to between 60% and 70%.
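The working-bank screen itself is a simple filter; the bank prefixes and listing counts below are hypothetical:

```python
def working_banks(residential_listings, min_listings=3):
    """Keep only banks (blocks of 100 contiguous numbers) containing at least
    min_listings residential directory listings; other banks are dropped from
    the frame to raise the share of sampled numbers reaching a residence."""
    return {bank: n for bank, n in residential_listings.items() if n >= min_listings}

frame = working_banks({"21255501": 41, "21255502": 2, "21255503": 0, "21255504": 3})
# -> {'21255501': 41, '21255504': 3}
```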
The sample was released for interviewing in replicates. Using replicates to control the release of sample to the field ensures that the complete call procedures are followed for the entire sample and yields an appropriate number of completed interviews from each stratum. Again, this works to increase the representativeness of the final sample.
At least six attempts were made to complete an interview at every sampled telephone number. The calls were staggered over times of day and days of the week to maximize the chances of making contact with a respondent. In each contacted household in the general population adult sample, interviewers asked to speak with the “youngest male 18 or older who is at home”. If no eligible man was at home, interviewers asked to speak with “the oldest woman 18 or older who lives in the household”.
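The within-household selection rule can be sketched as follows (the household data structure, simple (sex, age) tuples, is an assumption for illustration):

```python
def select_respondent(adults_at_home, adults_in_household):
    """Youngest-male / oldest-female respondent selection.

    adults_at_home: (sex, age) tuples for adults currently at home
    adults_in_household: (sex, age) tuples for all adults who live there
    Ages under 18 are excluded; returns None if no eligible adult is found.
    """
    males_home = [a for a in adults_at_home if a[0] == "M" and a[1] >= 18]
    if males_home:
        return min(males_home, key=lambda a: a[1])   # youngest male at home
    females = [a for a in adults_in_household if a[0] == "F" and a[1] >= 18]
    if females:
        return max(females, key=lambda a: a[1])      # oldest female in household
    return None
```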
For the online users sample, interviewers used the general adult sample introduction and then took the respondent through a screening interview to determine if the respondent was an online user. Respondents who were qualified were then taken through the same questionnaire as the general population sample.
Non-response in telephone interview surveys produces some known biases in survey-derived estimates because participation tends to vary across subgroups of the population, and these subgroups are also likely to vary on questions of substantive interest. For example, men are more difficult than women to reach at home by telephone, and people with relatively low educational attainment are less likely than others to agree to participate in telephone surveys. To compensate for these known sources of bias, the sample data for this survey are weighted in analysis. Demographic weighting was used to bring the characteristics of each of the samples into alignment with the demographic characteristics of the relevant population.
Adult Sample Weighting
The demographic weighting parameters for this sample are derived from a special analysis of the most recently available Census Bureau Annual Demographic File (from the March 1993 Current Population Survey). This analysis produced population parameters for the demographic characteristics of Continental US telephone households with adults 18 or older, which are then compared with the sample characteristics to construct sample weights. The sample is weighted on the distributions of age by sex, education by sex, age by education, race and region.
The weights are derived using an iterative technique that simultaneously balances the distributions of all weighting parameters.
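The iterative technique described here is commonly known as raking, or iterative proportional fitting: the weights are rescaled to match each variable's target margins in turn, and cycling over the variables converges toward weights that satisfy all margins at once. A minimal sketch (the variable names and target proportions below are hypothetical):

```python
def rake(weights, categories, targets, iterations=25):
    """Iterative proportional fitting ('raking') of respondent weights.

    weights: one starting weight per respondent
    categories: dict var -> list of each respondent's category for that var
    targets: dict var -> dict category -> target population proportion
    """
    w = list(weights)
    for _ in range(iterations):
        for var, cats in categories.items():
            totals = {}
            for wi, c in zip(w, cats):
                totals[c] = totals.get(c, 0.0) + wi
            grand = sum(totals.values())
            # Rescale so this variable's weighted margins hit the targets.
            w = [wi * targets[var][c] * grand / totals[c]
                 for wi, c in zip(w, cats)]
    return w

w = rake([1.0] * 4,
         {"sex": ["M", "M", "F", "F"], "age": ["18-34", "35+", "18-34", "35+"]},
         {"sex": {"M": 0.48, "F": 0.52}, "age": {"18-34": 0.35, "35+": 0.65}})
```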
Online Users’ Weighting
The demographic weighting parameters used for the online users oversample are the demographics of the weighted online users in the general population adult sample. The demographics used were age, sex, race, education, and region.
The weighted oversample of online users was then combined with the weighted online users from the general population adult sample, with one final adjustment to ensure that the general population online users and the oversample online users were in their correct proportion relative to one another.
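The final proportion adjustment can be sketched as a rescaling of the two weight sets; the target share below is hypothetical:

```python
def combine_samples(gen_weights, over_weights, gen_share):
    """Rescale two weighted files so the general-population online users carry
    gen_share of the combined weighted total and the oversample carries the
    rest; relative weights within each file are preserved."""
    total = sum(gen_weights) + sum(over_weights)
    f_gen = gen_share * total / sum(gen_weights)
    f_over = (1 - gen_share) * total / sum(over_weights)
    return ([w * f_gen for w in gen_weights],
            [w * f_over for w in over_weights])

# Hypothetical weights and target share:
gen, over = combine_samples([1.2, 0.8, 1.0], [0.9, 1.1], gen_share=0.7)
```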
Online Users versus Non-Users Analysis
Online users and non-users differ on several dimensions, such as social involvement, time spent in various activities, and political knowledge. However, differences between users and non-users are also evident on numerous demographics: users tend to have higher incomes and more education, and are more likely to be male and more likely to be young. A special analysis was conducted to compare these two groups (users and non-users) while holding constant the effects of demographic differences between them. The analysis was designed to answer this question: if non-users looked, demographically, like users, would there still be differences between the two groups on the other dimensions listed above, or would these differences disappear when the two groups were demographically balanced?
The analysis involved the calculation of a second-stage weight. The non-users (weighted as described above) were additionally weighted to bring their aggregate demographic composition into alignment with the demographics of the online users (also weighted as described above). The variables used in this weighting were age, sex, race, education, income and region.
The effect of this second-stage weight was to balance the two groups demographically, removing the effects of those demographic differences between them.
Project participants included Scott Keeter, Cliff Zukin and Margaret Petrella as survey analysts; Russell Neuman and Michael Liebhold, consultants; Robert C. Toth, editor; Carolyn Miller, survey statistician; Carol Bowman, research director; and Andrew Kohut, director.