Tuesday, December 23, 2008

Sjoberg (1998). World Views, Political Attitudes and Risk Perception

This study was published in Risk: Health, Safety & Environment in 1998 and is the first article I read about the limited power of cultural theory in predicting technological risk attitudes.

Basically, this study, with 141 subjects from a church organization in California, aimed to replicate Dake's piece published in 1990 (Orienting dispositions in the perception of risk). Dake argued in his study that "cultural bias and social relations" (including hierarchy, individualism, and egalitarianism) explained people's risk perception better than political ideology and personality.

However, Sjoberg criticized Dake's ideology measure as too heterogeneous and low in reliability compared to Rothman and Lichter's measure, because it included many items unrelated to business and economics.
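
To get a feel for what "low reliability" of a heterogeneous scale means, the usual index is Cronbach's alpha, which drops when a scale's items do not hang together. Below is a minimal sketch with simulated data (my own illustration, not Dake's or Sjoberg's items) comparing alpha for a homogeneous and a heterogeneous item set.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
n = 141  # same n as Sjoberg's sample; the data themselves are made up
core = rng.normal(size=n)  # a single underlying disposition

# Homogeneous scale: all six items tap the same construct
homogeneous = np.column_stack([core + 0.5 * rng.normal(size=n) for _ in range(6)])

# Heterogeneous scale: three items tap the construct, three tap unrelated things
heterogeneous = np.column_stack(
    [core + 0.5 * rng.normal(size=n) for _ in range(3)]
    + [rng.normal(size=n) for _ in range(3)]
)

print("alpha, homogeneous scale:  ", round(cronbach_alpha(homogeneous), 2))
print("alpha, heterogeneous scale:", round(cronbach_alpha(heterogeneous), 2))
```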

The variables tested in this study thus included political attitudes, trust, affect, and Dake's scales, with the dependent variables being 36 societal concerns and 51 risk ratings. Sjoberg found that although trust was pervasively correlated with concerns, it explained only 2.6% of the variance in risks. The same was true of Dake's world-view scales. Affect, in contrast, correlated well with both risks and concerns.
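
To make the "percentage of variance explained" concrete: for a single predictor it is just the squared correlation, so even a statistically detectable correlation can explain only a sliver of the variance. A minimal sketch with simulated data (the effect size and variable names are my own illustration, not Sjoberg's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 141  # sample size matching the study; data are simulated

trust = rng.normal(size=n)                 # hypothetical general-trust score
risk = -0.16 * trust + rng.normal(size=n)  # weak effect buried in noise

r = np.corrcoef(trust, risk)[0, 1]
print(f"correlation r = {r:.2f}, variance explained r^2 = {r**2:.1%}")
# A correlation around -0.16 corresponds to roughly 2-3% of variance explained,
# the order of magnitude Sjoberg reports for trust and for Dake's scales.
```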


The more surprising finding is that Dake's scales were not related to the technology items, leading Sjoberg to conclude that "Cultural Theory simply is wrong" (p. 149) [in its use for explaining risks].

Related reading:

Peters, E., & Slovic, P. (1996). The role of affect and worldviews as orienting dispositions in the perception and acceptance of nuclear power. Journal of Applied Social Psychology, 26(16), 1427-1453.

Sunday, December 21, 2008

Peters et al. (2007). Culture and technological innovation


This is one of the greatest articles I've ever read. It is enlightening both conceptually and methodologically. The study was authored by Hans Peter Peters and colleagues, who examined the impact of "institutional trust" and "appreciation of nature" on attitudes toward food biotechnology. Specifically, they compared the dynamics of the two factors in the USA and Germany.

They first explained why they studied general institutional trust rather than specific trust, even though trust in a specific issue or in the personnel responsible for that issue has been found to exert a larger impact on attitudes (e.g., Siegrist & Cvetkovich, 2000, in Risk Analysis). One advantage of using general trust lies in its ability to sort out the methodological ambiguity about the direction of the effect. Whereas it is likely that people's level of trust affects their perception of a technology, it is equally possible that their attitudes toward the technology shape how much they trust the officials or institutions responsible for researching or regulating it.

The authors conceptualize the relationship between trust and attitude by using the idea of a "syndrome," which refers to "a net of concepts that are tied together and vary jointly" (p. 196). Peters and colleagues argue that issue-specific trust is part of the attitudinal syndrome and is therefore inappropriate as a predictor, for the reasons outlined in the previous paragraph. General trust, however, is external to the food biotechnology attitude syndrome. Thus, if a correlation is found, the causal direction should run from general trust to attitude, not vice versa.

They found that "appreciation of nature" was associated with attitudes toward food biotechnology in both countries. However, different levels of appreciation resulted in different levels of support: Germans were found to be more appreciative of nature and therefore held more negative attitudes toward food biotechnology than the Americans. It should be noted that although the internal reliability of the "appreciation" measure was not very high, it predicted the outcome very well.

Quite surprisingly, trust had a positive relationship with attitudes only in the US, not in Germany. The authors attribute the finding to several reasons.

1. The attitudes of regulatory institutions are more consistent in the US; that is, more uniformly supportive of biotechnology. In Germany, by contrast, people saw mixed signals from the equivalent institutions, which is why trust appears to be less relevant there.

2. The higher relevance of trust in the US also results from the "technocratic" framing of the issue, as opposed to its more "political" framing in Germany. When an issue is discussed within a technocratic discourse, its scope is restricted to the scientific and administrative realm, which renders institutional trust more relevant in the decision-making process. This point is also illustrated in "Attention Cycles and Frames in the Plant Biotechnology Debate" by Nisbet and Huge.

3. Greater awareness of the issue also made Germans more likely than their US counterparts to generate their own assessments based on the available and relevant arguments.

4. Trust in institutions is a more effective mechanism for resolving uncertainty in a society emphasizing individualism/universalism than in one emphasizing particularism/collectivism. In the latter type of society, trust is mainly placed in specific people, not in impersonal actors.

In concluding, the authors suggest that the concepts of trust and appreciation of nature should be examined in the context of nanotechnology. They also call for studies of other cultural dimensions, such as moral and religious beliefs.

Sunday, December 14, 2008

The structure of public opinion (about Biotech) in The Making of a Global Controversy


In a chapter in Bauer and Gaskell's book, Biotechnology: The making of a global controversy, some researchers analyzed what constitutes public opinion towards the technology.

Interestingly, they found geographical differences in expectations toward biotechnology: countries in southern Europe were more optimistic about the technology, whereas those in the north were more pessimistic.

Another important finding pertains to the relationship between general attitudes toward biotechnology and attitudes toward specific applications of it, such as food production, genetic testing, xenotransplantation, and so on. The researchers found that these two sets of attitudes were closely related, suggesting that public opinion about biotech applications is very much based on people's general impression of biotechnology. This finding somewhat parallels Kahan et al.'s study, which found that people holding positive views of other technologies also tend to be optimistic about nanotechnology. It also resonates with my own research, which found that ambivalent attitudes toward genetically modified food and plants also resulted in similarly uncertain attitudes toward nanotechnology.

A quote from the chapter reads, "respondents applied general schema relating to biotechnology and technology to the more focused applications about which they were questioned" (p. 215).

Of course, knowledge is a critical determinant of public attitudes. However, the researchers found only moderate effects of knowledge (both subjective and objective) on perceptions of risk, utility, moral acceptability, and encouragement. While knowledge was not associated with attitude differentiation, defined as the variance of a respondent's attitudes across the six specific applications, it was connected to attitude extremity. In other words, those with higher knowledge tended to have stronger attitudes toward biotechnology, whether positive or negative.
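
To keep the two concepts apart, here is a small sketch (my own illustration, not the chapter's analysis) of how differentiation and extremity could be computed from one respondent's ratings of the six applications; the 1-5 scale and function names are assumptions.

```python
import numpy as np

def attitude_differentiation(ratings):
    """Variance of a respondent's ratings across applications:
    high values mean the applications are judged differently."""
    return float(np.var(ratings, ddof=1))

def attitude_extremity(ratings, midpoint=3.0):
    """Mean absolute distance from the scale midpoint:
    high values mean strong attitudes, whether positive or negative."""
    return float(np.mean(np.abs(np.asarray(ratings) - midpoint)))

# Illustrative respondents rating six biotech applications on a 1-5 scale
uniform_but_extreme = [5, 5, 5, 5, 5, 5]   # no differentiation, maximal extremity
varied_but_moderate = [2, 4, 3, 2, 4, 3]   # differentiates applications, mild views

for ratings in (uniform_but_extreme, varied_but_moderate):
    print(attitude_differentiation(ratings), attitude_extremity(ratings))
```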

One result that struck me is that religiosity accounted for only a marginal share of the variance in attitudes. Given that the most controversial aspects of biotechnology rest with its interference with nature and scientists' act of "playing god," it is surprising that religiosity, a main source of people's moral discipline, did not kick in as an attitudinal determinant. However, a limited effect at the individual level does not necessarily imply the same at the country level, as our recent study in Nature Nanotechnology has suggested.

The authors called for more well-designed information campaigns to keep people informed about the pros and cons of biotechnology. However, this call for more information should be understood in conjunction with several recent studies showing that people interpret information through their value predispositions or worldviews. Simply providing information does not, by itself, help people understand the technology very much.

Saturday, December 13, 2008

Nature Nanotechnology (December, 2008). Nanotechnology and the Public


For the first time, Nature Nanotechnology (Impact Factor: 14.917) published three studies on the social impact of nanotechnology in the same issue.

------------------------------------------------------------------------
Scheufele, D. A., Corley, E. A., Shih, T.-j., Dalrymple, K. E., & Ho, S. S. (2008). Religious beliefs and public attitudes toward nanotechnology in Europe and the United States. Nat Nano, advanced online publication.

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2008). Cultural cognition of the risks and benefits of nanotechnology. Nat Nano, advanced online publication.

Pidgeon, N., Harthorn, B. H., Bryant, K., & Rogers-Hayden, T. (2008). Deliberating the risks of nanotechnologies for energy and health applications in the United States and United Kingdom. Nat Nano, advanced online publication.
-------------------------------------------------------------------------

I will focus my discussion on the first two studies, since I haven't read the third one. To begin with, the Kahan and Scheufele studies share a common theme: there exists something that shapes how people interpret messages. Kahan and colleagues suggest it is cultural worldviews (such as egalitarian/hierarchical and communitarian/individualist), whereas Scheufele et al. see religiosity as the determining predisposition.

To put it simply, these researchers argue that the same information does not always conjure up the same meaning for people, because they hold different worldviews and value predispositions. People with hierarchical and individualist worldviews, given their pro-technology disposition, tend to read such information in a positive light. In contrast, those who are egalitarian and communitarian tend to see exactly the same information more negatively, because of their proclivity to strike a balance between nature and technology, along with a balance between the rich and the poor.

Religiosity does a similar job: those who are more religious tend to see more risks associated with nanotechnology. What is noteworthy is that this relationship appears not only at the individual level but also at the country level. Specifically, religion plays a relatively important role in the US compared with other countries at a similar level of human development, which in turn leads Americans to hold more reserved attitudes toward nanotechnology than people in more secular countries such as the UK, France, and Germany. Such differences in religiosity and secularization were also documented in the book Sacred and Secular by Norris and Inglehart.

These studies were also covered by a lot of media organizations. For example:

Religion 'shun Nanotechnology' on BBC

Attitudes about nanotechnology vary according to religious and cultural differences on US News and World Report


Sjoberg (2002). Attitudes toward technology and risk: Going beyond what is immediately given

Prof. Sjoberg of the Center for Risk Research in Sweden wrote an article about risk perception and the factors shaping it. He mainly focused on two technologies: gene technology and nuclear power.

As indicated by the title, this study goes beyond the properties of risks, which most scholars have taken as the essence of risk research. Specifically, he looked at other factors, such as the relationship between nature and technology as well as various types of trust, with respect to their ability to account for public risk perception. In addition, he examined how utility (benefit), risk, trust, and active risk denial accounted for general attitudes toward technology.

He found that the traditional "psychometric model," in which dread and novelty are considered the main determinants of public risk perception, accounted for only a limited proportion of the variance. Adding the variable "tampering with nature" increased the model's explanatory power. Furthermore, he contended that worldviews were not a good predictor of risk perception.
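
As a way to picture that gain in explanatory power, here is a hedged sketch with simulated data (the coefficients and data are my own illustration, not Sjoberg's) comparing the R-squared of a dread-and-novelty model with the same model after adding a tampering-with-nature item.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated respondents

dread = rng.normal(size=n)
novelty = rng.normal(size=n)
tampering = rng.normal(size=n)
# Risk perception driven mostly by tampering-with-nature in this illustration
risk = 0.2 * dread + 0.1 * novelty + 0.5 * tampering + rng.normal(size=n)

def r_squared(predictors, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("dread + novelty:           ", round(r_squared([dread, novelty], risk), 2))
print("plus tampering with nature:", round(r_squared([dread, novelty, tampering], risk), 2))
```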

His results also indicated a weaker relationship between trust and risk perception than that reported by Siegrist (The influence of trust...on the acceptance of gene technology, 2000, Risk Analysis). He attributed this difference to measurement: whereas Siegrist used Likert scales to measure trust and risk, Sjoberg used ratings. Sjoberg demonstrated in this study how a "common response factor" may heighten the relationship between trust and risk, a very interesting methodological point.
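
To illustrate the mechanism Sjoberg points to, here is a small simulation sketch (my own illustration of a common response factor, not his analysis): a response-style factor loads on both self-report measures, with opposite signs here because of how the items are keyed, and inflates the observed trust-risk correlation well beyond the true one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

true_trust = rng.normal(size=n)
true_risk = -0.2 * true_trust + rng.normal(size=n)  # modest true relationship

# Common response factor (e.g., an acquiescent or extreme responding style)
# that contaminates both Likert-type measures; signs depend on item keying.
style = rng.normal(size=n)
observed_trust = true_trust + 0.8 * style
observed_risk = true_risk - 0.8 * style

print("true correlation:    ", round(np.corrcoef(true_trust, true_risk)[0, 1], 2))
print("observed correlation:", round(np.corrcoef(observed_trust, observed_risk)[0, 1], 2))
# The shared response factor substantially strengthens the apparent trust-risk link.
```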

In general, attitudes toward technology were accounted for mostly by the perceived benefits, followed by risks and, sometimes, the technology's "replaceability." The consequences of a risk were also found to exert more impact on public attitudes than the size of the risk itself.

Overall, this study laid out a number of important factors shaping public perception of risks and attitudes toward technology. However, it seems to me that too many variables were examined without a coherent theme explicating why these factors should be explored together. Furthermore, as with other risk research, this study asked its respondents how "risky" a technology is without exploring the specific risks associated with it. For example, people may consider nuclear power risky because of radiation, waste management, and safety concerns, but not other aspects. Of course, the sample of 109 college students also raises questions about the generalizability of the results.