Tuesday, December 23, 2008

Sjoberg (1998). World Views, Political Attitudes and Risk Perception

This study was published in Risk: Health, Safety & Environment in 1998 and is the first article I read about the limited power of cultural theory in predicting attitudes toward technological risk.

Basically, this study, with 141 subjects from a church organization in California, aimed to replicate Dake's piece published in 1990 (Orienting dispositions in the perception of risk). Dake argued in his study that "cultural bias and social relations" (including hierarchy, individualism, and egalitarianism) explained people's risk perception better than political ideology or personality variables.

However, Sjoberg criticized Dake's ideology measure for being too heterogeneous and having low reliability compared with Rothman and Lichter's measure, in that it included many items unrelated to business and economics.

The variables tested in this study thus included political attitudes, trust, affect, and Dake's scales, with the dependent variables being 36 societal concerns and 51 risk ratings. Sjoberg found that although trust was pervasively correlated with concerns, it explained only 2.6% of the variance in risk ratings. The same was true of Dake's world-view scales. Affect, in contrast, correlated well with both risks and concerns.


The more surprising finding is that Dake's scales were not related to technology, leading Sjoberg to conclude that "Cultural Theory simply is wrong" (p. 149) as an explanation of risk perception.

Related reading:

Peters, E., & Slovic, P. (1996). The role of affect and worldviews as orienting dispositions in the perception and acceptance of nuclear power. Journal of Applied Social Psychology, 26(16), 1427-1453.

Sunday, December 21, 2008

Peters et al (2007). Culture and technological innovation


This is one of the greatest articles I've ever read. It is enlightening both conceptually and methodologically. The study was authored by Hans Peter Peters and colleagues, who examined the impact of "institutional trust" and "appreciation of nature" on attitudes toward food biotechnology. Specifically, they compared the dynamics of the two factors in the United States and Germany.

They first explained why they studied general institutional trust rather than specific trust, even though trust in a specific issue or in the personnel responsible for it has been found to exert a larger impact on attitudes (e.g., Siegrist & Cvetkovich, 2000, in Risk Analysis). One advantage of using general trust lies in its ability to sort out the methodological ambiguity about the direction of the effect. Whereas it is likely that people's level of trust affects their perception of a technology, it is equally possible that their attitudes toward the technology shape how much they trust the officials or institutions responsible for researching or regulating it.

The authors conceptualize the relationship between trust and attitude using the idea of a "syndrome," which refers to "a net of concepts that are tied together and vary jointly" (p. 196). Peters and colleagues argue that issue-specific trust is part of the attitudinal syndrome and therefore cannot appropriately be used as a predictor, for the reasons outlined in the previous paragraph. General trust, however, is external to the food biotechnology attitude syndrome. Thus, if a correlation is found, the causal direction should run from general trust to attitude, not vice versa.

They found that "appreciation of nature" was associated with attitudes towards food biotechnology in both countries. However, the different levels of appreciation results in different levels of support. Germans were found to be more appreciative of nature and therefore possessed more negative attitudes towards food biotechnology than the Americans. It should be noted that although the internal reliability of the measure of "appreciation" is not very high, it predicts the outcome very well.

Quite surprisingly, trust had a positive relationship with attitudes only in the US, not in Germany. The authors attribute this finding to several reasons.

1. The attitudes of regulatory institutions are more consistent in the US; that is, they are more supportive of biotechnology. In Germany, by contrast, people saw mixed signals from the equivalent institutions, which is why trust appears less relevant there.

2. The higher relevance of trust in the US also results from the "technocratic" framing of the issue, as opposed to the more "political" framing in Germany. When an issue is discussed within a technocratic discourse, its scope is restricted to the scientific and administrative realm, which renders institutional trust more relevant in decision making. This point is also illustrated in Attention Cycles and Frames in the Plant Biotechnology Debate by Nisbet and Huge.

3. Greater awareness of the issue also made Germans more likely than their US counterparts to form their own assessments based on the available and relevant arguments.

4. Trust in institutions is a more effective mechanism for resolving uncertainty in a society emphasizing individualism/universalism than in one emphasizing particularism/collectivism. In the latter type of society, trust is mainly placed in specific people, not impersonal actors.

In conclusion, the authors suggest that the concepts of trust and appreciation of nature should be examined in the context of nanotechnology. They also call for studies of other cultural dimensions, such as moral and religious beliefs.

Sunday, December 14, 2008

The structure of public opinion (about Biotech) in The Making of Global Controversy


In a chapter in Bauer and Gaskell's book, Biotechnology: The making of a global controversy, some researchers analyzed what constitutes public opinion towards the technology.

Interestingly, they found geographical differences in terms of expectations towards biotechnology. Specifically, countries located in southern Europe are more optimistic about the technology, whereas those in the north are more pessimistic.

Another important finding concerns the relationship between general attitudes toward biotechnology and attitudes toward specific applications, such as food production, genetic testing, xenotransplantation, and so on. The researchers found that the two sets of attitudes were closely related, suggesting that public opinion about biotech applications is very much based on people's general impression of biotechnology. This finding somewhat parallels Kahan et al.'s study, which found that people holding positive views of other technologies also tend to be optimistic about nanotechnology. It also resonates with my own research, which found that ambivalent attitudes toward genetically modified food and plants go along with similarly uncertain attitudes toward nanotechnology.

A quote from the chapter reads, "respondents applied general schema relating to biotechnology and technology to the more focused applications about which they were questioned" (p. 215).

Of course, knowledge is a critical determinant of public attitudes. However, the researchers found only moderate effects of knowledge (both subjective and objective) on perceptions of risk, utility, moral acceptability, and encouragement. While knowledge was not associated with attitude differentiation, defined as the variance of attitudes across the six specific applications, it was connected to attitude extremity. In other words, those with more knowledge tended to hold stronger attitudes toward biotechnology, either positive or negative.
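
For concreteness, here is a minimal sketch (in Python, with made-up ratings) of how "differentiation" and "extremity" could be computed from one respondent's answers; the six-application setup and the 1-5 scale are my own assumptions, not taken from the chapter.

```python
import numpy as np

# Hypothetical ratings (1 = very negative, 5 = very positive) that one
# respondent gives to six biotech applications; numbers are illustrative only.
ratings = np.array([4, 5, 4, 2, 5, 4])
scale_midpoint = 3

# Differentiation: how much the respondent's attitudes vary across the
# specific applications (variance of the six ratings).
differentiation = ratings.var(ddof=1)

# Extremity: how far the ratings sit from the neutral midpoint, regardless
# of direction (mean absolute deviation from the midpoint).
extremity = np.abs(ratings - scale_midpoint).mean()

print(f"differentiation = {differentiation:.2f}, extremity = {extremity:.2f}")
```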

One result that strikes me is that religiosity accounted for only a marginal share of the variance in attitudes. Given that the most controversial aspect of biotechnology rests with its interference with nature and scientists' act of "playing god," it is surprising that religiosity, a main source of people's moral discipline, did not kick in as an attitudinal determinant. However, a limited effect at the individual level does not necessarily mean the same thing at the country level, as our recent study in Nature Nanotechnology has suggested.

The authors called for well-designed information campaigns to keep people informed about the pros and cons of biotechnology. However, this call for more information should be read alongside several recent studies showing that people interpret information through their value predispositions or worldviews. Simply providing information does not, by itself, help people understand the technology very much.

Saturday, December 13, 2008

Nature Nanotechnology (December, 2008). Nanotechnology and the Public


For the first time, Nature Nanotechnology (Impact Factor: 14.917) published three studies on the social impact of nanotechnology in the same issue.

------------------------------------------------------------------------
Scheufele, D. A., Corley, E. A., Shih, T.-j., Dalrymple, K. E., & Ho, S. S. (2008). Religious beliefs and public attitudes toward nanotechnology in Europe and the United States. Nature Nanotechnology, advance online publication.

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2008). Cultural cognition of the risks and benefits of nanotechnology. Nature Nanotechnology, advance online publication.

Pidgeon, N., Harthorn, B. H., Bryant, K., & Rogers-Hayden, T. (2008). Deliberating the risks of nanotechnologies for energy and health applications in the United States and United Kingdom. Nature Nanotechnology, advance online publication.
-------------------------------------------------------------------------

I will focus my discussion on the first two studies, since I haven't read the third one. To begin with, the Kahan and Scheufele studies share a common theme: there exists something that shapes how people interpret messages. Kahan and colleagues suggest it is cultural worldviews (such as egalitarian/hierarchical and communitarian/individualist), whereas Scheufele et al. see religiosity as the determining predisposition.

To put it simply, what these researchers argue is that the same information does not always conjure up the same meaning for people, because people hold different worldviews and value predispositions. People with individualist and hierarchical worldviews, given their generally pro-technology disposition, tend to read such information in a positive light. In contrast, those with egalitarian and communitarian worldviews tend to see exactly the same information more negatively, because of their concern with striking a balance between nature and technology, and between the rich and the poor.

Religiosity does a similar job: those who are more religious tend to see more risks associated with nanotechnology. What is noteworthy is that this relationship appears not only at the individual level but also at the country level. Specifically, religion plays a relatively important role in the US compared with other countries at similar levels of human development, which in turn leads Americans to hold more reserved attitudes toward nanotechnology than people in more secular countries such as the UK, France, and Germany. This difference in levels of religiosity and secularism is also documented in Sacred and Secular by Norris and Inglehart.

These studies were also covered by many media organizations. For example:

Religion 'shuns nanotechnology' on BBC

Attitudes about nanotechnology vary according to religious and cultural differences in US News and World Report

Sjoberg (2002). Attitudes toward technology and risk: Going beyond what is immediately given

Prof. Sjoberg of the Center for Risk Research in Sweden wrote an article about risk perception and the factors shaping it. He mainly focused on two technologies: gene technology and nuclear power.

As indicated by the title, this study goes beyond the properties of risks, which most scholars take as the essence of risk research. Specifically, he looked at other factors, such as the relationship between nature and technology and various types of trust, to see how well they account for public risk perception. In addition, he examined how utility (benefit), risk, trust, and active risk denial account for general attitudes toward technology.

He found that the traditional "psychometric model," in which dread and novelty are considered the main determinants of public risk perception, accounted for only a limited proportion of the variance. Adding the variable "tampering with nature" increased the model's explanatory power. Furthermore, he contended that worldviews were not a good predictor of risk perception.

His results also indicated a weaker relationship between trust and risk perception than that reported by Siegrist (The influence of trust... on the acceptance of gene technology, 2000, Risk Analysis). He attributed this difference to measurement: whereas Siegrist used Likert scales to measure trust and risk, Sjoberg used ratings. Sjoberg demonstrated in this study how a "common response factor" may inflate the relationship between trust and risk, a very interesting methodological point.
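
To illustrate the methodological point, here is a small simulation (a sketch of my own, not Sjoberg's actual analysis) showing how a shared response style can inflate an observed trust-risk correlation even when the underlying constructs are unrelated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# True (uncorrelated) components of trust and risk perception.
trust_true = rng.normal(size=n)
risk_true = rng.normal(size=n)

# A person-level response style (e.g., a tendency to use high ratings)
# that loads on every rating the respondent gives.
response_style = rng.normal(size=n)

trust_rating = trust_true + 0.8 * response_style
risk_rating = risk_true + 0.8 * response_style

print("correlation without shared style:",
      round(np.corrcoef(trust_true, risk_true)[0, 1], 2))
print("correlation with shared style:",
      round(np.corrcoef(trust_rating, risk_rating)[0, 1], 2))
```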

In general, attitudes toward technology were accounted for mostly by perceived benefits, followed by risks and, sometimes, the technology's "replaceability." The consequences of a risk were also found to exert more impact on public attitudes than the size of the risk itself.

Overall, this study laid out a number of important factors shaping public perception of risks and attitudes toward technology. However, it seems to me that too many variables were examined without a coherent theme explaining why these factors should be explored together. Furthermore, as with other risk research, this study asked its respondents how "risky" a technology is without exploring the specific risks associated with it. For example, people may consider nuclear power risky because of risks related to radiation, waste management, and safety, but not other areas. Of course, the sample of 109 college students also raises questions about the generalizability of the results.

Saturday, November 29, 2008

News coverage and risk

My work recently involves examining how the media cover risks. The area of risk is broad and can include environmental, health, and technological hazards, among others. During my search for literature, I came across a website: Risk: Health, Safety & Environment.

Signal and Noise, by Boykoff and Rajan (2007), is another piece comparing news coverage of environmental and global climate change risks in the US and UK.

An interesting idea is that something considered risky in one country or culture may not trigger the same perception in another. My goal is to identify why and how these cultural differences in worldviews affect media representations of risk issues.

Mass communications and agrobiotechnology.

Monday, November 24, 2008

Why should everyone care about the same environmental risk?

An article from the New York Times, Sustainability for rich and poor, has triggered some thoughts for me.

The article talked about different perceptions of the seriousness of global climate change in countries with different levels of economic development and personal well-being.

Angola's deputy environment minister made it clear that his country's most imminent environmental concern is the over-population of its capital city, Luanda, and the ensuing infrastructural problems, such as sewage and waste. Global climate change is not at the top of their list.

Well, this makes perfect sense. Countries at different stages of human and economic development will define risks differently. And who has the right to decide what people should care about? Will people care about climate change when they don't even have enough food? Do they even need to?

Dake and Wildavsky have published a series of studies highlighting the influence of culture and worldviews on what constitutes risk. Specifically, they suggest that people in egalitarian, individualist, and hierarchist societies define risks very differently (see "Theories of risk perception: Who fears what and why?").

However, what we see here is more than culture. Economic structure and human development (i.e., the extent to which people can fulfill their basic needs) also play a significant role in shaping attitudes toward environmental issues. This example not only underscores the importance of taking country-level variables into account, but also gives researchers good reason to go beyond cultural factors when studying policy and public opinion on environment-related issues.

Saturday, October 25, 2008

Nanotechnology data

Here are some newly-found data about public perception of nanotechnology.

Nanoforum is a European organization that publishes events, reports, and news about nanotechnology. One of its reports gives a decent overview of the debates surrounding nanotechnology, i.e., the balance between risks and benefits. For people who don't know much about nano, this is a good starting point.

Benefits, Risks, Ethical, Legal and Social Aspects of Nanotechnology

Although the report is not data-driven, it includes some findings from other research institutes. For instance, there is information about media coverage of nanotechnology in Germany. It also includes an Internet poll conducted by the Royal Society in the UK.

Of course, when talking about datasets in Europe, the European Commission is a good resource. On the "Publications and Events" page of the CORDIS website, there are tons of reports and survey results, although the survey pages don't seem to work.

From the CORDIS website, I found a Eurobarometer special survey (#224, 2005) that asked questions related to nanotechnology. It seems comparable to another survey conducted in 2001, but I haven't checked yet.

Monday, October 13, 2008

More about culture, value, and structural differences around the globe

Worldviews and values

Gallup World Poll looks very cool, but doesn't seem to have much information about Taiwan.

Henk Vinken's Website: Henk Vinken is a sociologist in the Netherlands who has done a lot of comparative studies on culture and values. The website contains relevant publications and links. Absolutely useful.

Schwartz Value Survey.

International Social Survey Programme.

Country development

World Development Indicators by World Bank

• Internet penetration rate (http://www.internetworldstats.com/top25.htm) (http://www.internetworldstats.com/stats.htm)
• Press freedom (http://www.freedomhouse.org/template.cfm?page=16&year=2007)
• Newspaper circulation rate – Table 13 (pg 99) of the "Statistical Yearbook" of the United Nations. The 2006 volume is available at the Memorial Reference Desk, call number HA 12.5 U63v.2006
• Television sets penetration rate (see links below)
• Telephone lines / Mobile phones penetration rates (see links below)
• Personal computers penetration rates (see links below)
http://www.itu.int/ITU-D/ict/statistics/
http://www.itu.int/ITU-D/ict/handbook.html
http://www.unctad.org/Templates/webflyer.asp?docid=9479&intItemID=1397&lang=1&mode=downloads

Wednesday, October 1, 2008

Data Analysis in practice

I'm not sure if you have ever felt the same thing: you have been working with a professor for so long that you think you know everything s/he knows, yet every time s/he impresses you with something new. This is how I feel about my advisor, DAS.

He offered a data-analysis course this semester, covering a wide range of topics from pre-analysis data cleaning and index building to various analytical approaches, using SPSS. Having worked with data for years, I assumed this class wouldn't be a big challenge for me. I figured that even if I learned just one new thing in each class, that wouldn't be too bad. What I got, however, turned out to be more than I expected. (I hope this doesn't mean I knew little before taking this class.)

The first class I sat in on was about building a composite index; for example, a "newspaper use" variable. I know the technical process of doing this exactly. But why should we build a multiple-item index? I once got questioned by a journal reviewer about my "talk" variable, which was based on a single survey question. What is wrong with a single-item measure?

Basically, this at least partially has to do with the idea of systematic error, which refers to data that are missing due to mechanisms that do not affect everyone equally. For example, when asked about their income level, those who are wealthy tend to be more conscious of and sensitive to the question and have a higher chance of skipping it. This is called a systematic error because it happens not to everybody, but only to those who don't feel comfortable answering the question (usually the wealthier ones).

Does setting up a yard sign, displaying a bumper sticker, or donating money each constitute a valid measure of political participation? The answer might be no. Those who put up a yard sign need to have a lawn in the first place. By the same token, people who display bumper stickers need to have a car. In addition, those who can donate are usually economically better off. All of these measures favor respondents with higher SES, so none of them individually can be considered a comprehensive index of participation. That's why researchers usually combine them into a composite index that gives a clearer picture of the concept of "participation," as the sketch below illustrates.
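
As a minimal sketch of the idea (hypothetical data; the item names are mine, not from any actual survey), the items can be averaged into a composite index and checked for internal consistency:

```python
import pandas as pd

# Hypothetical survey responses (1 = did it, 0 = did not); the items and
# values are illustrative only.
items = ["yard_sign", "bumper_sticker", "donated_money"]
df = pd.DataFrame({
    "yard_sign":      [1, 0, 0, 1, 0],
    "bumper_sticker": [0, 0, 1, 1, 0],
    "donated_money":  [1, 0, 0, 1, 1],
})

# Each single item favors respondents with a lawn, a car, or more money;
# averaging them dilutes any one item's bias.
df["participation_index"] = df[items].mean(axis=1)

# Cronbach's alpha as a quick check of internal consistency.
k = len(items)
alpha = (k / (k - 1)) * (1 - df[items].var(ddof=1).sum()
                         / df[items].sum(axis=1).var(ddof=1))

print(df)
print("Cronbach's alpha:", round(alpha, 2))
```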

Another source of systematic error comes from the tendency of some people to give "socially desirable" answers--answers that society approves of. Large survey institutions employ several strategies to deal with this. For example, the GSS matches interviewers and interviewees on gender and race in order to solicit valid answers. One approach I find very interesting is asking respondents about a "hypothetical policy proposal" that does not exist at all. Respondents who say they have heard about it are people who tend to provide socially desirable answers.

Friday, September 19, 2008

Asia Values

While I was rummaging through the literature on worldview differences between the West and Asia, I found several useful databases that may provide quantitative evidence. Because large-scale, cross-cultural surveys are fewer in Asia than in the US (e.g., NES, GSS) and Europe (Eurobarometer), I think they are worth a note.

1. Asiabarometer is a comparative survey in Asia, covering East, Southeast, South, and Central Asia. A Japanese scholar is in charge of the center, and many cross-national studies are based on its datasets. The data can be downloaded either from its Web site or through ICPSR, of which the University of Wisconsin is a member.

2. Asian Barometer is, surprisingly, a different dataset from the previous one. It is maintained by the Institute for Advanced Studies in Humanities and Social Sciences at National Taiwan University. The survey includes some interesting questions about "traditional Chinese values"; for example, "it is a shame for a man to work for a woman." There are also measures of political participation, media use, and national identity.

3. East Asia Value Survey. Unlike the previous two, which focus more on the "opinion" aspect, EAVS emphasizes the "value" aspect. For example, EAVS probes the relationship between human and nature or people's environmental beliefs. Taiwan is also included in this survey.

The Democracy Study Center at UC-Irvine actually compiled a webpage that includes many international survey databases.

An ASEP/JDS-maintained website also provides good resources for comparative studies.

Tuesday, May 13, 2008

Sources in news about epidemics

What sources play a role in news about avian flu and West Nile virus? Do sourcing patterns in news coverage of public health issues differ from those of political issues?

Research on news sources is pivotal for two main reasons. First, journalists cannot say things in their own name; they have to cite witnesses of an incident, participants in an event, or government officials in order to be objective. Second, journalists need sources because sources provide information not available to reporters or explain difficult ideas.

Because sources have the "privilege" of appearing in the media, they are also considered capable of shaping the focus of a story by supplying information to their advantage. Many researchers, in fact, think of the news construction process as a struggle among sources.

In the case of epidemic diseases, a recent study found that journalists rely on bureaucratic officials and experts as sources of information. The affected, such as humans infected by WNV or poultry farmers hit by avian flu, did not get much attention from the media. Even medical care providers, such as doctors and nurses, did not appear very frequently.

The dependence on institutional or government officials becomes even more explicit when looking at trend lines showing the volume of news coverage, the key sources used, the number of press releases from key regulatory agencies, and the number of infected cases.

The chart below indicates that the ups and downs of media attention to epidemics conform to the ups and downs in the number of press releases and in the use of government officials as sources. The findings suggest that the amount of coverage did not reflect the magnitude of the diseases, but rather what the government said about their seriousness.
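
The logic can be checked with a simple correlation; the sketch below uses made-up monthly counts (the actual figures are not reproduced here) just to show what "coverage follows press releases rather than cases" would look like:

```python
import numpy as np

# Hypothetical monthly counts, for illustration only.
press_releases = np.array([2, 5, 12, 30, 18, 6, 3, 2])
news_stories   = np.array([4, 9, 25, 60, 40, 14, 5, 3])
infected_cases = np.array([0, 1, 3, 4, 6, 9, 12, 15])

# If coverage tracks official communication rather than disease severity,
# the first correlation should be much higher than the second.
print("coverage vs press releases:",
      round(np.corrcoef(news_stories, press_releases)[0, 1], 2))
print("coverage vs infected cases:",
      round(np.corrcoef(news_stories, infected_cases)[0, 1], 2))
```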

This case illustrates that government authorities control the information journalists need for public health issues and therefore have considerable power to shape the media agenda on these issues.



Monday, April 28, 2008

Framing public epidemic hazards


A recent study published in Mass Communication and Society compared news coverage patterns of three epidemic diseases: mad cow disease, West Nile virus, and avian flu. The findings indicated that although journalists emphasized similar themes across the three issues in general, their narrative considerations changed as the diseases developed through different stages.

Specifically, this study looks into media framing and the issue attention cycle of epidemic issues. We have talked about framing a lot, but not much research links framing with the issue attention cycle, which, put simply, refers to the ups and downs of media attention.

Although how to operationalize the "attention cycle" has been debated, many researchers measure the concept by counting stories in different time periods, as this study did. Previous research on this topic often focused on environmental issues (e.g., Downs, 1972) or climate change (McComas & Shanahan, 1999). This study expands that line of research to a different realm.

One interesting point, though not highlighted as the main selling point, is that this study actually found cyclical patterns for epidemic hazards that differ from those of environmental issues. In other words, there was a "maintenance" stage for environmental issues but not for epidemic hazards. (The "maintenance stage" refers to a period of relatively stable news coverage of an issue.) This indicates that the media pay attention to different issues in very different ways.

It should be noted that avian flu was still in an early stage, when the media still had high interest in the issue. It was therefore not possible to compare media frames of avian flu with those of the other two hazards. Maybe future research can address this gap.

For more studies on issue attention cycle, please see:
1. Attention Cycles and Frames in the Plant Biotechnology Debate: Managing Power and Participation through the Press/Policy Connection, by Nisbet & Huge, 2006

2. Are issue-cycles culturally constructed? A comparison of French and American coverage of global climate change, by Brossard, Shanahan, & McComas, 2004.


Sunday, April 20, 2008

Global warming is local

This weekend I attended a workshop about global warming at the local level, held by the Holtz Center for Science and Technology Studies and the Nelson Institute for Environmental Studies.

So, what does it mean that global warming is local? The workshop did not provide evidence of the weather getting warmer in Wisconsin. Instead, the experts presented models for evaluating climate change, described a governmental task force in charge of policy making related to climate change, and discussed what UW has done in its efforts to tackle the problem.

It is interesting to learn that battling global climate change does not and cannot depend on a single, all-encompassing approach. Different areas face different problems and challenges in relation to climate change. For instance, Wisconsin uses a lot of coal for electricity, which releases a great amount of carbon into the air. The main source of carbon emissions in California, however, is vehicles.

It is also interesting to learn that UW has been dedicated to many environmentally beneficial efforts. For example, many buildings on campus use energy-efficient glass, well-designed ventilation systems, and rainwater recycling systems.

They also talked about the use of biofuel, fuel produced from plants and crops, to decrease the level of greenhouse gases in the air. This reminds me of an interesting conversation between a friend of mine and me. One day we were at a BP gas station, which always touts its contribution to a better environment by providing "green" fuels. T.L. Lin saw the "10% ethanol" label at the pumps and asked me what good ethanol would do for our vehicles. Both of us were thinking maybe it would make our cars run faster or get better gas mileage. In other words, we both related the addition of ethanol to petrol to the economic aspect of our lives, not the environmental aspect.

It is good to see that some steps have already been taken, at the state or local level, to combat climate change. Although I sometimes complain about unusually hot weather during the summer, it never occurred to me that the battlegrounds are so close to me.

-------------------------------------------------------------------------------------------

For some discussion about the relationship between ethanol (in the fuel) and global warming, please see Ethanol and Global Warming

For more articles related to biofuel and global warming, please see the Science page in the NY Times and Turn Food into Fuel in the Time Magazine.

Center for Sustainability and the Global Environment (SAGE)

Thursday, April 10, 2008

Fear of science


This week I watched the movie The Mist, a sci-fi horror film adapted from Stephen King's novel. Of course the movie has something to do with mist. The storyline goes roughly like this: after a night of thunderstorms, an unusual mist rolls into a small town where everybody knows each other. To the residents' surprise, there seems to be something in the mist that catches and kills people. They later find out that the human-hunting monsters lurking in the thick mist were actually unleashed by secret scientific experiments carried out by the US military.

Many people found this movie successful in creating a creepy, breath-holding atmosphere. To me, though, it is more a vivid example of people's fear of science.

Although people in general agree that science and technology have a positive impact on our society, there is a concomitant worry that the development of science may go beyond our control. Although imaginary, the reckless military science and its ensuing detrimental impact vividly reflect people's worry about "runaway science."

We have seen the same fear in the development of nuclear power, where people were excited about its ability to solve energy problems on the one hand and worried about its unexpected outcomes on the other. The case of nanotechnology is probably more recent: a sizable proportion of people in a national opinion survey actually considered "self-replicating nano-robots going out of control" an important risk of nanotechnology.

These examples indicate that there is "ambivalence" in people's attitudes toward science. Unless scientists can claim confidently that they have full control over what they are doing, people will always be skeptical of science. The Mist is not the only instance in which screenwriters express such fear (for American audiences). The movie Godzilla, in which radiation from French nuclear tests mutates a lizard into a gigantic monster, reflects a similar fear.

Are these only imaginings, or could they come true? I don't know. But seeing that the catastrophes usually come from "secret" governmental or military operations, keeping the process of scientific development open to the public may not be a bad way to reduce people's fear.

P.S. For those interested in the movie, please see an article in the NY Times.

Something Creepy This Way Creeps, and It Spells Bad News


Tuesday, April 1, 2008

nano uncertainty


Uncertainty is a concept worth exploring in the context of emerging technologies. A common definition of uncertainty is a state of mind due to incomplete information. Probability is therefore a prevalent expression of uncertainty: we often see predictions such as "20% chance of precipitation" or "the likelihood of an outbreak of a disease is X%." Experts provide probabilities because they don't have sufficient information to make a conclusive call.

Uncertainty comes not only from insufficient knowledge; it also originates from disagreements among scientists or experts. Global climate change provides a vivid example. Although there is a consensus among most scientists that human activities are the culprit behind the temperature increase, the balanced-reporting tenet that guides American journalism makes people feel as if much more research were still needed before humans can be held accountable. Specifically, because of the need to be objective and balanced, journalists tend to give equal space to the two contradictory views (human-induced temperature increase vs. uncertain cause), even though there is substantial consensus in the scientific community.

See discussions by Nisbet and Mooney (2007) in Science, 316(5821), p. 56.
And Journalistic Balance as Global Warming Bias by Boykoff and Boykoff (2004).

In the case of nanotechnology, people feel uncertain because the technology is new and many of its risks, properties, and benefits are unclear. In Europe, uncertainty about the risks, benefits, and moral acceptability of nanotechnology has been found to relate to people's uncertainty about how they should support nanotechnology (support with regular regulation, support with strict regulation, and so on).

More interestingly, uncertainty about nanotechnology is found to be associated with uncertainty about GM food and GM plants. The result may suggest two stories:

1. There exists an underlying uncertainty about science and technology. In other words, it doesn't matter which technology the question asks about; people's uncertainty will be there as long as science is involved.

2. People take their past experiences with GMOs, biotechnology, cloning, or stem cell research and apply them to nanotechnology. That is, above and beyond how much people know about nanotechnology, the mental templates created by other controversial sciences also play a role.

If nano-outreach personnel want to increase acceptance of nanotechnology, they need to understand the factors and mechanisms that shape public attitudes and uncertainty. They need to know that, in addition to providing information about nanotechnology, they should also address the similarities and differences between nanotechnology and other controversial technologies that people are more familiar with.

Monday, March 24, 2008

Framing the "One China market"


The "One China Market" was a hot topic during the presidential campaign. The idea was originally proposed by the Nationalist vice presidential candidate, Vincent Siew. He made the "cross-strait common market" analogous to the European Union, which is economically beneficial to its member countries.


The same issue was framed by the DPP camp as "an invasion of laborers from China and a possible influx of low-quality merchandise." This frame was expected to win the votes of farmers and industrial workers, whose lives would be most affected by the policy.


The "black-hearted or adulterated products" frame didn't end up working very well, as public opinion polls failed to show significant changes longitudinally. The ineffective framing might be attributable to its inability to resonate with the general "mass" who weigh "economic prosperity" more heavily than " invasion of labors and bad products." In other words, the stagnant economy under the DPP regime has triggered people's stronger aspire for a healthier and better-off market, although people might also be worried about the downside associated with it.

This example suggests that frames are closely related to people's past experiences, beliefs, expectations, and so on. The frame that echoes better in people's minds usually "wins" the battle.

Another factor that may explain the unequal success of the two frames is "trust." The DPP government has apparently lost people's trust over the past eight years. Ma Ying-jeou, on the other hand, has carried a lot of expectations that he can turn the situation around. People's confidence in Ma therefore drew them away from the "invasion" frame.

For more coverage and framing of this issue, please see

Taitung farmers, workers oppose 'one China market'

Presidential election 2008: 16 days to go: Siew defends term `one China market'

Thursday, March 20, 2008

Nanotechnology and religiosity

An AAAS presentation in Boston showed a huge gap between the US and several European countries in opinions about whether "nanotechnology is morally acceptable."

In the US, only 29.5% of respondents out of a sample of 1,050 gave affirmative answers, whereas the figures in the UK, France, and Germany were well above 50%.

The primary researcher, Dietram Scheufele, attributed this difference to levels of religiosity. According to his analysis, Americans are relatively religious compared with the other countries mentioned. The negative relationship between religiosity and the moral acceptance of nanotechnology is even more conspicuous when more countries are taken into account.

There are criticisms of this study, of course. Some criticized the question wording, while others suggested that it ignores the science-skeptical nature of a post-modern society. Still others questioned the sample size as being too small. In addition, knowledge or awareness of nanotechnology is thought to affect people's responses to the "morality" question, although Scheufele has ruled out that possibility.

Many people may have the same question I do: what makes nanotechnology morally loaded? The author mentioned that the "playing god" element embedded in the development of nanotechnology, similar to that in stem cell research or biotechnology, is where the morality issue kicks in. So the question becomes: do people's attitudes toward other controversial technologies affect their attitudes toward nanotechnology? Is there an underlying attitudinal "factor" that transcends different types of technologies? In other words, is it possible that people don't know much about nanotechnology, but their past experiences with biotechnology, cloning, or stem cell research step in as "heuristics" that provide mental guidance for their attitudes toward nanotechnology?

I personally find these questions intriguing and think somebody should investigate them someday.

More readings:

Two-thirds of Americans think nanotechnology is morally unacceptable -- wait, what?, by Engadget.com

Nanotechnology Is Morally Unacceptable, by Business Technology Blog



Sunday, March 9, 2008

Prediction market part 2

Just want to post some additional news or info about the prediction market mentioned in my last article.

The technique has been widely applied to a variety of events, including the recent Olympic qualification games in Taiwan. The market showed a growing value for the Taiwanese team's "stock" after it beat Mexico last Friday, suggesting public confidence in the team. Click here to read the news.

There are also predictions for the presidential election that will take place two weeks from now. Right now the predictions (or, I should say, the traders) favor Ma and give him a 20% lead. Click here to read the news.

For more information about the introduction of this approach and the comparison between prediction markets and polls, please visit the Prediction Market Center at NCCU or the Mandarin version of Scientific American.

Or go to Swarchy (未來事件交易所) to chip in your wisdom.

Friday, March 7, 2008

Markets vs Polls


The March issue of Scientific American published an article: When Markets Beat the Polls.

The first thing I want to say is that I'm glad public opinion polls are considered "scientific" and discussed in a periodical related mainly to pure science and technology.

The article, however, turned out not to be about polls per se. Polls were used as a contrast to the main topic: prediction markets.

The concept of the stock market was applied to predicting elections by a group of University of Iowa scholars in 1988, when Bush senior and Dukakis were vying for the presidency. The researchers set up an Internet-based interface so that people (traders) could buy "contracts" on Bush or Dukakis. It functions like a stock market, where people buy or sell stocks depending on how they expect them to perform in the future. So, for example, if a Bush contract costs $0.53, it signals that traders expect Bush to obtain 53% of the votes on election day.
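
A minimal sketch of the arithmetic, assuming a vote-share contract that pays $1 times the candidate's final vote share (the prices below are made up for illustration):

```python
# In a vote-share market, a "Bush" contract pays $1 x (Bush's final vote share),
# so the price traders are willing to pay reflects the vote share they expect.

def implied_vote_share(contract_price: float) -> float:
    """A contract trading at $0.53 implies an expected vote share of 53%."""
    return contract_price * 100

def trader_profit(contract_price: float, actual_vote_share: float) -> float:
    """Per-contract profit if the candidate ends up with actual_vote_share percent."""
    return actual_vote_share / 100 - contract_price

print(implied_vote_share(0.53))             # 53.0 (percent)
print(round(trader_profit(0.53, 56.0), 2))  # 0.03 dollars per contract
```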

Although the traders are not nationally representative, the Iowa Electronic Markets often predicted better than public opinion polls, according to data in the article. The "free market" and "wisdom of crowds" ideas are definitely fascinating, but their assumptions somewhat contradict my recent understanding of voting behavior.

First, economists have long assumed that people make "rational" decisions, meaning they weigh risks and benefits and choose whatever maximizes their advantage. However, in the field of public opinion and elections, it has been found that people are not always rational--they don't collect every bit of information they need when deciding whom to vote for. In other words, the public are not information "maximizers"; they are information "optimizers," using whatever information is accessible at the time of decision making. There are even occasions when people don't need information at all.

As Popkin illustrated in his seminal book The Reasoning Voter: Communication and Persuasion in Presidential Campaigns, voters sometimes rely on heuristics or information shortcuts when making voting choices. These shortcuts save people the cognitive energy that would otherwise be devoted to rummaging through different pieces of information. Examples of heuristics include party identification, religious beliefs, and ideological preferences.

So, if people are not rational most of the time, how can a mechanism based on rational decision making function so well? Well, I don't know.

The second point is about how closely the outcome relates to an individual. In an election, many people couldn't care less about who gets elected because they don't think their lives would be affected. In some of the electronic markets, however, traders invest real money, and the final outcome affects how much they earn or lose. How people make decisions in these two different scenarios is therefore interesting for further investigation.

But this is definitely an intriguing topic, especially because it provides a different way to measure public opinion. Unfortunately, there is no free electronic copy of the article to share with those interested.

Monday, March 3, 2008

content analysis and intercoder reliability

Content analysis is a major subfield of communication research. The world-famous agenda-setting study has content analysis as one of its two important components. Understanding what's in the media, after all, is the precursor of understanding what effect media has.

There are a variety of ways to investigate media content. Qualitative researchers employ "textual analysis," whereas quantitative researchers use "content analysis." They are similar methods, except that content analysis is touted for its "systematic and scientific" approach to analyzing media content.

Content analysis often involves a coding process in which researchers pre-define a series of variables of interest and tally how frequently these variables appear in news articles. This coding process therefore raises the question of reliability: if other researchers were to do the same thing, could they get similar results using the same coding scheme? This is the question of inter-coder reliability. For a detailed review of methods for calculating it, please see here.

Therefore, before researchers start to "code" stories, they have to make sure they will agree on as many cases as possible regarding the presence or absence of a particular variable. This is the "systematic" part of the method. The process is absolutely tedious and arduous, but it also reflects how credible the study is. Obtaining an acceptable level of reliability is therefore a basic requirement for getting a study published.

There are different ways of measuring inter-coder reliability. Some people calculate "percentage of agreement," a method that has been criticized for being too lenient. Other approaches, such as Krippendorff's Alpha and Scott's Pi, expand on it and correct for the effect of chance--that is, for coders producing consistent results simply by accident. This effect is especially salient when a variable has only two categories.

Which method to use depends on how stringent you want your research to be. But using Scott's Pi is not necessarily superior to using percentage of agreement, especially since people have started to criticize Scott's Pi for being too strict. One of my experiences with Scott's Pi is that it is really difficult to please. For instance, all the values of a variable (e.g., 1, 2, 3...) have to appear in the data for Scott's Pi to be computed. If, say, the president never appears as a news source in any of the stories coded and you and your coding partner reach 100% agreement on this, Scott's Pi will still tell you that the reliability coefficient is not calculable. If, unfortunately, the president appears once in the news stories and only one of the coders catches it, the reliability coefficient will be extremely low, even though the percentage agreement is close to 100%. This is very tricky! The sketch below illustrates the second case.
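
Here is a small sketch of both statistics (my own illustration, with a made-up coding of whether the president appears as a source), showing how a single disagreement on a rare category drags Scott's Pi down while percentage agreement stays near 100%:

```python
from collections import Counter

def percent_agreement(c1, c2):
    return sum(a == b for a, b in zip(c1, c2)) / len(c1)

def scotts_pi(c1, c2):
    po = percent_agreement(c1, c2)
    # Expected agreement uses the pooled category proportions of both coders.
    pooled = Counter(c1) + Counter(c2)
    n = len(c1) + len(c2)
    pe = sum((count / n) ** 2 for count in pooled.values())
    if pe == 1.0:            # only one category ever used: pi is undefined (0/0)
        return float("nan")
    return (po - pe) / (1 - pe)

# 20 stories coded for whether the president appears as a source (0 = no, 1 = yes).
coder_a = [0] * 20
coder_b = [0] * 19 + [1]    # one disagreement on a rare category

print(percent_agreement(coder_a, coder_b))    # 0.95
print(round(scotts_pi(coder_a, coder_b), 2))  # about -0.03, despite 95% agreement
```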

Most journals require authors to report reliability for each variable coded. But some journals require only an average coefficient, which saves a lot of trouble because bad variables can be averaged out by good ones.

Seeing the time-consuming and tedious process of content analysis, some communication scholars have offered their sincerest suggestion to their successors--just don't do it!

For examples of quantitative content analysis, please see McComas and Shanahan (1999): Telling Stories About Global Climate Change and Nisbet et al. (2003): Framing science.

Thursday, February 28, 2008

Writing

For graduate students, and for people in academia generally, writing is a staple of life. Writing an organized and intriguing paper is a difficult task, especially for people learning English as a second language.

One tip I personally find useful when writing a paper is to determine in advance "what story am I going to tell the readers?" Indeed, people like to hear stories, and even a profound research article can have attractive story lines. The story line, or narrative, makes a paper not only more readable but also more coherent.


Another tip for writing a research paper is to make it like a "funnel": you start fairly broad (introducing the current field of study), gradually narrow down to the specifics (laying out your research focus and results), and then go broad again to discuss the implications of your findings and your contribution to the body of knowledge.

This structure is also applicable to the literature review section, where you start with a very broad overview of the current literature and end with your particular interests in that study. These points are mentioned in the book on the right side. However, this is only one of many elements that make a good literature review. Another main idea to keep in mind, as far as I know, is to write around arguments, not around authors!

I learned this from my adviser, who might have learned it somewhere else too. The point is that a literature review is not merely an accumulation of studies by different authors; it should be a systematic depiction and summary of relevant studies. Writing around authors makes the literature section look like a pile of names, without showing the connections or relationships between studies. According to another professor in my department, you can tell whether the authors are students or faculty by reading the literature review. But don't get me wrong--I am not saying that writing around arguments is an absolute necessity.

Good writing also involves choosing the right language. Here I am not talking about selecting the right or precise word; I am talking about how to "frame" your argument. For instance, "limitations" is a section common to most research papers, yet many researchers call it a "discussion of the NATURE of the data." The difference might seem subtle, but the "nature" of the data sounds more positive and agreeable than "limitations," which makes your study sound unimportant. Another point about limitations: do not end with them. Your study should finish with a "big bang," not with constraints.

These are just some tips that I have either heard from my academic mentors or read in writing handbooks. I believe there are tons more, and I'll be extremely happy if anyone shares their own tips or experiences with me. A good article, after all, does not come from memorizing principles; it comes from constant practice and discussion.

Wednesday, February 27, 2008

Recent deadlines

  1. Barrow Minority Doctoral Student Scholarship (US citizen only)
  2. March 15, NCA Doctoral Honors Seminar @ Boulder, Colorado
  3. March 15, Nanorisk annual conference, click here for more information
  4. April 1 (11:59pm, EST), AEJMC, submit paper here
  5. June 1, 2008, Nicholas C. Mullins Award for more information, click here
  6. Call for Papers on Communicating Climate Change
  • notice of intent to submit, by July 15, 2008
  • submission deadline: August 15, 2008
  • with an aim for March 2009 issue of Science Communication

Monday, February 25, 2008

Is there a true public opinion

Some public opinion surveys conducted by media organizations have been criticized for their imprecision. The large gap between poll results and the actual results of the 2004 presidential election illustrated the problem vividly.

A commentary published on BBC Chinese.com pointed out that "one" medium close to the pan-blue camp has long suffered from high refusal rates, which decreases the reliability of its polls. This commentary was taken as evidence by many pan-green supporters to attack the KMT and the media friendly to that party.

However, if you read the article, the sentence criticizing the polls conducted by a certain medium was nothing more than an "observation" or anecdote; no systematic analysis or research was involved. The article was cited by many bloggers just because it came from the internationally acclaimed BBC (British Broadcasting Corporation).

Here, I am not going to criticize or defend the pollsters. What I care about is what affects survey responses.

In many journalism research methods classes, students are taught to pay attention to sampling methods and sampling errors. But far more than sampling affects survey results.

For example, question wording matters. The effect of wording on responses was documented in Kahneman and Tversky's work on framing. The famous Asian disease experiment can be found here.

Question order is also a factor. If a question about President Chen's job performance is preceded by a question about the current state of the economy, chances are that his rating will be low. This effect is called "priming," with Iyengar and Kinder (1987) being early scholars in this field. The effect rests on the assumption that human knowledge is composed of a network of inter-connected nodes; what opinion people express depends on which "node" is activated.

Other factors, such as the numbering of response categories, the choices provided, and question context, also affect poll results.

As media audiences, we therefore cannot just look at the percentages presented in an opinion poll, for these numbers can be misleading. Being aware of this is very important, especially in an era when people frequently encounter opinion polls and when media coverage focuses only on who is ahead and who is trailing.

Sunday, February 24, 2008

Another framing example

"Benevolent act"

A couple of days ago, KMT presidential candidate Ying-jeou Ma presented the nation's young people with a gift: he said that, if elected, the government would offer young newlyweds two million dollars in home loans, interest-free for two years.

Seems a pretty good policy, right?

"Punishment for being single"

The DPP candidate, Frank Hsieh, however, portrayed this promise as hostile to single people.

Here the same issue is portrayed differently by the two candidates: one deems it a positive act of goodwill, whereas the other casts it as negative. This is similar to the abortion issue in the United States, which pits the "right to choose" against the "right to life."

The two frames again invoke very different images and may create different climates of public opinion. Needless to say, those who see the policy as a punishment will oppose it, or even the candidate, while those who see it as benevolence will react the other way around. Which frame people adopt rests on their demographic background, socioeconomic status, social experience, and so on. Therefore, many scholars suggest that frames live a double life: one in the media and political world, the other in people's minds. Frames work most effectively when these two aspects match!

From this example, we can see that political actors can "package" an issue strategically, arouse different emotions in people, and distinguish supporters from non-supporters. Given the framing effect, it is evident that what matters is not just what you say, but how you say it! When a candidate or politician finds a "term" or "frame" acceptable to most people, chances are that (s)he will garner their support.

Friday, February 22, 2008

Framing CKS


The blatant act of the ruling Democratic Progressive Party (DPP) in renaming the CKS Memorial Hall and the plaque at its front entrance has captured the attention of the whole island.

Many people may be curious why the government would bother to change the name of a memorial hall and make it such big news.

This is because what's involved is not just a name change. It is a struggle between two diametrically opposed ideologies. To put it differently, it is a war between two "frames."

Frames are perceptual filters that tell you how to think about an issue and how to make sense of new information using your existing knowledge base. In this way, our brains do not have to learn new things all the time. Many scholars argue that frames are culturally relevant. We can see this dimension clearly in the renaming incident.

As the original name, CKS Memorial Hall, suggests, the former president was such a great leader that everyone should be aware of his contributions. He was so exalted that a huge bronze statue of him sits at the top of a long flight of stairs; people there feel more like pilgrims than visitors. The name and the structure of the memorial hall cast CKS as a historically memorable leader whom the country should honor for his contributions.

The other perspective is quite the opposite. People holding this view think of CKS more as a dictator who should be held accountable for the 228 tragedy of 1947, which led to large-scale social upheaval and deaths. The incident was later characterized by some as a conflict between Taiwanese and "outside-province people." In the eyes of these people, CKS obstructed the democratization of Taiwan. Since the ruling DPP has long branded itself as a party representing grass-roots voices, it is not surprising that its leaders would advocate changing "CKS Memorial Hall" to "Taiwan Democracy Memorial Hall."

As we can see, the images invoked by the two frames are different; that is, the way people look at the former president differs. One implies that CKS is a hero, the other that he is a villain. One recognizes CKS's contribution to the country; the other holds him responsible for the tragedy.

Therefore, whether you support or oppose the renaming depends on which frame you adopt. In this "war of words," there is no right or wrong. There are just differences!

Thursday, February 21, 2008

[Nanotechnology] Scientists worry more than the public


Research published in the November issue of Nature Nanotechnology indicated that nano-scientists worry more about the health and environmental impacts of nanotechnology than the general public does.

The research has garnered extensive media attention since the article was released, and many bloggers have discussed the implications of the findings. I am, therefore, not going to spend time repeating them.

What I will focus on here is the background story of the scientist survey that led to this article.

About a year ago, when I was back in Taiwan for vacation, I received an email from my professor about a seemingly easy assignment--looking up contact information for leading nano researchers in the US.

By contact information, I mean the institution they are affiliated with, their physical address, email address, office number, and everything else you can think of about a person. Since Google is such a powerful tool that you can find almost anything, I was not too "worried" about the task, especially since I was only responsible for 700 researchers (out of around 1,000 in the whole sample).

Things turned out to be messier and more time-consuming than I expected. First of all, because we were looking for "first authors" only, many of them were graduate students who had already graduated by the time their research got published and collected in our database. Second, not all of the researchers were affiliated with a university, which usually maintains more comprehensive information about its members. For those who worked in companies or research centers, such as IBM or DuPont, information was really difficult to gather. The third thing that complicated the search was the citation format used in academia: many nano authors are from outside the country, and it is extremely difficult to confirm the identity of people with the same last name and the same first initial. For example, you may find many instances of "Wang, T" across different universities and organizations.

As I rummaged through webpages trying to collect as much information as I could, I started to appreciate those laboratories with detailed Web sites explaining what they do and, most importantly, listing their staff and researchers. The coolest thing is that some of them even listed their past group members and their later whereabouts. This information saved my life!

I am glad that all this is done and the results have been published in an internationally acclaimed journal. This post is perhaps less academic, but it certainly shows the cumbersome process of collecting valuable data.

More detail about the study can be found with the reference listed below.

Scheufele, D. A., Corley, E. A., Dunwoody, S., Shih, T., Hillback, E., & Guston, D. (2007). Nanotechnology: Scientists worry about some risks more than the general public. Nature Nanotechnology.

For more discussion about this study, please visit: Nanopublic authored by Dietram A. Scheufele.

New site

This site is mainly for me to post articles and information related to my research interests, most of which concern the interplay between media, emerging science, and politics.

I'm not sure how well I can do or how far I can go, but setting up this blog is at least a nice first step. :)