Monday, March 24, 2008

Framing the "One China market"


The "One China Market" was a hot topic during the presidential campaign. The idea was originally proposed by the Nationalist vice presidential candidate, Vincent Siew. He made the "cross-strait common market" analogous to the European Union, which is economically beneficial to its member countries.


The same issue was framed by the DPP camp as an "invasion of laborers from China and a possible influx of low-quality merchandise." This frame was expected to solicit the votes of farmers and industrial workers, whose livelihoods would be most affected by the policy.


The "black-hearted or adulterated products" frame didn't end up working very well, as public opinion polls failed to show significant changes longitudinally. The ineffective framing might be attributable to its inability to resonate with the general "mass" who weigh "economic prosperity" more heavily than " invasion of labors and bad products." In other words, the stagnant economy under the DPP regime has triggered people's stronger aspire for a healthier and better-off market, although people might also be worried about the downside associated with it.

This example suggests that frames are closely related to people's past experiences, beliefs, expectations, and so on. The frame that echoes better in people's minds usually "wins" the battle.

Another factor behind the unequal success of the two frames might be "trust." The DPP government has apparently lost people's trust over the past 8 years. Ma Ying-jeou, on the other hand, carries high expectations that he can turn the situation around. People's confidence in Ma therefore drew them away from the "invasion" frame.

For more coverage and framing of this issue, please see

Taitung farmers, workers oppose 'one China market'

Presidential election 2008: 16 days to go: Siew defends term `one China market'

Thursday, March 20, 2008

Nanotechnology and religiosity

An AAAS presentation in Boston showed a huge gap between the U.S. and several European countries in terms of opinions about whether "nanotechnology is morally acceptable."

In the US, only 29.5% of a sample of 1,050 respondents gave affirmative answers, whereas the figures in the UK, France, and Germany were well above 50%.

The primary researcher, Dietram Scheufele, attributed this difference to levels of religiosity. According to his analysis, Americans are relatively religious compared with people in the countries mentioned. The negative relationship between religiosity and moral acceptance of nanotechnology is even more conspicuous when more countries are taken into account.

There are criticisms of this study, of course. Some criticized the question wording, while others suggested the study ignored the science-skeptical nature of a post-modern society. Still others questioned whether the sample size was too small. In addition, knowledge or awareness of nanotechnology is thought to affect people's responses to the "morality" question, although Scheufele has ruled out that possibility.

Many people may have the same question I do: what makes nanotechnology morally loaded? The author mentioned that the "playing god" element embedded in the development of nanotechnology, similar to that in stem cell research or biotechnology, is where the morality issue kicks in. So the question becomes: do people's attitudes toward other controversial technologies affect their attitudes toward nanotechnology? Is there an underlying attitudinal "factor" that transcends different types of technologies? In other words, is it possible that people don't know much about nanotechnology, but their past experiences with biotechnology, cloning, or stem cell research step in as "heuristics" that provide mental guidance for their attitudes toward nanotechnology?

I personally find these questions intriguing and think somebody should investigate them someday.

More readings:

Two-thirds of Americans think nanotechnology is morally unacceptable -- wait, what?, by Engadget.com

Nanotechnology Is Morally Unacceptable, by Business Technology Blog



Sunday, March 9, 2008

Prediction market part 2

Just want to post some additional news or info about the prediction market mentioned in my last article.

The technique has been widely applied to a variety of events, including the recent Olympic qualification game in Taiwan. The market showed a growing value for the Taiwanese team's "stock" after it beat Mexico last Friday, suggesting public confidence in the team. Click here to read the news.

There are also predictions for the presidential election that will take place two weeks from now. Right now the predictions (or I should say, the traders) favor Ma and give him a 20% lead. Click here to read the news.

For more information about the introduction of this approach and the comparison between prediction markets and polls, please visit the Prediction Market Center at NCCU or the Mandarin version of Scientific American.

Or go to Swarchy (未來事件交易所, the "Future Events Exchange") to chip in your wisdom.

Friday, March 7, 2008

Markets vs Polls


The March issue of Scientific American published an article: When Markets Beat the Polls.

The first thing I want to say is that I'm glad public opinion polls are considered "scientific" and discussed in a periodical related mainly to pure science and technology.

Having read it, however, I found the article is not really about polls per se. Polls were used as a contrast to the main topic: market analysis.

The concept of the stock market was applied to predicting elections by a group of University of Iowa scholars in 1988, when Bush senior and Dukakis were vying for the presidency. The researchers set up an Internet-based interface so that people (traders) could buy "contracts" on Bush or Dukakis. It functions like a stock market, where people buy or sell stocks depending on how they expect those stocks to perform in the future. So, for example, if a Bush contract costs $.53 to buy, it may signal that Bush would obtain 53% of the votes on election day.
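
As a rough illustration (not the Iowa researchers' actual trading mechanism; the function names and numbers below are mine), here is a minimal Python sketch of the logic behind a vote-share contract that pays $1 times the candidate's final vote share:

def expected_payoff(believed_vote_share):
    """Expected value of one vote-share contract paying $1 * final vote share."""
    return 1.0 * believed_vote_share

def should_buy(believed_vote_share, market_price):
    """A risk-neutral trader buys when the contract looks underpriced to them."""
    return expected_payoff(believed_vote_share) > market_price

market_price = 0.53              # the $.53 Bush contract from the example above
for belief in (0.48, 0.53, 0.58):
    action = "buy" if should_buy(belief, market_price) else "sell or hold"
    print("belief = %.2f, price = %.2f -> %s" % (belief, market_price, action))

Buying pressure from traders whose beliefs exceed the price, and selling pressure from the rest, is what pushes the price toward the crowd's aggregate forecast of the vote share.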

Although the characteristics of the traders are not nationally representative, the Electronic Market often predicted better than public opinion polls, according to some data in the article. The "free market" and "wisdom of crowds" ideas are definitely fascinating. But the market's underlying assumptions somewhat contradict my recent understanding of voting behavior.

First, economists have long believed that people make "rational" decisions, meaning that people weigh risks and benefits and choose whatever maximizes their advantage. However, in the field of public opinion and elections, it has been found that people are not always rational--they don't collect every bit of information they need when deciding whom to vote for. In other words, the public are not information "maximizers." They are information "optimizers," who use whatever information is accessible at the time of decision making. There are even occasions where people don't need information at all.

As Popkin illustrated in his seminal book "The Reasoning Voter: Communication and Persuasion in Presidential Campaigns," voters sometimes rely on heuristics or information shortcuts when making voting choices. These shortcuts save people a lot of cognitive energy that would otherwise be devoted to rummaging through different pieces of information. Some examples of heuristics include party identification, religious beliefs, and ideological preferences.

So, if people are not rational most of the time, how can a mechanism based on rational decision making function so well? Well, I don't know.

The second point is about how closely the outcome is related to the individual. In an election, many people couldn't care less about who gets elected because they don't think their lives would be affected. However, in some of these electronic markets, traders are investing real money, and the final outcome determines how much they earn or lose. How people make decisions in these two different scenarios is therefore interesting for further investigation.

But this is definitely an intriguing topic, especially because it provides a different way to measure public opinion. Unfortunately, there is no free electronic copy of the article to share with those interested.

Monday, March 3, 2008

Content analysis and intercoder reliability

Content analysis is a major subfield of communication research. The world-famous agenda-setting study has content analysis as one of its two important components. Understanding what's in the media, after all, is the precursor to understanding what effects the media have.

There are a variety of ways to investigate media content. Qualitative researchers employ "textual analysis," whereas quantitative researchers use "content analysis." They are similar methods, except that content analysis is touted for its "systematic and scientific" approach to analyzing media content.

Content analysis often involves a coding process in which researchers pre-define a series of variables of interest and tally how frequently these variables appear in news articles. This coding process, therefore, brings up the question of reliability. That is, if other researchers were to do the same thing, could they get similar results using the same coding scheme? This is the question of inter-coder reliability. For a detailed review of methods for calculating it, please see here.

Therefore, before researchers start to "code" stories, they have to make sure that they agree on as many cases as possible with respect to the presence or absence of a particular variable. This is the "systematic" part of the method. The process is absolutely tedious and arduous, but it also reflects how credible the study is. Obtaining an acceptable level of reliability is therefore a basic requirement for getting a study published.

There are different ways of measuring inter-coder reliability. Some people calculate "percentage of agreement." This method has been criticized for being too lenient. Other approaches, such as Krippendorff's Alpha and Scott's Pi, expand on it and correct for the effect of chance, that is, the possibility that the coders produce consistent results simply by accident. This effect is especially salient when a variable has only two categories.
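
As a rough sketch of how the two coefficients differ, percentage agreement and Scott's Pi for two coders on a single nominal variable could be computed like this in Python (the function names and data are illustrative, not taken from any particular package):

from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of units on which the two coders assigned the same code."""
    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    return agreements / float(len(coder_a))

def scotts_pi(coder_a, coder_b):
    """Observed agreement corrected for chance agreement, where chance is
    estimated from the pooled distribution of codes across both coders."""
    observed = percent_agreement(coder_a, coder_b)
    pooled = Counter(coder_a) + Counter(coder_b)
    total = float(len(coder_a) + len(coder_b))
    expected = sum((count / total) ** 2 for count in pooled.values())
    if expected == 1.0:   # every unit received the same single code
        raise ZeroDivisionError("Pi is undefined when chance agreement is 1")
    return (observed - expected) / (1 - expected)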

Which method to use depends on how stringent you want your research to be. But using Scott's Pi is not necessarily superior to using percentage of agreement, especially since some have started to criticize Scott's Pi for being too strict. My own experience with Scott's Pi is that it is really difficult to please. For instance, all the values of a variable (e.g., 1, 2, 3...) must actually appear in the coded data for Scott's Pi to be computed. If, say, the president never appears as a news source in any of the stories coded and you and your coding partner reach 100% agreement on this, Scott's Pi will still tell you that the reliability coefficient is not calculable. If, unfortunately, the president appears once in the news stories and only one of the coders catches it, the reliability coefficient will be extremely low, even though the percentage agreement may be close to 100%. This is very tricky!
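
Using the hypothetical functions sketched above, the two situations described here can be reproduced with made-up data:

all_absent = [0] * 100                       # neither coder ever codes "president"
print(percent_agreement(all_absent, all_absent))   # 1.0 -- perfect agreement
# scotts_pi(all_absent, all_absent) raises an error: chance agreement is 1,
# so Pi is not calculable.

one_catch = [0] * 99 + [1]                   # one coder catches a single mention
print(percent_agreement(all_absent, one_catch))    # 0.99
print(scotts_pi(all_absent, one_catch))            # about -0.005, despite 99% agreement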

Most journals require authors to report reliability for each variable coded. But some journals require only an average coefficient, which saves a lot of trouble because bad variables can be averaged out by good ones.

Having seen the time-consuming and tedious process of content analysis, communication scholars have offered their sincerest suggestion to their successors--just don't do it!

For examples of quantitative content analysis, please see McComas and Shanahan (1999): Telling Stories About Global Climate Change and Nisbet et al. (2003): Framing science.