Saturday, October 25, 2008

Nanotechnology data

Here are some newly found data sources on public perception of nanotechnology.

Nanoforum is a European organization that publishes events, reports, and news about nanotechnology. One of its reports actually gives a decent overview of the debates surrounding nanotechnology, i.e., the balance between risks and benefits. For people who don't know much about nano, it is a good starting point.

Benefits, Risks, Ethical, Legal and Social Aspects of Nanotechnology

Although the report is not data-driven, it includes some findings from other research institutes. For instance, there is information about media coverage of nanotechnology in Germany. It also includes an Internet poll conducted by the Royal Society in the UK.

Of course, when talking about datasets in Europe, the European Commission is a good resource. The "Publications and Events" page on the CORDIS website has tons of reports and survey results, although the survey pages don't seem to work.

From the CORDIS website, I found a Eurobarometer special survey (#224, 2005) that asked questions related to nanotechnology. It seems comparable to another survey from 2001, but I haven't checked yet.

Monday, October 13, 2008

More about culture, values, and structural differences around the globe

Worldviews and values

The Gallup World Poll looks very cool, but it doesn't seem to have much information about Taiwan.

Henk Vinken's website: Henk Vinken is a sociologist in the Netherlands who has done a lot of comparative studies on culture and values. The website contains relevant publications and links. Absolutely useful.

Schwartz Value Survey.

International Social Survey Programme.

Country development

World Development Indicators by the World Bank

• Internet penetration rate (http://www.internetworldstats.com/top25.htm) (http://www.internetworldstats.com/stats.htm)
• Press freedom (http://www.freedomhouse.org/template.cfm?page=16&year=2007)
• Newspaper circulation rate – Table 13 (pg 99) of the "Statistical Yearbook" of the United Nations. The 2006 volume is available at the Memorial Reference Desk, call number HA 12.5 U63v.2006
• Television sets penetration rate (see links below)
• Telephone lines / Mobile phones penetration rates (see links below)
• Personal computers penetration rates (see links below)
http://www.itu.int/ITU-D/ict/statistics/
http://www.itu.int/ITU-D/ict/handbook.html
http://www.unctad.org/Templates/webflyer.asp?docid=9479&intItemID=1397&lang=1&mode=downloads

Wednesday, October 1, 2008

Data Analysis in practice

I'm not sure if you have felt the same thing: you have worked with a professor for so long that you think you know everything s/he knows, yet every time s/he impresses you with something new. This is how I feel about my advisor--DAS.

He is offering a data-analysis course this semester, which covers a wide range of topics, from pre-analysis data cleaning and index building to various analytical approaches, using SPSS as the statistical software. Since I have been working with data for years, I assumed this class wouldn't be a big challenge for me; I figured that even if I learned just one new thing in each class, that would not be too bad. However, what I am getting turns out to be more than I expected. (I hope this doesn't mean I knew little before taking this class.)

The first class I sat in on was about building a composite index; for example, a "newspaper use" variable. I know the technical process of doing this exactly, but why should we build a multiple-item index? I was once questioned by a journal reviewer about my "talk" variable, which was based on a single survey question. What is wrong with a single-item measure?

Basically, this at least partially has to do with the idea of systematic error, which refers to data going missing through mechanisms that do not affect everyone equally. For example, when asking about people's income level, those who are wealthy tend to be more conscious of and sensitive to the question and have a higher chance of skipping it. This is called a systematic error because it happens not to everybody, but only to those who don't feel comfortable answering the question (usually the wealthier ones).
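
To make the income example concrete, here is a minimal sketch (my own illustration, not from the class) that simulates systematic nonresponse in Python. The income distribution and the skip probabilities are entirely made up.

import numpy as np

# Simulate systematic nonresponse: the probability of skipping the income
# question rises with income (made-up numbers for illustration only).
rng = np.random.default_rng(0)
income = rng.lognormal(mean=10.5, sigma=0.6, size=10_000)

# Wealthier respondents are more likely to skip the question.
p_skip = np.clip((income - np.median(income)) / income.max(), 0, 0.8)
answered = rng.random(income.size) > p_skip

print(f"True mean income:      {income.mean():,.0f}")
print(f"Mean among responders: {income[answered].mean():,.0f}")
# The respondent mean comes out lower than the true mean, because the
# missingness mechanism depends on the very value being measured.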

Does putting up a yard sign, displaying a bumper sticker, or donating money each constitute a valid measure of political participation? The answer might be no. Those who put up a yard sign need to have a lawn in the first place. By the same token, for people to display bumper stickers, they need to have a car. In addition, those who can donate are usually economically better off. Each of these measures favors a demographic with higher SES and so cannot, on its own, be considered a comprehensive measure of participation. That's why researchers usually combine all of these variables into a composite index that gives a clearer picture of the concept of "participation."
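
For illustration, here is a rough Python sketch of what such a composite index might look like; the item names and data are hypothetical, and the simple additive scoring is only one of several options.

import pandas as pd

# Hypothetical single-item participation measures (0 = no, 1 = yes).
df = pd.DataFrame({
    "yard_sign":      [0, 1, 0, 0, 1, 0],
    "bumper_sticker": [0, 1, 0, 1, 1, 0],
    "donated":        [0, 1, 1, 0, 1, 0],
    "attended_rally": [1, 1, 0, 0, 1, 0],
})
items = df[["yard_sign", "bumper_sticker", "donated", "attended_rally"]]

# Simple additive index: each item contributes equally, so no single
# SES-biased behavior dominates the measure.
df["participation_index"] = items.sum(axis=1)

# Cronbach's alpha as a rough check of internal consistency.
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(df["participation_index"].tolist())
print(f"Cronbach's alpha: {alpha:.2f}")

An additive index like this treats each behavior as one of several interchangeable indicators of the underlying concept, which is exactly the point of going beyond a single-item measure.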

Another source of systematic error comes from the tendency of some people to give "socially desirable" answers--people are prone to giving answers that society approves of. Large survey institutions employ several strategies to deal with this. For example, the GSS matches interviewers and interviewees on gender and race in order to solicit valid answers. One approach I found very interesting is asking respondents about a "hypothetical policy proposal" that does not exist at all. Respondents who say they have heard of it are the ones who tend to provide socially desirable answers.
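
As a small sketch of how that screen might be used in practice (the variable names here are hypothetical, not the GSS's), one could flag respondents who claim familiarity with the nonexistent proposal and compare their answers with everyone else's:

import pandas as pd

# Hypothetical survey data: "heard_fake_policy" asks about a proposal
# that does not exist, so a "yes" flags a socially desirable responder.
survey = pd.DataFrame({
    "resp_id":            [1, 2, 3, 4, 5],
    "heard_fake_policy":  ["no", "yes", "no", "yes", "no"],
    "supports_recycling": ["yes", "yes", "no", "yes", "yes"],
})

survey["sd_flag"] = survey["heard_fake_policy"].eq("yes")

# Compare a question prone to social desirability bias across the two groups.
print(survey.groupby("sd_flag")["supports_recycling"]
            .apply(lambda s: (s == "yes").mean()))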