One of the challenges of dealing with climate change is that it’s difficult to communicate to the public how much confidence the scientific community has in a particular theory. Here’s a hypothesis: people have a better intuitive grasp of relative comparisons (A is bigger than B) than they do with absolutes (we are 90% confident that “A” is big).
Assuming this hypothesis is true, we could do a broad survey of scientists and use them to rank-order confidence in various scientific theories that the general public is familiar with. Possible examples of theories:
- Plate tectonics
- Childhood vaccinations cause autism
- Germ theory of disease
- Theory of relativity
- Cigarette smoking causes lung cancer
- Diets rich in saturated fats cause heart disease
- AIDS is caused by HIV
- The death penalty reduces violent crime
- Evolution by natural selection
- Exposure to electromagnetic radiation from high-voltage power lines causes cancer
- Intelligence is inherited biologically
- Government stimulus spending reduces unemployment in a recession
Assuming the survey produced a (relatively) stable rank-ordering across these theories, the end goal would be to communicate confidence in a scientific theory by saying: “Scientists are more confident in theory X than they are in theories Y and Z, but not as confident as they are in theories P and Q”.
You start with nothing, a blank editor window and some LaTeX boilerplate, some half-baked ideas, a few axes to grind and a tremendous apprehension at how much your life is going to suck between now and the deadline.
Great post by Matt Welsh on writing scientific papers.
“She’s very good at everything, very smart,” Haas said of her daughter. “She loves chemistry, loves math. I tell her, ‘Don’t go into science.’ I’ve made that very clear to her.”
Where the jobs aren’t. Brutal.
The Scientific World Journal retracted two papers for padding their reference lists with superfluous citations to other papers in order to inflate citation metrics.
In the North American tradition of journalism, it is considered inappropriate for journalists to have actual opinions about the news stories they are covering. But humans have opinions, and everybody knows that, so suppressing this is a ridiculous fiction. What’s pernicious about this tradition is that journalistic writing is more compelling when authors write in their own voice, instead of the detached view-from-nowhere shtick that Jay Rosen (rightly!) complains about. It’s even worse in analysis-type pieces: the journalist is supposed to express an opinion in the piece, but isn’t allowed to do so explicitly, so what happens instead is that they find sources they agree with, and then publish quotes from those sources.
This piece by Dave Weigel from a few days ago is an example of the kind of journalistic writing that becomes possible if a journalist writes in their own voice. Great stuff, and the Washington Post is much poorer for having let him go.
When you’re programming, and the program you’re working on behaves in some way you don’t expect, then by definition there’s some assumption you’ve made about the system that’s incorrect.
Today I was working on adding some fields to a Django admin interface, and I got an unexpected error message when I tried to create a new object in the admin interface and save it. The code was very similar to other code I had previously written that already worked, so I tried to figure out what was wrong by identifying the difference.
The difference I couldn’t see was the name of a variable. In this particular case, the field name (image) was colliding with a field of the same name in a grandparent class, which caused the error. I hadn’t considered that the name of the variable was a meaningful difference that could cause a problem, until I searched Stack Overflow and found someone who had encountered a similar problem.
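The general shape of the bug can be sketched in plain Python (a minimal sketch, not actual Django code; the class and attribute names here are hypothetical): a subclass reuses an attribute name defined two levels up the inheritance chain, silently clobbering the grandparent’s field, and the error surfaces far from its cause.

```python
# Sketch of a name collision across an inheritance chain: the child
# reuses the name 'image', which the grandparent already defines and
# relies on in its save logic. (Hypothetical names, not Django's API.)

class Grandparent:
    def __init__(self):
        self.image = "grandparent-managed image"

    def save(self):
        # The grandparent assumes self.image is the string it set above.
        if not isinstance(self.image, str):
            raise TypeError("expected a string for 'image'")
        return f"saved {self.image}"

class Parent(Grandparent):
    pass

class Child(Parent):
    def __init__(self):
        super().__init__()
        # Meant to be a new, unrelated field -- but it's the same
        # attribute, so it overwrites the grandparent's value.
        self.image = None

try:
    Child().save()
except TypeError as e:
    print(f"error: {e}")  # the failure appears only on save
```

Renaming the child’s field removes the collision, which is exactly the kind of fix that’s hard to find when you don’t yet suspect the name itself.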
This is one of the hardest parts of programming: being able to systematically test your assumptions until you discover which of your current beliefs are false.