November 12, 2010

is there any validity in academic testing?

Now that I'm getting ready to apply for jobs and graduate programs, I've had to come into contact with my transcript more than I would have liked to. Over the past four years, I've had my share of abysmal grades. And I'm not being dramatic: some semesters have been about as atrocious as they can get. When I think back to the courses in which I received such poor grades, I realize that the classes were based almost solely on exam grades. You do poorly on an exam or two, and you don't really have an opportunity to help yourself. Is that really fair? Grades, after all, are supposed to be indications of how well a student understands the course material, not punishments for inadequate or misguided study habits.
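To make that point concrete, here's a quick back-of-the-envelope sketch with invented scores and invented weights (none of these numbers come from a real syllabus; they're just to illustrate how much two bad exams can sink a grade when almost nothing else counts):

```python
# Invented example: same student, same scores, two different grading schemes.
scores = {"exam1": 62, "exam2": 68, "exam3": 85, "paper": 92, "presentation": 90}

# Scheme A: the grade is based entirely on exams
exam_only = (scores["exam1"] + scores["exam2"] + scores["exam3"]) / 3

# Scheme B: the grade is spread across exams and other assignments
weights = {"exam1": 0.20, "exam2": 0.20, "exam3": 0.20,
           "paper": 0.25, "presentation": 0.15}
spread_out = sum(scores[name] * w for name, w in weights.items())

print(f"exam-only grade:  {exam_only:.1f}")   # ~71.7
print(f"spread-out grade: {spread_out:.1f}")  # ~79.5
```

Same student, same exam performance, and nearly a letter grade of difference depending on how the syllabus is weighted.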

I get that "intelligence" or "knowledge of a subject" needs to be operationalized somehow in order to make comparisons. While an exam is a way to numerically gauge how much a student has retained about a subject, it almost never truly displays their understanding of it. When you see a student cramming for an exam - whether it be biochemistry or history - you see them hunched over a notebook full of facts and figures, attempting to jam their head full of as much as possible, in the hope that come exam time, they will be able to regurgitate enough to form cohesive answers.

If exams really did a good job of assessing knowledge of a subject, why do students dread cumulative exams so much? If we had truly gained an understanding of material we were previously tested on, we would prefer a cumulative exam, which would allow us to integrate material from multiple exams. If professors are really going to use testing to assess a student's understanding of material, they should make exams cumulative, because to me, mastery of a subject requires integration rather than the ability to report on bits and pieces of material.

While standardized testing has been continually lambasted as an inadequate measure of how prepared a student is for college or graduate school, there isn't much of an alternative. Once you're at school, however, professors have an entire semester to gauge their students' understanding of the course material. By spreading the grade out over a variety of assignments, some of which are subjectively graded by the professor, not only will students who consider themselves "bad test-takers" feel that they have an equal opportunity to do "well" in the class, but the reduced reliance on extrinsic motivation would most likely spur students' interest in the course material itself. Isn't that what college courses are supposed to be about anyway? Spending four years consistently worried about grades and cramming could give students a propensity to dislike fields they would otherwise be interested in.

It would be interesting to perform some kind of meta-analysis comparing students who take classes in which grades are overwhelmingly based on exams with students whose grades are based more on presentations, papers, and integrative assignments. Not only would it be cool to see how GPAs differ, but also to see which students pursue careers directly related to what they studied in college. How satisfied were they with their collegiate experiences? How do they perform on other, non-academic, integrative and/or memory tasks? Are there really such people as "bad test-takers"?
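As a minimal sketch of one slice of that comparison - not a real meta-analysis, just a single pairwise test on made-up GPA numbers, where the group labels, sample values, and the choice of Welch's t-test are all my own assumptions:

```python
# Hypothetical sketch: compare final GPAs of students from exam-heavy courses
# vs. assignment-heavy courses. All numbers below are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented samples standing in for the two (hypothetical) groups of students
exam_heavy_gpas = rng.normal(loc=3.0, scale=0.5, size=200).clip(0, 4)
assignment_heavy_gpas = rng.normal(loc=3.2, scale=0.4, size=200).clip(0, 4)

# Welch's t-test (doesn't assume equal variances between the two groups)
t_stat, p_value = stats.ttest_ind(exam_heavy_gpas, assignment_heavy_gpas,
                                  equal_var=False)

print(f"mean GPA difference: "
      f"{assignment_heavy_gpas.mean() - exam_heavy_gpas.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A real version would of course need actual transcripts, controls for major and course difficulty, and pooled effect sizes across many schools rather than one simulated comparison.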

November 4, 2010

two key aspects to successful comparative psychology research

This semester, I'm taking a seminar on Comparative Psychology, and have found myself repeatedly frustrated with an overwhelming majority of the studies we have read. Maybe I'm becoming too opinionated a reader, but it seems to me that two fundamental things are missing from a lot of these studies. When trying to understand whether or not non-human subjects possess specific psychological abilities (e.g. theory of mind, self-recognition, reciprocal altruism), you must keep two things in mind. First, animals may not perform the behaviors we are looking for without adequate motivation to do so. Second, completely novel situations may provoke an animal to react in a way in which it would not customarily react. Therefore, it is important to provide situations that are somewhat familiar to ones they might encounter in their day-to-day lives in their natural environment, the one to which their species has adapted (unless, of course, using a novel situation removes a bias in the experiment).

(1) Experimenters should provide adequate motivation for the subject to elicit the behavior: In an attempt to ascertain whether or not chimps possess the ability to use logic to display the most efficient search technique available (Call & Carpenter, 2001), the experimenters baited one of three tubes with a reward and gave the subjects the opportunity to search for it. They called systematic/exhaustive search techniques "inefficient", but without any time-based or punishment-based motivation, why shouldn't the subjects perform exhaustive searches? As a human, it would be smart to do so. If there were no cost for me to check every option for a reward 2-3 times, why wouldn't I? After all, why should I assume there is only one reward? Maybe if I only had a certain amount of time before my options were taken away, I would be forced to employ a more efficient strategy. Perhaps in this study, after 60 seconds in each trial, the experimenter should have covered the apparatus, ending the trial. Eventually, the chimpanzees would realize they had limited time to make their best attempt at finding the food reward, and would begin employing more logical/"efficient" strategies. If they did not, then perhaps we could conclude they do not have the potential to do so.
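Here's a toy version of that cost/benefit argument, with made-up numbers (the 25-second handling time and the payoff model are my assumptions, not anything from the study): when time is free, checking every tube costs nothing and always pays off, so exhaustive search is perfectly rational; add a 60-second cutoff and it stops being a sure thing.

```python
# Toy illustration (invented numbers) of why exhaustive search is rational when
# time is free, and stops being rational once each trial has a 60-second cutoff.
HANDLING_TIME = 25   # assumed seconds to lift and check one tube
TIME_LIMIT = 60      # the cutoff proposed above
N_TUBES = 3          # one of which hides the reward

def p_reward(tubes_checked, time_limit=None):
    """Chance of finding the single baited tube when checking tubes at random."""
    if time_limit is not None:
        tubes_checked = min(tubes_checked, time_limit // HANDLING_TIME)
    return min(tubes_checked, N_TUBES) / N_TUBES

print(p_reward(N_TUBES))              # no cutoff: exhaustive search always wins -> 1.0
print(p_reward(N_TUBES, TIME_LIMIT))  # with the cutoff, only 2 checks fit -> ~0.67
# A searcher that uses what it saw to check the right tube first would still
# succeed within the cutoff -- which is the whole point of adding one.
```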

(2) Studies should provide situations that are relevant to the natural habitat of the subject species: A study in 2005 (Hattori & Kuroshima) attempted to decipher whether or not capuchin monkeys possess the ability to cooperate with one another to accomplish a common goal. Their results showed that the monkeys spent a significantly greater amount of time looking at their partner when they needed help on a task. This result conflicted with previous studies (Visalberghi 1997; Visalberghi et al. 2000) that had failed to provide any evidence of communication during cooperative tasks. The difference here is that the 2005 study used a task that was more intuitive for capuchin monkeys, whereas the earlier studies used more unfamiliar scenarios. In the earlier studies, the monkeys may simply have been more confused about the task in general, and may not have fully understood that it required their partner's cooperation. In the end, laboratories are only logistical necessities, and what's more useful is understanding whether or not animals can perform in their natural environments.

Basically, in order to conclude that a subject with whom you cannot communicate does not possess a specific mental ability, you must design an experiment that does its best to elicit it. Only then can you conclude that the subject does or does not have such a capability.

October 31, 2010

maybe liberals just can't help themselves...

I'm a pretty firm believer that just about every single aspect of an individual, from their personality/tendencies to appearance and everything in between, is gene-regulated. That being said, I don't think I ever considered the possibility that you're born with a predisposition to be either Conservative or Liberal.

DRD4, the gene that codes for dopamine receptor D4 in humans, has recently been linked to a tendency for individuals to be politically liberal. A medical genetics professor at UC San Diego, in reference to the new finding, asserts that "we hypothesize that individuals with a genetic predisposition toward seeking out new experiences will tend to be more liberal - but only if they had a number of friends growing up." But being more open to new experiences is only one aspect of having a "more liberal personality", if it's even that.

Whether you consider yourself a Republican, a Democrat, a Libertarian, or even a member of the Rent is Too Damn High Party, you're only one to a certain degree. Political ideology lies on a spectrum, given the amalgam of platforms and issues. Can we really associate an individual's propensity to try new things with being politically liberal? Are there any personality traits we can really link to voting one way or the other? I personally think political views are one of those things that are mostly influenced by environmental factors (SES, upbringing, friends, etc.) rather than genotype, which could explain why and how an individual's political views can shift with age and environment.

October 26, 2010

when is it no longer "just a fluke"?

This is a screenshot I took from the NHL website a few days ago, when the New York Islanders were leading the Eastern Conference, and the league, in points.

The Islanders, who finished 26th (out of 30) in the league last season, and who have been less than impressive for the better part of the decade, seem to be holding their own. Even without their "best" players, who were injured in the preseason, and without having signed any big-time players during the off-season, the Isles are managing points in almost every game. How is this happening?

According to Rangers and Leafs fans, of course it's a fluke. But how many games will the Isles have to win to shut 'em up? In a season of 82 games, will we have to wait for 20 games? 40? Or is it about consistently pulling out W's against the "hardest hittin' teams in the league"? Or will we have to wait and see who scoops up the coveted playoff spots? It's usually a fan's defense mechanism to put down a rival team's success by calling it a "fluke", but how long do I have to wait before I can legitimately remind Rangers fans how much more money they spend per year on "star" players who always seem to disappoint?

By the same token, I think the Jets' 5-1 record has finally proven to be enough to stop Giants fans from calling their season a "fluke" - especially given that the season is only 16 games long compared to the NHL's 82...
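For what it's worth, the "fluke" question is really a sample-size question: how many results do you need before chance alone becomes an implausible explanation? Here's a rough sketch that treats each game as a 50/50 coin flip for a genuinely average team - an obviously crude assumption that ignores overtime/shootout points, schedule strength, and everything else that makes sports interesting:

```python
# Rough sketch: how unlikely is a hot start if the team were really just average?
# Each game is treated as an independent 50/50 coin flip -- purely illustrative.
from math import comb

def prob_at_least(wins, games, p=0.5):
    """P(at least `wins` wins in `games` games) given a per-game win probability p."""
    return sum(comb(games, k) * p**k * (1 - p)**(games - k)
               for k in range(wins, games + 1))

print(prob_at_least(15, 20))  # a 15-5 start: ~2% if the team were truly average
print(prob_at_least(5, 6))    # a 5-1 start:  ~11% under the same coin-flip assumption
```

By that crude standard, 20 hot games are a lot harder to wave away than 6, which is roughly the intuition behind waiting out the skeptics.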

October 23, 2010

a spray of DNA keeps the bad guys away?

Some businesses in Holland have been using a new anti-burglary system - one that sprays synthetic DNA on robbers as they walk out the door. The spray is deployed by an employee without the robber's knowledge - it's odorless and invisible - and it alerts the local police department. The synthetic DNA (which doesn't even cost that much to produce) is specific to the store from which it's deployed, and is meant to help cops link burglars to the scene of the crime. Businesses that have used this system (usually installed by the police departments) have reported declines in crime rates, although there is no hard data yet.

What could be causing this decline? The synthetic DNA alarm system hasn't yet been used to identify a criminal, but it has been triggered accidentally many times, which suggests it functions as a scare tactic more than anything else. Businesses that have this system installed are required to post a sign outside alerting customers. Even if it weren't required, it'd be a good move. The appearance of "DNA" on a sign outside a store definitely deters prospective burglars. I don't see how synthetic DNA spray is any more effective than UV ink, but fear of the unknown is daunting enough for most people. Even glancing at the readers' comments below the NYTimes article, I'm surprised at how little people actually understand about the use of DNA. Some worry about the prospect of being sprayed with "hybrid-human DNA", without realizing that synthetic DNA is completely inactive and would cause no harm to the person it's sprayed on. On a more general level, I think criminals tend to associate "DNA" with "getting caught", which is enough to dissuade them.

In this case, the fear of the unknown appears to be effective enough to discourage robbers. It'll be interesting to see raw data pertaining to crime rates though...