Every semester in my introductory classes, I do an assignment on push and pull factors and how they relate to migration. Push and pull factors are a relatively easy concept to understand: push factors are those ideas about a place that “push” people away from living there, while pull factors are those perceptions that attract people. It’s (really) old news for population geographers, and there have been plenty of critiques of the conceptualization, but it’s nice and tidy for getting intro-level undergrads interested in migration.
Usually, textbooks contain some sort of perception map to talk about this, based on mental maps of some undergrads at a random institution at which the textbook authors teach. Like this one from whichever textbook:
It’s all well and good that a textbook shows this map, but it’ll maybe warrant a few seconds’ glance from the students (IF they open the text) and that’s it. So, why not expand on the idea and have the students see what they themselves think?
The assignment I give them is based on their own perceptions. I have them simply pretend that they’ve graduated with their bachelor’s degrees and are now sought after by firms in every single state. Then, I give them a map and have them rate each of the 50 states, on first gut reaction, with a score from 1 (personal hell) to 10 (a land of milk and honey) as a potential place to live. In groups, they come up with maps displaying averages of their ratings, and then the groups discuss why places are rated highly or lowly. They never get the resolution of analysis that the map above has, but the exercise is far more personal and links much better with students’ interests.
This, of course, leads into larger course discussions, like those about brain drain (few ever rate Ohio very well) and the power of perceptions (California and Hawaii are usually the most positively rated states, yet few students have ever visited either). The discussion almost always touches on the lack of spatial resolution in this exercise, too: one student gives New York a rating of 10 because they’re thinking of NYC, while another gives it a 2 because the only New York they know is Buffalo. It also leads nicely into topics like qualitative versus quantitative data, and serves as a really good gateway into healthy skepticism about statistical analysis, raising issues like sample size, interpolated data, and other such critical thinking matters.
This semester, I put the assignment to two different classes at the University of Akron:
- One Introduction to Geography course, mostly freshmen with a few sophomores, 8:50am MWF. The course enrollment is 41, but 30 were there to complete the assignment.
- One Geography of Cultural Diversity course, all sophomores and juniors with two semesters of English composition as prerequisites, 11:00am TR. The enrollment is 34, but only 25 were present that day.
Students were told to rate each state from 1 to 10, with 10 being the absolute best. Beyond the ratings, they were to designate the absolute best state with a star, and the absolute worst with a frown face. After their group exercises and discussions, I collected the assignments and combined the data into some quickie maps, purposely ignoring cartographic niceties for the sake of clarity, consistency and speed (really, I can do better!), and posted them on the course Facebook pages. The results are what you see below. I think they’re exceptionally interesting, not only in terms of which states are desired, but in the differences between classes.
Map Set One: Raw Averages from Each Class.
Averaged ratings from all students. Easy.
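For anyone curious how that combining step looks in practice, here’s a minimal sketch of averaging each state’s ratings across a class. The state names and individual scores below are invented for illustration, not the actual class data:

```python
# Hypothetical per-student ratings (1-10) for a few states.
# Real data would cover all 50 states and every student present.
ratings = {
    "Ohio": [7, 5, 6],
    "California": [9, 8, 10],
}

# Average each state's ratings across students.
averages = {state: sum(scores) / len(scores)
            for state, scores in ratings.items()}
```

Each state's average is what gets mapped; a spreadsheet does the same job just as well.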
In this class, North Carolina gets the highest accolades, followed by South Carolina, California, Ohio and New York. This class thinks that South Dakota’s the worst, followed by North Dakota, Nebraska, Minnesota and Idaho.
Map Set Two: “Stretched” Ratings from Each Class.
Using some simple arithmetic, I transformed the lowest-rated state into a rating of 1 and the highest-rated into a rating of 10, and stretched the ratings in between. It’s a good way to open the discussion about data manipulation as a truer window into respondent intention. The rankings, of course, remain identical.
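The “stretch” is just a linear min-max rescale onto the 1–10 range. A small sketch, with made-up averages standing in for the real class data:

```python
# Hypothetical averaged ratings; the numbers are invented for illustration.
averages = {"California": 8.2, "Ohio": 6.1, "South Dakota": 3.4}

lo = min(averages.values())
hi = max(averages.values())

# Linearly rescale so the lowest average becomes 1 and the highest becomes 10.
stretched = {
    state: 1 + (avg - lo) * (10 - 1) / (hi - lo)
    for state, avg in averages.items()
}
```

Because the transformation is linear and increasing, every state keeps its rank; only the spread changes, which is exactly the point of the classroom discussion.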
Map Set Three: Best State Votes.
A simple count of how many students, in each class, rated a state as the most preferred.
Map Set Four: Worst State Votes.
A simple count of how many students, in each class, rated a state as the least preferred.
Map Five: My Ratings.
I always end up being asked what my ratings are, so I figured it’s only fair to share. I make my own map each time and post it alongside the combined maps on Facebook. Here’s mine for this year… and oddly enough, it too changes every year.
That’s enough for now. Maybe I’ll revisit this topic later this week and include some of the students’ explanations behind their ratings.