This is one of those infographics that is not so easy to read, but well worth the effort. It maps out the flow of scientific research talent across 16 countries. Created by information designer Giorgia Lupi and her team in Italy as a follow-up to several celebrated graphics (this and this), it was not initially apparent to me that it is a scatter plot.
Some things are better discovered than obvious.
The X axis represents the percent of the country’s GDP invested in research and development, with Sweden, Japan, Denmark and Switzerland leading the field. The Y axis represents the number of researchers in the country for every one million people. At the top here are Denmark, Japan, Sweden and the U.S.A.
The lines show the migration of scientific researchers. For instance, Denmark exports talent to Great Britain and the U.S. The U.S. exports to Canada, Germany, Great Britain and Australia. Just about everybody exports to the U.S.
Of particular interest is the percentage of foreign and emigrant researchers in each country compared to its total foreign and emigrant residents.
All in all, it’s worth a study by teachers and STEM and social studies students.
In the past I have done word counts from ISTE’s conference programs to illustrate topic trends, especially during the early days of the social web. It’s fairly unscientific.
This year, I decided to do the same, but also use the activity as an opportunity for playing around with Tumult HYPE 1.6, a Macintosh app for creating HTML5 animations. From their web site:
Tumult Hype’s keyframe-based animation system brings your content to life. Click “Record” and Tumult Hype watches your every move, automatically creating keyframes as needed. Or, if you’d prefer to be more hands-on, manually add, remove, and re-arrange keyframes to fine-tune your content.
As is often the case, I probably paid more attention to pushing the tech than to perfecting the communication. This was a learning experience, after all. Take a look and see what’s trending down and what few topics are trending upward. Click [here] to see the animated infographic.
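For the curious, the word-count approach behind these trend charts can be sketched in a few lines of Python. This is a minimal illustration, not my actual script, and the stopword list and program excerpts below are made up for the example:

```python
# A minimal sketch of counting topic terms in a conference program's text.
# The stopword list and sample text are illustrative only.
import re
from collections import Counter

STOPWORDS = {"the", "and", "for", "with", "a", "of", "in", "to"}

def topic_counts(program_text, top_n=5):
    """Return the top_n most frequent non-stopword terms."""
    words = re.findall(r"[a-z']+", program_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

# Comparing two hypothetical program excerpts hints at a trend:
program_2010 = "blogs and wikis for the classroom, podcasting with students"
program_2012 = "mobile learning, apps for mobile devices, flipped classroom"
print(topic_counts(program_2010, top_n=3))
print(topic_counts(program_2012, top_n=3))
```

Tallying the same terms across several years of programs, then charting the counts, is all the "science" there is to it.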
I have felt bad about not blogging lately. It’s partly because of travel, but mostly because of three projects that have drawn most of my attention lately. One of those has been preparation for the NCTIES conference later this week. It’s a special event for me because NCTIES is the ISTE affiliate for my home state and also because it is an especially successful conference. This year’s featured speakers include Richard Byrne, Patrick Crispen (regular), Rushton Hurley, Peggy Sheehy, Kathy Schrock (regular) and Tammy Worcester, with a kickoff keynote by Ken Shelton.
One of my presentations will explore instructional potentials of data visualization and infographics and in preparing for this session, I found one of the coolest things I’ve seen in a while. I ran across the link via Nathan Yau’s Flowing Data blog, where he quoted Jeffrey Winter…
There was an idea floating around that continuously following the first link of any Wikipedia article will eventually lead to “Philosophy.” This sounded like a reasonable assertion, one that makes a certain amount of sense in retrospect: any description of something will typically use more general terms. Following that idea will eventually lead… somewhere.
Winter’s explanation of how he accomplished a test for this idea made it sound easier than I’m sure it was. But the outcome was an intriguing mashup where you can type in a word, or several words separated by commas, and his app will thread through the first link in each linked-to article until it reaches Philosophy.
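The core of the idea is simple to sketch. The following is not Winter’s code; it is a toy illustration in which a hand-made dictionary of hypothetical "first links" stands in for Wikipedia, where a real version would fetch and parse live pages:

```python
# A sketch of the first-link idea: hop from article to article via each
# page's first link until reaching "Philosophy", a dead end, or a loop.
def first_link_chain(start, first_link, target="Philosophy", max_hops=100):
    """Follow first links from `start`; return the chain of article titles."""
    chain = [start]
    seen = {start}
    while chain[-1] != target and len(chain) <= max_hops:
        nxt = first_link.get(chain[-1])
        if nxt is None or nxt in seen:  # dead end or loop detected
            break
        chain.append(nxt)
        seen.add(nxt)
    return chain

# A toy link graph (these "first links" are invented for the sketch):
toy_links = {
    "Starbucks": "Coffee",
    "Coffee": "Beverage",
    "Beverage": "Liquid",
    "Liquid": "Philosophy",
}
print(first_link_chain("Starbucks", toy_links))
```

The loop check matters: on the real Wikipedia, some first-link chains cycle rather than converge, which is why any honest test of the claim has to detect repeats.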
Sitting in Starbucks, I looked for logical connections between Starbucks, coffee and caffeine. (click img to enlarge)
What struck me as I played with this data visualization was how this operation meshes with our notions of curriculum and of libraries.
When information is scarce and education is defined by knowledge delivery, then the job of curriculum and of libraries is to package content into subjects and units and Dewey Decimal classifications.
When I watch seemingly unrelated topics threading their way to a common subject and re-examine Boyack, Klavans and Paley’s Map of Science, which shows how various disciplines are interconnected by citations, it seems clear to me how schools and libraries need to become more like learning-literacy playgrounds than managed corrals.
But that’s me!
Click to link to the original Washington Post graphic
In 1986, I was the director of instructional technology in a rural school district in North Carolina, a job that hadn’t existed when I’d started teaching only 10 years earlier. Thanks to researchers at the University of Southern California, we now know something about the state of technology ten years into my career.
For instance, in 1986, 41% of the world’s computer processing power was in pocket calculators. Personal computers made up 33%, with 17% going to servers and mainframes. A whopping 9% powered video game consoles. According to that study, things had changed dramatically by 2007. The share of the world’s processing power residing in personal computers had doubled to 66%, and calculators had disappeared from the picture. Video games accounted for 25% of the processing power, and newcomers, mobile phones and PDAs (which didn’t exist when I was director of technology), held 6% of the world’s computing power. Servers and mainframes dropped to 3%, and supercomputers weighed in at 0.3%.
But the real sign of change is in information. Back in 1986, the world held 2.64 billion gigabytes of information, and 2.62 billion of them resided on analog media (paper, film, audiotape, vinyl and videotape). The growth of information soared over the next 16 years until, in 2002, the amount of digital content exceeded the information we stored with analog technologies.
By 2007, our quantity of information had risen to 294.98 exabytes (nearly 295 billion gigabytes), and just under 19 of those exabytes still resided on analog media. If you took only the paper, film, audiotape and vinyl used to store information today, it would account for only 0.004% of the world’s content. That means that anyone whose schooling and experience has not included the skilled, responsible and practiced use of contemporary information and communication technologies is, for more than 99.6% of the world’s information, practically illiterate!
What it means to be educated has been flipped on its side!
Flickr Image (LHC Tunnel) by Mario Alemi
I’m on another of those wonderful stretches at home catching up with family, trying to catch some movies and mostly spending every spare minute trying to get as much office work done as I can (some writing and work on Citation Machine) with no time to read and blog.
But this post by David Wiley at Iterating Toward Openness was one of those sneaky reads that tricked me into wondering if I’m actually wrong about something. In The LHC and Education, Wiley started with his interest in the Large Hadron Collider. To say that the LHC was incredibly expensive is a sad understatement, and the machine does little more than generate data. But with that data, scientists will map realms of the universe that most of us can’t even imagine, much less see.
Wiley loves data. I love data. But he switched contexts, lamenting that…
The data that we, educators, gather and utilize is all but garbage. What passes for data for practicing educators? An aggregate score in a column in a gradebook. A massive, coarse-grained rolling up of dozens or hundreds of items into a single, collapsed, almost meaningless score. “Test 2: 87.”
It’s one of the reasons that “data driven decision making” doesn’t make my heart flutter the way that it does for others. It’s that, even in the best of situations, the data is scarce, shallow, grainy, and awfully expensive to collect — not to mention that the only people who can make much use of it are the data dudes that school systems have been hiring over the past few years.
Then he totally chafed my soul by suggesting (and rightly so from some points of view) that, “…using technology to deliver content is not improving the effectiveness of education…” but that another way of using tech might. Wiley continues,
I believe there is (another way). I believe it so strongly that for the first time in several years I am opening a new line of research. I believe (and I fully admit that it is only a belief at this point) that using technology to capture, manage, and visualize educational data in support of teacher decision making has the potential to vastly improve the effectiveness of education.
I have written recently (Where Obama is Getting Education “Wrong”) that I think we should be teaching students to capture, manage, and visualize data as a basic working skill. It seems to me that ushering data away to the central office, to be worried over as an educational concern, may actually be detrimental to the learning our students need to be engaged in. Limited resources will cause us to put undue emphasis on what can be easily measured, at the expense of those important skills and knowledge that can’t.
But Wiley compellingly inspired in me a willingness to reconsider, and I found my problem. It isn’t that I object to using data to inform better instructional decisions. It’s that the data is so lousy — scarce, shallow, grainy, and awfully expensive to collect.
What if all of our students were doing all of their content work and content processing digitally? What if all of the information transactions of learning, aside from the most appropriately open conversations, were done with abundant, networked, digital content? That would be an enormously dense, rich, and seductively meaningful mass of data that could be analyzed and visualized in a wide variety of ways. I’d be happy with that — especially if students became partners with us as self-analyzers and self-assessors, mastering their own skills as information artisans.