Friday 9 September 2011

Teaching doesn't get more productive over time, so it comes to seem more expensive

This point is not often recognised:  it takes just as long to teach a child (or indeed an adult on a vocational training course, for example) today as it did 20 or 200 years ago.  The same is true of caring for the young or the elderly, or of providing a meal in a restaurant.  Technology does not in itself make these activities more productive, because in these occupations quality is a more critical issue than quantity, and people haven't become inherently better learners since the 19th century.  (This relates to the points I made in my earlier post on craft, about mass production techniques being inappropriate in education.)

But the point needs stressing, because in many other areas of work and production salaries rise in response to productivity gains.  Teachers' salaries, however, like those of other 'efficiency-resistant' professions, 'rise in response not to productivity gains in education (which have thus far proven extremely difficult to achieve)' - or indeed to measure accurately - 'but to higher salaries in other sectors.  This means that staff wages consume an ever larger share of the budget, even if education does not improve.  And because the reasons for this are not immediately apparent to the average voter, dissatisfaction with education budgets may also grow.'  This rather grim point was made 15 years ago by Paul Pierson, in 'The New Politics of the Welfare State', World Politics, 48.2 (1996): 143-179.

There also seems to be an implicit assumption among politicians that introducing new technology inevitably results in increased productivity.  A great deal of research suggests that this is not the case in any clear and straightforward way - see, for example, this post from 'Creative Catalyst': Technology Does Not Make a Classroom Successful, the Teacher Does.

It's important, therefore, for educationists to question this assumption about increased productivity over time, perhaps by asking how such increases would be recognised, exactly where they might be expected to manifest themselves, and exactly how they could be achieved.  And to get one candidate out of the way straight away: new technology does not straightforwardly enable teachers to teach more students effectively.

Tuesday 6 September 2011

Is it true that the digital world values information rather than ideas?

Neal Gabler argues in the New York Times that thinkers and ideas are harder to find in the public sphere than in the past, not because they are not there, but because they are swamped by our hunger for information, no matter how trivial.  He suggests that 'we prefer knowing to thinking because knowing has more immediate value.  It keeps us in the loop, keeps us connected to our friends...ideas are too airy, too impractical, too much work for too little reward'.  See http://www.nytimes.com/2011/08/14/opinion/sunday/the-elusive-big-idea.html?_r=1

Gabler's careful not to argue that people have got less attentive or more stupid, just that there's a lot more to be distracted by.  I like some of Jaron Lanier's suggestions for using the blogosphere in such a way as to promote ideas rather than distraction:
  • Create a website that expresses something about you that won't fit into a template available to you on a social networking site
  • Post a video once in a while that took you one hundred times more time to create than it takes to view
  • Write a blog post that took weeks of reflection (You Are Not a Gadget, Penguin, 2011)
These suggestions point to the underlying problem: when technology makes writing and publishing so easy and immediate, there is a danger that we assume it has to be done quickly - without reflection, without taking time.  'Never mind the content, feel the speed we got it to you!'   This links nicely to my arguments in earlier posts about the craft attitude, and the importance of time in the development of skills and expertise, and in the development and maturation of ideas too.

The 'modern' counter-argument is that the web allows ideas and products to be developed collectively, by groups of people who may not even know each other, and that, apart from anything else, innovations produced in this way will have fewer bugs because more people have contributed to their development.  This is the argument made by Eric Raymond more than ten years ago in his important paper The Cathedral and the Bazaar (http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/), comparing the production processes of the Windows and Linux operating systems.  A key element of the Linux process, according to Raymond, is the regular publication of updated versions of the code, so that innovations in the design are available to be checked; this didn't happen with Windows because, of course, the Windows production process was and is secret, so as to protect Microsoft's profits.  In this way, the web's ability to facilitate collective design and production is seen as intrinsically better than traditional, individually driven processes of innovation.

But this argument doesn't address the issue of time and maturation as key elements of quality processes: whether collectively or individually produced, successful innovations are usually the product of processes that take time.  Lanier fulminates against the so-called 'wisdom of crowds' - a concept which may embody democracy, but doesn't at all guarantee quality or even factual accuracy, though it may guarantee a kind of bland acceptability.  Is this what we want?  Lots of stuff that's 'not bad', or 'will do'?
My view is in sympathy with Gabler's, but I would argue that even if the cultural sphere is dominated by trivialities, there's no reason for ideas not to thrive. There will always be discerning voyagers on the web, looking for material that is different from the noisy stuff that's on all the front pages.  It's up to those who think ideas and arguments are important to get active and contribute, and also to signal themselves clearly, so as to be found more easily by those looking.