Archive for the ‘technology’ category

The rise of the ‘smart university’?

January 15, 2019

A few years ago for this blog, I interviewed the then Irish Minister for Education and Science, Ruairi Quinn. He was one of those relatively rare examples of an education minister with a real understanding of and sympathy for higher education, and indeed a set of civilised and cultured values.

However, at the time he was trying to think through what needed to change in the university system, and he offered the following thought. If one were to take an early 20th century surgeon, he suggested, and transfer him to a 21st century operating theatre, the only useful things he could do would be to mop the patient's brow and sweep the floor. Take a professor from that era and put him in a 21st century lecture theatre, and he would mostly feel at home and get on with the lecture. So what had happened, or not happened, that made universities so immune to the passage of time?

One could of course argue, and indeed argue emphatically, with his premise. Most 21st century university lecture venues contain all sorts of new technology, not least the screen with its egregious PowerPoint slides. Our 20th century academic would have been astonished at, and probably not that pleased with, all the paperwork and audit trails and so forth. He (and it would be 'he') would have noticed a much better (though not perfect) gender balance. But then again, if in his home era he had just purchased and read F.M. Cornford's 1908 book, Microcosmographia Academica, he might well have found that much of its satire on academic life was totally apposite a hundred years later. The argument might therefore be that the technology and bureaucracy and demographics had changed, but the basic methodology and the academic outlook had not; or something like that.

It is in this context that I wonder about concepts such as the 'smart university', which has been explored in recent literature such as the book Smart Education and e-Learning 2016, by Vladimir Uskov et al. (Springer). The authors explore the concept of the smart university and suggest that it must have a number of key elements to qualify as such, these being adaptation, sensing, inferring, self-learning, anticipation, self-organisation and configuration, restructuring and recovery. They see the new university as being technology-driven, with far fewer boundaries between branches of scholarship, reflected also in more fluid structures.

As we look into the higher education future, we are bound to experience some tension between a defence of intellectual integrity and intellectual autonomy on the one hand, and a system that is driven by new concepts of knowledge acquisition and processing on the other. What impact will this have, and what are the implications for higher education regulation? What will it do to the student experience, and even more importantly, to graduates' understanding of what they have experienced and acquired in their studies? Perhaps of equal importance, can this democratise knowledge (and undermine the value of elite networks), or will it support societal authoritarianism?

The future of universities is, for all sorts of reasons, one of the most important topics for society in the coming era.

Going electric

November 27, 2018

Nearly three months ago, I made a major change: I bought an electric car. Not a hybrid, but a fully electric vehicle without an internal combustion engine and therefore without any fuel tank. In some ways the change might not seem massive. I still put my foot on the accelerator pedal to get moving, or on the brake to stop. The steering wheel moves the car to the left or to the right. I indicate when I intend to turn. And so forth.

And yet, this is a fundamentally different experience. The car moves more or less noiselessly. It is heavily computerised, and almost every control is operated not by a lever or button, but on the touchscreen. You ignore filling stations, but spend some time planning your journey (if it’s a longer one) so that you know where you will charge the car. It feels like being part of something quite revolutionary, even when so much of it is the same.

And yet, is this the future, or just a staging post to the real thing? Will we soon be in an era in which we won’t drive our own cars at all any more, but call an autonomous self-driving vehicle that takes us where we want to go and then moves off somewhere else? Or indeed will we still take it for granted at all that we can travel at will from A to B?

Transport habits can change with extraordinary speed once a tipping point is reached. A well-known image shows New York's 5th Avenue in 1900 and again in 1913: in those 13 years the traffic changed from almost entirely horse-drawn to entirely motorised. What will happen between 2018 and 2030 is not at all clear, but there is every likelihood of fundamental change; and there should be, not least because we need to stop urban air pollution.

So maybe I am taking part in something important. Or maybe it is just a very minor step towards something that will, in a short space of time, be quite different. We’ll see.

Transport and social mobility

August 6, 2018

As we head into the next wave of technology-driven social and economic change, it is worth asking whether we always focus on the most important elements of such change. Looking at the impact of previous and current industrial revolutions, it seems to me that the key drivers of change have always been information and mobility. The printing press opened the previously closed world of scholarship and learning to a much wider social group – potentially to everyone – while the railways introduced physical mobility, thereby effectively ending the feudal system. In particular, mass transport drove the growth of urbanisation.

As we survey the momentum of change associated with big data, robotics and automation, we sometimes forget that transport and mobility will also be key drivers of social and economic change in this next industrial revolution. But government planners are remarkably unimaginative about this: generally the planning revolves around faster trains and bigger airport runways – essentially improvements within existing frameworks of transport infrastructure. Other preoccupations are, understandably, focused on technology to reduce or remove polluting emissions.

But if the 18th and 19th century railways enabled people to make more autonomous choices about where they would live and work, and if that was a key to economic re-positioning at the time, what will be the equivalent in the next phase of human development? To get to the right destination, we need to do more than just tweak or slightly modernise the systems we have now. We need to ask questions of social policy, about what kind of mobility will enhance the quality of life and the generation of fairly distributed wealth, and how that can be delivered. More importantly, we need to decide what social and technological research should start now to make that possible in the near future.

You say you want a revolution…

May 7, 2018

Anyone following contemporary debates about the future of work and civilisation will, sooner or later (and very probably sooner), be listening to comments about the 'Fourth Industrial Revolution'. It's everywhere, and while its exact meaning may not always be clear, what is constantly repeated is that it is happening now and is changing absolutely everything. Everything is being digitised, brought online, automated, and subjugated to robotics. Your job and mine will go; we will be replaced by machines that will not only do the job better, but will also understand better than we can how the job needs to evolve. The jobs we may apply for 10 years from now don't on the whole exist yet, so we can't properly prepare for them, and the best we can do is acquire every possible transferable skill and find out what will still need real human interaction; unless robots get better than us at that too. And watch that toaster: it's online, smart, and may be planning to do away with you so it can watch daytime TV rather than bother with your nutrition.

That sort of thing.

As with everything else, the best thing to do when you encounter breathless hype is to take a step back and think about what you are being told. There is no doubt that the digital world is moving at a fast pace and is changing how we do things: how we communicate, how we analyse, how we adapt our technology to improve safety and efficiency, how we access news. The ‘internet of things’ is creating smart gadgets and appliances. Big data is yielding insights and solutions that eluded us in the past.

But the use of science and technology to effect social and industrial change is not new, nor are we now witnessing profound and speedy change for the first time in history. The development of the printing press and the use of paper to allow high-volume dissemination of its outputs probably produced a bigger social upheaval than anything we are seeing today: suddenly information and knowledge were no longer the private property of the elite, and absolutely everything changed. The (first) Industrial Revolution totally changed the way we live and work, in particular by opening up mass transport and urbanisation, putting an end to agrarian societies with feudal structures, and ushering in the age of capitalism with its attendant consequences, good and bad. The two world wars of the 20th century changed global politics beyond recognition. Contraception changed social interaction and opened up the workforce.

It may be interesting to observe that while a typical person, not from any social elite, would have had a fundamentally different life in the 19th century from what a similar person might have had 100 years earlier, the life we live now is not so fundamentally different from that experienced in the post-war 20th century. The technology has changed and allows us to do things that we couldn’t have done before or which would have been much more laborious, but socially and culturally our experiences are still recognisably similar. What is it that makes us think that the next few years will be so totally different?

We have always been bad at predicting the future, particularly where technology is involved. This is in part because we sometimes predict the future with the same kind of sensibility we apply to science fiction, including the desire to get a thrill from something really horrible. So when Elon Musk makes our flesh creep at the prospect of the spread of malignant artificial intelligence, he is tapping into the same fascination that gave us the Terminator movie franchise a couple of decades earlier. And to be honest, I've got sick of the statement (by now a real cliché) that 40% (or whatever your preferred percentage is) of jobs in demand in 10 years' time don't exist today. Well, maybe they don't, but history doesn't support this proposition: what job known to you now didn't exist 10 years ago? Jobs may change in what they demand of those doing them, but that is a natural process of evolution.

This blog post is not an invitation to go into denial about the pace of change today. There is of course a huge technological, digital, fast-paced evolution taking place. Google, Amazon, Uber, Airbnb, Tesla – even the possibly departed Cambridge Analytica – are changing all sorts of things in our lives. But how we adapt to that, and how we reform society to contain the risks, are issues to be debated and decided in a sober frame of mind. In that process, we do well to look at some of the social fundamentals, such as how we can protect the integrity of truth in the face of all-out assaults by those wanting to manipulate us, and perhaps worry a little less about what our toaster might get up to. Even if the latter is more fun, in a Hitchcockian sort of way.

EdTech: something so important nobody is talking about it. Yet.

April 9, 2018

A couple of years ago I suggested in an interview that university education had, in its basic methodology, hardly changed since the Middle Ages. I was of course being deliberately provocative and was exaggerating my argument, but nevertheless I did believe that I was making a valid point. Over the next few days I was met with howls of indignation, some of them in public and in print, from colleagues in other institutions who said my assertions were ludicrous; and who listed the zillions of things that had changed in universities since Thomas Aquinas had paced the lecture rooms of the University of Paris in 1250. Certainly he wasn't holding an iPad as he paced, and he never had to deal with the attentions of the Quality Assurance Agency. He might even have been quite unable to explain the nature and purpose of a MOOC. You get the idea.

None of that of course was my point, and me being me, I probably expressed myself badly. I certainly wasn’t out to insult anyone, as I have nothing but respect for those who labour in the vineyards of academia, and who do not get the recognition they deserve. What I was trying to convey was that we were using the same pedagogical understanding of our educational process as in the Middle Ages, and that while we may have adopted various new methods of communication and technology, these did not change our understanding of what was involved in teaching and learning. I don’t believe that even the adoption of ‘learning outcomes’ changes the game fundamentally.

So what we have, mostly, is a new technological portfolio sitting on top of traditional pedagogy. But because the technology is now so ground-breakingly different, it is becoming more and more important to have a proper insight into how disruptive this can be. The thinking that has emerged so far, usually contained under the heading of EdTech (which however covers education at all levels, not just higher education), has tended to be driven more by industry than by academia. More interestingly, it has become an increasingly fertile terrain for entrepreneurs and start-ups. Now interest by governments is emerging, and with it the potential for some funding; though it is not at all clear yet where that funding will actually go.

It has been a recurrent theme of this blog that we need much deeper thinking on pedagogy. This is as true in EdTech as anywhere else; but it should be a call to universities to take that on and accept the potential benefits of technology that may disrupt our traditional understanding of education; and to own the policy ideas that underpin it.

Thinking about the digital economy

December 5, 2017

Some years ago when I was spending a morning in a somewhat obscure library in London looking for materials relevant to the development of a British trade union in the 19th century, I came across a sermon delivered shortly after 1800 in a London church. The clergyman in question was most exercised by what we would now call the impact of new technology. He feared that humanity’s ability to perform ‘miracles’, which should be the sole preserve of God, would create a materialistic society in which a very small number of people would reap the rewards of science and engineering, while the majority would become redundant and face destitution.

I was reminded of this recently when the US company Boston Dynamics, a spin-off from MIT, unveiled a humanoid robot that could jump up and down on various obstacles and, finally, do a back somersault. You can see the whole spectacle here. This display quickly led to a whole tsunami of online anguish about how we are all doomed. If a robot could successfully mimic an athlete, then humans might as well all just go home and wait to be put out of our misery by the new artificial master race. You get the idea.

As for me, I thought the Boston Dynamics machine was pretty smart engineering, but to be honest I was less captivated by it than by another recent item of news: a group of engineering researchers helped by an economist were able to design a robot which delivered a lecture to economics students and successfully answered questions from them at the end. Apparently the robot answered questions with stuff like ‘Well, this is a hotly contested point, but I tend myself to support the view that…’

Today, lots of people are talking about the digital economy and what it may involve and what it may do to us. The science and engineering of it all is of course important, but it may be as important for us to come to grips with what it all means: how it affects our understanding of humanity and human purpose. This isn’t a debate about automation; that’s a debate we’ve been having for 250 years, and to be honest there aren’t many new things to say. It’s a debate about who we are, and how we will harness human ingenuity, and how we can ensure that we evolve successfully to engage that ingenuity with the new means at our disposal.

Screen them out?

November 28, 2017

One morning in 1986 I walked into a classroom in Trinity College Dublin to deliver one of my scintillating lectures. Just as I was about to start, the lecture theatre door opened and a student walked in carrying – no, I'll say lugging – what turned out to be a so-called 'portable computer'. It was 'portable' in the sense that someone was carrying it, but if I remember correctly not without a lot of physical effort and perspiration. He then settled down, sort of, on a seat, and what ensued was a search for a socket so he could fire up the machine. This involved carrying the plug, which was at the end of a pleasingly long cable, to the not-quite-nearest wall where he had identified the presence of a source of power. He then switched on the device (though not before tripping over the cable on his way back). The device, we soon discovered, had an industrial-quality fan that managed to drown out various other noises coming from the floppy disk drive (5 1/4 inch, of course). Thus settled in, and visibly proud of this epoch-marking technological marvel, the student turned encouragingly to me to await my pearls of wisdom; and as I delivered them, the clicking of his keyboard was almost audible above the storm-force fan.

Yes, dear reader, you could say that was distracting. But it was also invigorating, as we all had a ringside seat as the new digital era was ushered in. And how far we have come since. My sister has just bought a laptop which, as far as I can make out, would fit easily into a modest document folder and which makes no noise whatsoever unless specifically asked to perform in this way. And of course you and I have all sorts of technology available to us, from phones that would put a 1986 mainframe computer to shame to tablets on which you can read the most extensive textbook while simultaneously listening to Taylor Swift. And all of these devices are in every classroom.

But not to everyone’s satisfaction. Susan Dynarski, Professor of Education, Public Policy and Economics at the University of Michigan, has had quite enough of laptops:

‘The best evidence available now suggests that students should avoid laptops during lectures and just pick up their pens.’

She has concluded this on the basis of research carried out in two Canadian universities and, curiously, the United States Military Academy. This research, in summary, suggested that laptops stop students from learning effectively: not just the students using them, but anyone within a reasonable range. Other studies appear to support this conclusion.

It seems obvious enough to me that my student in 1986 was himself distracted and had a distracting effect on others, as would be the case if, say, someone entered a classroom on a motorbike. But the rest of this seems to me to be more arguable. What matters much more than the technology or the device is the attitude of the teacher and the engagement of the student. Technology is good if used well and bad if used badly. Achieving the former (beneficial) effect depends on the skill of the teacher and the approach to pedagogy. I suspect that the analysis of educational technology needs a more elaborate consideration of what may constitute good practice. And by the way, during the same lecture in 1986 a student's pen broke while he was writing, sending ink through the air and onto his neighbour's clothes. That was even more distracting, not least because his neighbour reacted slightly violently. Maybe they shouldn't reach for their pens, either.

The social academy?

April 3, 2017

You’re all very young, so you’ve probably never even heard of Bebo. But actually, Bebo was the real thing in social networking before Facebook got going properly.

Anyway, I first came across Bebo (and social networking) in 2006, when a colleague in my then university asked to see me and rather urgently implored me to ban access to the website, particularly in the library, but also everywhere else. Students were, he told me, logging in to it at all times and were neglecting their studies. Some could even be seen looking at Bebo during lectures (on their laptops; no real smartphones were in use back then) and inviting others to look over their shoulders. The world as we knew it was about to end.

It was not just my colleague who was concerned. A few weeks later I received an email from a student, complaining that she could not get access to computer workstations in the library because other students were on Bebo and were preventing her from using them for her studies.

Nevertheless, I decided I would join Bebo, which I did that year. As I became aware of them, I also joined Facebook in 2008, and Twitter in the same year. As some readers will know, I am a regular twitterer, though a more restrained user of Facebook. I occasionally use WhatsApp and Instagram.

Fast forward to the current decade, and Bebo has been bought and sold and bankrupted and re-released as something entirely different; but Facebook and Twitter are still very much there. In universities in the meantime the discussion is not about whether or how to ban social networking on campus, but how and whether to include it in the academy’s armoury. This has become even more important as students have tended to move away from other forms of electronic communication (including email).

An interesting study carried out at the University of Glasgow revealed that 68 per cent of students think social media can enhance their learning experience; though it also concluded that inexpert use of social media can make it all go badly wrong. Overall, it is hard to ignore social media – and universities cannot operate in an environment that is divorced from the experience of their students. Back in the early 1960s I learned to write with a nib pen that you had to dip in an inkwell every few words. We don't use that now, nor should we expect students to use the technological equivalent (for them) of the inkwell.

Universities are generally taking a more direct interest in social media as marketing tools. But the more interesting potential lies in pedagogy, not least because social media, as the name implies, provide a social experience which can be an enabler for learning collaboration. Some interesting work on this has been done by Dr Fiona Handley at the University of Brighton.

The significance of social media in higher education is not that universities can invade their students’ social spaces, but that they can adopt the look and feel, and the potential for learning interaction, that social networking platforms provide. That is the place to start.

Thumbs down for educational technology?

October 24, 2016

It is exactly 30 years ago today that I took delivery of my first personal computer. It was an Apple Macintosh, and it had an incredible 1 megabyte of RAM and, er, no hard drive. A week later I produced my first computer-generated presentation for my industrial relations class, which however I had to print out on acetates in order to display the slides on an overhead projector. For me, technology-enabled education had begun. Colleagues looked on in admiration.

We have of course come a long way since then. Nowadays every higher education curriculum in any institution will feature a truckload of technology-enabled learning, the assessment of which is then crunched by various data programs to produce good-looking spreadsheets to please any board of examiners.

But is it adding value to the learning experience? No, according to the results of a recent survey conducted by Inside Higher Ed. Or rather, not necessarily. Academics seem to value the opportunities for innovation provided by technology, but are sceptical as to whether the accumulated data gathered by IT systems is being used appropriately; or whether the quality of the learning experience is being much enhanced. They suspect that technology is deployed more to impress those evaluating institutions than to help students.

We must not be Luddites: educational technology is here to stay. But it must be used properly, and for the right reasons. This must mean in particular that the design of technology must be driven by academics rather than administrators, and must target the student experience and pedagogy rather than efficiency of processes. And there must be a clear understanding of how standards are affected – for good or bad – by online methods.

Talking points: Keeping watch

August 8, 2015

Is the Apple Watch a major success or has the company made a mistake? Those assessing this particular product don't seem to be able to make up their minds, or agree. Recent reports suggest that Apple may have got it right again. If so, it is ironic that Apple may be about to revive the fortunes of a particular accessory – the watch – that its other products had been busily killing. A group of students told me recently that they would not wear watches because their iPhones told them the time; watches were superfluous and awkward.

But of course the Apple Watch is more than a chronograph. It puts a number of elements of my smartphone on to my wrist, and it monitors my lifestyle and my health. The information it gathers can of course do more than amuse me; I suspect insurers would love to have it.

I have an Apple Watch, having been given it as a present. I like it. And I wonder what it tells us about times yet to come.