About the Author: Charlie Rapple

From frontline to baseline: 10 takeaways from UKSG’s One-Day Conference on ‘Open Access Realities’

Last week, I had the privilege of chairing UKSG’s One-Day Conference on ‘Open Access Realities’. There was a full house of about 150 people, with librarians, publishers and other ‘interested parties’ such as subscription agents and technology vendors fairly equally represented – this is not always the case at such events, and reflects UKSG’s unique role in ‘connecting the knowledge community’. The programme was also different to many other open access events that I’ve been to, with a particular focus on the practical realities of implementing OA. Although there are still debates to be had at the ‘cutting edge’ of the movement – for example, in the area of open access to data – it’s also important to step back from the ‘frontline’ and ensure that organisations across the community are keeping up with the ‘baseline’. In my introduction, I suggested that we can compare the progress of OA to Bruce Tuckman’s model for group development: the idea of OA was formed, has been through quite a stage of storming, and we’re now in the process of ‘norming’ – working out the logistics, diversifying its application, taking different routes around roadblocks, trying to pin down a common language, experimenting and developing.
Within that analogy, it’s events like UKSG’s One-Day Conference, with their focus on the practicalities, that will help us reach the stage where OA is comprehensively ‘performing’. I thought it might be helpful to share the points that gave me most food for thought on the day:

  1. Contrary to what many assert, the general public does access and read research content: “If PLOS gets an article on the front page of Reddit, we get 140,000 readers” (Damian Pattinson, editorial director, PLOS)
  2. Dependent as it is on the subscription publishing model (and publishers’ policies), how can green OA be more than a promotional model during a period of transition? (Lars Bjørnshauge, director of European library relations at SPARC Europe, and director of DOAJ)
  3. In order for libraries to be able to transition budgets to fund APCs, they should centralise (nationalise?) procurement and management of the core / majority of content that is common across most institutions (Lars Bjørnshauge again)
  4. Since Finch, there has been more progress on increasing global access to UK research than on increasing UK access to global research. We must be careful not to get too far ahead, and end up bearing a disproportionate amount of the global costs of OA (Michael Jubb, director of the Research Information Network)
  5. For university leaders, open access (to research publications) is only one aspect of a wider trend toward transparency; the Research Sector Transparency Board is also focussed on open data and data security (equally big, if not bigger, issues) (Adam Tickell, provost and vice-principal, University of Birmingham)
  6. OA’s facilitation of data mining helps to identify research misconduct in ways peer review never could (Adam Tickell; Peter Murray-Rust later showed an excellent example of this, where a machine reading an article identified a doctored image that the ‘naked eye’ could not see)
  7. Agile, innovative responses to OA can be better served by a ‘hacker culture’ of small organisations and individuals collaborating than by established organisations where expectations are too high to allow trial and error (extrapolation from points made by Caroline Edwards, lecturer at Birkbeck and director of the Open Library of Humanities)
  8. Small initiatives can also benefit from extensions of the ‘gift culture’ that exists in academia, where academics are used to giving away their work and time for free (Caroline Edwards)
  9. Publishers’ perceived slowness in terms of OA adoption in part reflects that “we’re a service industry based on the needs of researchers” and there isn’t yet a clear grassroots demand to help inform the nature of the OA transition (Vicky Gardner, open access publisher, Taylor & Francis)
  10. Content mining is an important extension of OA rights – publications should be made more machine-readable to maximise their value to ongoing research and application (Peter Murray-Rust, reader in molecular informatics at the University of Cambridge)

Videos of the conference are now available on UKSG’s YouTube channel. Speakers’ slides (where used) are available on the event homepage.

How to create an infographic

Infographics are increasingly being used in many marketing contexts, and are working their way into the scholarly information sector. They enable people to evaluate and digest information visually, making it easier to scan and, for many people, more memorable. Infographics can also liven up an otherwise densely text-based document or website, and express something more than a stock image.

What is an infographic?

There’s no strong consensus as to what, exactly, an infographic is. For some people, it’s a diagram that visually represents some data. For others, it doesn’t even need to involve data but can be some artworked text. When I use the term infographic, I mean a graphic where the structural and design elements used to convey the data are meaningful in themselves, reflecting either the overall topic of the graphic or a metaphor for that topic. For example, I’m a big fan of Kester Mollahan’s “Vital Signs” graphics in The Sunday Times Magazine, such as this one that asks “Who makes the most from the movie industry?”

Who makes the most from the movie industry?
“Vital Signs” infographic by Kester Mollahan, August 2013, The Sunday Times Magazine

Top tips for creating infographics

So, how do you go about creating an infographic? Here’s a summary of the process the TBI team goes through when we’re creating infographics. A key point there is the word “team”. It’s helpful to bring together different skills: creative design capabilities are perfectly complemented in this process by data visualization skills, and by broader communications expertise; if an infographic, more than any other graphic, is the “picture that paints a thousand words”, then you need to be clear what those words are before you start painting.

  1. Digest all the information that forms a background to the graphic, and filter this down to the key points that actually need to be conveyed visually – a common mistake is to cram too much information into the graphic, undermining its ability to convey information clearly and quickly.
  2. Find the story – if this were text, what narrative would you weave around the facts to get them across clearly and memorably? This critical part of the process is often overlooked, meaning you get into the visual stages without having a clear and simple sense of what you are trying to convey. Creating an infographic without having “found your story” is like playing Pictionary and being given entire essays to draw instead of nice, concise keywords.
  3. Decide whether the story lends itself directly to a strong visual (as does the example above) or whether a metaphor might be useful to give added visual punch. Statistics lend themselves more readily to direct illustration; metaphors are helpful when you are trying to convey something more complex and abstract, such as the role and function of a service or system, particularly one that might form an essential but not necessarily sexy – or strongly differentiated – part of organizational infrastructure. Talk to the team selling the product / system / service – they might already employ metaphors that you can build on. In the past, I’ve used a rail network to represent a major system consisting of different modules (lines), each of which has multiple features (stopping points) supporting customer processes that, here and there, intersect (junctions). The visual metaphor was then complemented in the surrounding brochure’s copy with verbal metaphors such as “make the connection”, which in turn aligned with the client’s top-level brand messaging about moving content forward.
  4. Choose an image / shape / visual theme that represents the overall story (or its metaphor) and think about how the components of the story can be conveyed in a way that is meaningful within the overall image or visual metaphor – as in my example above, using the different lines, stopping points and junctions of the overall rail system to represent different aspects of the story.
  5. Draft your graphic, applying relevant brand guidelines (for colour palette, typeface, balance of white space, etc). Pare it back as much as you can – avoid unnecessary visual detail and edit text elements as much as possible. Test it out on people who haven’t been involved in the design process and who aren’t familiar with the concept being conveyed – are they quickly able to understand what the graphic is telling them? If so, then you have passed the infographic test!


SSP round-up – what’s lighting the scholarly publishing touchpaper?

Many of you will have been at the Society for Scholarly Publishing annual conference in San Francisco earlier this month. The conference seems to have taken on a new lease of life in recent years, with a growing number of delegates, and an increasingly substantial program (props to program chairs Jocelyn Dawson and Emilie Delquié for a job well done). Of course, much of the business of a conference is also transacted in the discussions that take place in, around and beyond the conference venue – at the dinners, receptions, and even (thanks to jetlag?) the surprisingly buzzy breakfast meetings. So, like all good conferences, SSP passed in something of a blur, but it’s interesting for me to take a step back a week or so later and think about the key points that have remained with me:

Closing the loop was the theme of Tim O’Reilly‘s opening keynote; I think it’s fair to paraphrase it as “using data and technologies to enrich products / services and make them work better”. He gave a myriad of examples of what he means by this, from Google’s driverless car to distributed peer review of open source software. He laid down the gauntlet for the assembled publishers – how can we reinvent some of the more dated aspects of our ecosystem (think Impact Factors, peer review) to make better use of available data and technologies? While throwing out suggestions such as Wikipedia-style revision control, Tim also made the point that scholarly publishers could make more of the opportunities offered by being closer to their markets than some other (trade) publishers (a drum TBI has been banging for a while with our talks on advocacy and relationship marketing – and indeed, I gave a talk on “getting closer to customers” at SSP the following day). He also picked up on the notion that, as we evolve to become more service-oriented, publishers begin to look more and more like societies – so we have a lot to learn from each other. In short, said Tim, publishers need to take seriously the obligation to reinvent the world of information dissemination.

Much of this reminded me of the talk given by David DeRoure at the recent ORCID–Dryad Symposium on Research Attribution, in which he talked about “the social machine” – in which big data comes together with social technologies (and people’s use of them) to overcome past obstacles in creative, intelligent, joined-up ways. If I’ve understood both speakers correctly, then “the social machines” David talked about are examples of “closing the loop” – and both ideas are very inspirational for publishers. Check out David’s slides on Slideshare (“2066 and all that“).

Having heard Tim O’Reilly open the conference, everything else I said and heard there seemed to be shaped by or interpreted within the context of closing the loop. Talks about standards – such as that given by Ringgold‘s Jay Henry, in which he made a well-supported plea for standards such as ORCID to be better used, even mandated, by publishers – seemed to fit well with this theme. O’Reilly’s reference to Eric Ries’ “minimum viable product” also seemed to capture the zeitgeist, with many publishers seeing this as a way to make the most of (seemingly minimal) product development budgets and pursue more innovative approaches to everything from discoverability to video (O’Reilly – again – referenced a $70m video training business: “Take video seriously,” he said. “Take small units of video very seriously.” – and of course several publishers are, with Elsevier and IOPP among many who have reported significant increases in content usage driven by video abstracts).

Finally, of course, it’s not just publishers who need to innovate, or are innovating – an excellently curated panel session on MOOCs, with a set of speakers from different departments / roles at Stanford University, provided a fascinating insight into what institutions are doing to reinvent themselves and reach wider audiences. I enjoyed hearing that Stanford has appointed a Vice-Provost for Online Learning whose mission, among other things, is to “unleash creativity and innovation in online learning”. An aspiration for all of us, perhaps!

Ten things we’ve learnt in ten years of iTunes

Yesterday was the 10th anniversary of the launch of iTunes, and an interesting article in The Times last week (by Ed Potton; paywalled, sorry) had three music industry insiders commenting on what it has taught us. Yes, the analogy between the music industry and the publishing industry is over-used and under-relevant. But there are still some nicely articulated points in this article that are worth pondering; let’s start with this paragraph:

4/ The album is dead … long live the album:
It’s also commonly said that iTunes celebrates songs at the expense of albums, allowing us to buy individual tracks without purchasing their parent LP. The UK iTunes store recently sold its billionth single but last year album sales fell by 11.2 per cent in the UK. [Stephen] Bass [co-founder of the Moshi Moshi record label] thinks that we sometimes over-eulogise the album, a format that “was born by the accident of a record being able to hold approximately 45 minutes of music”. But, he adds: “The idea of a world in which you can only have singles is quite scary.” [Tim] Dellow [director of Transgressive Records] agrees: “There is still an affection for what the album can represent. The challenge is to develop records that are that cliché: all killer no filler, so people are compelled to buy the whole thing.”

If “singles” and “albums” are exchanged for “articles” and “journal issues”, this provides an interesting avenue of thought on the emerging article economy:

  • do we “over-eulogise” the journal / issue as wrappers of articles?
  • do we find the article economy a bit “scary”?
  • is there still an “affection” for what the journal / issue represents?
  • will the article economy (and, alongside it, the move to author-pays OA) start to filter out some lower-value content such that we move towards “all killer and no filler”?

Meanwhile, the music industry is under-utilising the capabilities of mobile (“‘The functionality and power in a person’s smartphone is just being ignored by iTunes,’ Bass says. We should be able to unfurl an array of extra goodies on our mobile screens, he says, liner notes, photos, links to merchandise and gigs.”) We in publishing also have yet to deliver a truly mobile-first experience for our users, and while I don’t necessarily advocate for “goodies” per se, I do think we need to understand at a much more detailed level not only the mobile behaviours, but also the broader workflows and information needs of our users, in order to better integrate our content and thereby make it more valuable. (Yes, I have banged this drum before.)

The final point that I think has real resonance for us in publishing is the need to avoid walled gardens. Anthony Mullen, senior analyst at Forrester, comments in the article on Apple’s “Ping”, an attempt to integrate social networking and iTunes which closed after two years because it was a walled garden (“a lack of integration with music labels and other social networks and … confined to purchased music”). Whether in terms of our efforts at social media integration, or our engagement with other technologies that support research workflows, publishing too is learning that the publisher- or journal-specific walled garden approach just doesn’t work. Successful examples are the exception rather than the rule, because walled gardens don’t reflect or support typical user discovery and usage behaviours. Recognising this means developing and pursuing brand, product, technology and partnership strategies that centre on users rather than publishers or journals – strengthening our propositions such that we can cede a little bit of control in order to reap the benefits of wider visibility, usage and citations.


UKSG top takeaways: “open or broken”, intelligent textbooks, research stories

Several years ago, I started the UKSG blog to report on the organization’s annual conference, which provides a forum for publishers and librarians to network and share strategic and practical ideas. Between 2006 and 2012, I enthused on the blog about topics including metrics, publishing evolution, innovation, research assessment, user behaviour and workflows. All those topics still fascinate me today (expect more on all of these from my Touchpaper postings) – and they were all covered again at UKSG this year. But this year – shock, horror! – I wasn’t blogging about them; my role for UKSG has changed, and others are carrying the blog torch now. 

This frees me up to take a more reflective look at what I have learned at UKSG, rather than trying to capture it all in realtime for those who can’t attend. So – here on “my” new blog, TBI’s Touchpaper – is my snapshot of another great conference:

1. Let go of “publish or perish”. Accept “open or broken”. 

UK academics’ submissions to REF 2020 (the process by which government evaluates academic institutions) *must* be OA at the point of publication. That is surely the game-changer that will mean, from this point on, academics will be trying to submit their best work to a publication that supports immediate OA. We may not yet have completely worked out the kinks, but events have overtaken us; it’s time to satisfice – adopting an imperfect model, refining it as we go. The lack of additional government funding for article processing charges (APCs) means that this particular mandate will have to be met as much by “green” self-archiving OA as by “gold” version-of-record OA. Both publishers and higher education institutions need to be sure that they have a clear strategy for both. (More from Phil Sykes’ opening plenary)

2. Information resources should be SO much more intelligent.

We were all blown away by student Josh Harding‘s vision of textbooks that “study me as I study them” – using learning analytics to identify a student’s strengths and weaknesses, comparing this to other students, adapting content as a consequence, reminding the student to study (“telling me I’m about to forget something because I haven’t looked back at it since I learned it”) and generally responding to the fact that we learn not just by reading, but also by hearing, speaking, and (inter)acting with information. (The highlight of the conference – Josh’s talk is must-see inspiration for all publishers’ product development and innovation.)

3. Authors need help to tell better stories about their research.

With increased pressure to justify funding, and the need to communicate more effectively with business and the general public, researchers need to be able to highlight what’s interesting about, and demonstrate the impact of, their work. Journal articles are but one node in a disaggregated network of outputs that together make up a picture of their research. That network needs to be more cohesively visible. At the moment, the journal article is the hub, but it doesn’t do a great job of opening up the rest of the network. I think publishers’ futures will be shaped by the extent to which they help academics surface / tell that whole story. (More from Paul Groth and Mike Taylor‘s breakout on altmetrics)