Monday, 28 April 2008

Bouncing back to social intelligence

I know it's only been a week or two since I last wrote about enterprise social intelligence, but I just had a sneak preview of the latest release from Trampoline Systems. This is the newest offering in the firm's ongoing quest to provide organisations with the tools to extract and use the latent knowledge of their workforce. And it's called Sonar Dashboard.



Following the firm's earlier releases, Sonar Server – the technology which effectively sucks all the information out of different corporate systems – and Flightdeck – the management diagnostics tool – Dashboard is what end users actually get to play with.



Chief executive Charles Armstrong told me they've gone for the look and feel of a social networking site to cut user training to virtually zero. And if you've been on Facebook or LinkedIn you'll have absolutely no problems using it. The main page looks a bit like a Facebook profile page, with regular updates on what all your contacts are currently working on.



Another page allows you to view the main topics a particular contact has been working on – the ones in larger and bolder type being those they've been involved in more often – and who they've been talking to. There's also an area where you can input information about yourself, upload a CV or connect to your LinkedIn profile. As is Trampoline's wont, relationships between contacts can be viewed in an easy-to-digest graphical format.
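The weighted-topic display described above is essentially a tag cloud: topics a contact touches most often are rendered larger. As a rough illustration of how such sizing might work – the function name and size bands below are invented for this sketch, not Trampoline's actual algorithm:

```python
from collections import Counter

def topic_weights(mentions, sizes=("small", "medium", "large")):
    """Map each topic to a display size based on how often it occurs.

    `mentions` is a flat list of topic labels, e.g. one entry per email
    or document in which the topic was detected.
    """
    counts = Counter(mentions)
    if not counts:
        return {}
    top = max(counts.values())
    weights = {}
    for topic, n in counts.items():
        # Scale each count into one of the available size bands,
        # so the most frequent topic always lands in the largest band.
        band = int((n / top) * (len(sizes) - 1))
        weights[topic] = sizes[band]
    return weights

# Sample data: "mergers" appears most, so it would render largest.
mentions = ["mergers", "mergers", "mergers", "audit", "audit", "tax"]
print(topic_weights(mentions))
```

The real system would presumably feed this from the extracted corporate data rather than a hand-built list, but the proportional-sizing idea is the same.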



Privacy is also a key factor here. Users receive an email every two weeks listing the information that has been collected about the projects they're working on, and they can then choose which bits they want hidden from their contacts. In these days of uber-sensitivity about privacy and surreptitious data mining, it's an important part of the Trampoline jigsaw.



I guess the point of this release is that the vendor is trying to avoid the mistakes of the unwieldy knowledge management systems of the 90s by enabling the automatic extraction and updating of individuals' key information. In other words it does all the heavy lifting, which, as Armstrong says, is why a consumer-type social networking tool is no good for this purpose: it depends wholly on individuals to update their details themselves.


Friday, 25 April 2008

Can the growth/sustainability circle be squared?

What does 'sustainability' mean to you? The more time I spend with IT vendors, the more I wonder if they see it as something for other people.


To quote from the frequently-cited Brundtland Report: "development that meets the needs of the present without compromising the ability of future generations to meet their own needs."


Lots of companies pay lip service to that but, somehow, it always results in them making more things for us to buy. It's as if they've had trouble getting to grips with the 'needs' part of 'meet the needs of the present'. Do we ever ask what we really 'need' in order to stay afloat in today's complicated world?


Of course, IT does have the potential to address the 98% of carbon emissions (say) that are not attributable to IT operations. Many manufacturers tell a good tale. They speak of reuse of components in future products, of their adherence to this and that regulation and, especially, of how the application of more IT will help cut someone else's environmental footprint.


In the end, though, there's no hiding the fact that they're still hooked on growth. Understandable, by the way. And while they could possibly achieve this through services, the global vendors see hundreds of millions of souls in developing countries as fine prospects for more 'things', even if they are made of fewer and more easily recycled materials.


Perhaps, by getting in early enough with IT equipment as a travel or printing substitute, for example, these vendors can help the developing countries avoid some of the excesses of the West. Frankly, I'm not optimistic, but I'd love to be proved wrong.

Thursday, 24 April 2008

Library & Information Show, e-Book update

There was certainly a buzz around the NEC's Library & Information Show (23–24 April 2008). The topic of technology and e-books was one of the main concerns, writes Peter Williams.


This interest was reflected in the presentation by Caren Milloy, e-books project manager at JISC Collections, on JISC's national e-books observatory project. According to Milloy, interest in e-books in higher and further education extends beyond reference books to include textbooks. One of the problems is that the demand for e-books is hidden, so publishers and aggregators have been slow to meet it, and as a result the development of coherent and workable business models has been equally slow. One key frustration for information professionals is knowing which e-books are actually out there: there is no central site providing a cohesive list.


JISC’s research shows that librarians in HE and FE are leading the demand for e-books, but that is a reflection of the demand they are hearing from students and, to a lesser extent, from teachers. The research tested the demand for e-books in business, engineering, medicine and media studies – a deliberately eclectic choice to see if, for instance, medical students differ from media students in their use of e-books.


Looking around the LIS it is clear that with e-books the market is responding. For instance, Swets is due to launch the latest addition to its SwetsWise platform, SwetsWise ERM, which helps information professionals keep track of the licences they have – and that includes keeping tabs on the right to browse e-books.


Information professionals’ utopia for e-books (according to JISC research) includes elements such as concurrent usage, free archives, common standards, great integration with virtual learning environments (VLEs) and great metadata which encompasses not only texts but multimedia with open access. Simple, eh? According to JISC, e-books are a maturing market. That may be the case, but from the discussions at LIS there is still some way to go in widespread knowledge, understanding and adoption.

Wednesday, 23 April 2008

Literacy Project encourages greater collaboration

According to a Google press release that landed in my inbox this morning, not only is 23rd April St George’s day, it is also the date on which both Miguel de Cervantes and William Shakespeare died in 1616.


Somewhat stretching this tenuous coincidence, Google announced that in honour of such formidable writers, it was promoting ‘innovative literacy and reading-related projects’ through the Literacy Project initiative. Partners include the World Book Day organisation, Lit Cam and the UNESCO Institute for Lifelong Learning.


For those of you who haven’t heard of it before, the Literacy Project is all about using the internet to connect likeminded literacy-promoting organisations so they can collaborate and communicate with each other. Today it gains some new tools to assist in exactly those kinds of initiatives. For example, the project’s Literacy Map has been updated so organisations can post news on what they are working on, as well as talk to others through the Literacy Project forum.


What you might find of interest are the academic papers that explore ways of improving literacy. Within the literacy and technology section there is a whole host of grey literature in Google Scholar. This mass of potentially valuable (but largely unpublished) information can come from anything organisations produce – booklets, presentations and reports, to name but a few. Information literacy is well served here.


If you will now permit me to insert my own tenuous link, May’s issue of IWR will be examining the possibilities of using grey literature. There will be pointers on where to go and how to get that valuable but obscure information. It’s an underutilised resource and there is an awful lot of information out there ripe for the picking.


In the meantime, you can find more on the Literacy Project here.

Monday, 21 April 2008

Data, identity and Microsoft

Kim Cameron, Microsoft’s chief identity architect, was over in the UK last week, talking to government, analysts, internal folk … and me. Most of his time is currently spent on the massive CardSpace project which Microsoft hopes will have the same effect as putting chip and pin on the internet – basically it is being touted as the answer to our online identity verification woes.



Up until now, solutions to the problems of online fraud and even enterprise identity management have been less than perfect. One-time passcode-generating tokens work OK, but there comes a point when your “fistful of dongles”, as Cameron calls them, becomes too unwieldy. Cameron’s answer revolves around the “Identity Metasystem”, his vision for the underlying architecture on which CardSpace is built. It is cross-technology and cross-provider, and as such probably stands the best chance of living up to its own hype.



It involves interaction between three different parties: identity providers, such as credit card companies, government, or even the individual consumer/web site visitor; the relying parties, which require said identities, such as a web site; and subjects, which could be any party about which claims are made.



It can get rather complicated from here, but basically the CardSpace software stores references to a user’s digital identity and then presents them as so-called Information Cards. When a user visits a site that supports InfoCards, they will then be presented with the CardSpace UI from which they can select the appropriate card. Once chosen, the CardSpace software will contact that identity’s issuer to obtain a signed token containing all the relevant information. It’s all about trying to borrow concepts of trust and verification from the physical world and make it all as user friendly as possible.
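The flow between the three parties can be sketched in a few lines. Everything below is an illustrative mock-up of the pattern described – the class names, the wallet structure and the HMAC signing are my own stand-ins, not Microsoft's CardSpace API or its actual cryptography:

```python
import hashlib
import hmac
import json

class IdentityProvider:
    """Issues signed tokens containing identity claims (e.g. a bank)."""
    def __init__(self, name, secret):
        self.name = name
        self._secret = secret  # signing key, vastly simplified here

    def issue_token(self, claims):
        # Sign the claims so the relying party can verify who vouched for them.
        payload = json.dumps(claims, sort_keys=True)
        signature = hmac.new(self._secret, payload.encode(), hashlib.sha256).hexdigest()
        return {"issuer": self.name, "claims": claims, "signature": signature}

class RelyingParty:
    """A web site that requires certain identity claims before granting access."""
    def __init__(self, required_claims):
        self.required_claims = required_claims

def select_card(wallet, required_claims):
    """The user-agent step: pick a stored card whose issuer can satisfy the request."""
    for card in wallet:
        if required_claims <= set(card["supported_claims"]):
            return card
    return None

# Usage: a site asks for proof of age; the selector finds a matching card,
# then contacts that card's issuer for a freshly signed token.
bank = IdentityProvider("ExampleBank", b"demo-key")
site = RelyingParty(required_claims={"over_18"})
wallet = [{"issuer": bank, "supported_claims": ["over_18", "name"]}]

card = select_card(wallet, site.required_claims)
token = card["issuer"].issue_token({"over_18": True})
```

Note that the card itself holds only a reference to the identity; the signed token comes fresh from the issuer at the moment of use, which is the point of the design described above.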



There are obviously serious data protection issues to be faced here too – as Cameron observed, in the past privacy has often had to be compromised to ensure security. It’s an issue the team is well aware of: “If it’s a spy machine then [this project] goes nowhere,” observed Cameron. Well, thanks to some clever algorithms – isn’t it always about algorithms these days? – they’re able to avoid that trade-off. Don’t ask me how, but it will be interesting to see if CardSpace succeeds where all other verification technologies have not.



IT services group Eduserv was represented at the meeting too, for the work the company is doing with CardSpace. It announced just last week that ten local councils are trialling the software – given the number of data-loss incidents in recent times, it’s reassuring that local councils are looking at innovative ways to tighten their security practices and ensure the secure sharing of, and access to, data. Practical, real-world applications like this will be vital in the coming months and years to hone the still-juvenile technology and the processes behind it, and to win over the sceptics.


Thursday, 17 April 2008

Can computers really extract knowledge?

Knowledge management is theoretically impossible. Real knowledge sits between your ears, unseen until it is needed. As happened today: someone mentioned Battenberg cake to me and all sorts of long-forgotten knowledge about tea parties at my grandma's surfaced.


Not exactly a momentous bit of knowledge, but I joined a conversation on the subject on Facebook of all places. (The dyes in the cake are, apparently, dangerous.)


Recently, I visited a company that specialises in testing staff knowledge through questionnaires. The idea is to find out what an employee knows about their job and to determine whether there are any gaps that need filling or good results that need exploiting.


Boards of very large companies have rather taken to this system, a sort of asset register of the staff and their expected performance on the job. They can use it to correct weaknesses or develop strengths. And, should a crisis occur in a particular department, they can quickly pull up staff information to help them figure out what went wrong.


Test results can also be measured against averaged results for other organisations in the same industry - a sort of performance benchmark.
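That benchmark comparison is simple arithmetic. A minimal sketch, with invented sample scores and an invented industry average (the vendor's actual methodology isn't described):

```python
from statistics import mean

def benchmark(department_scores, industry_average):
    """Compare a department's mean test score to an industry average."""
    dept_mean = mean(department_scores)
    return {
        "department_mean": dept_mean,
        "industry_average": industry_average,
        # Positive gap = above the industry average, negative = below.
        "gap": round(dept_mean - industry_average, 2),
    }

# Four hypothetical staff scores against a hypothetical sector average of 70.
result = benchmark([72, 65, 80, 59], industry_average=70.0)
print(result)
```

In practice the "industry average" side would itself be an aggregate over many client organisations, which is where the value of such a service lies.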


It all sounds terrific in theory. The underpinning technology is fundamentally sound. But, as always, the acid test is in the implementation. And that involves humans.


By the time the strategy and raw information have found their way to the question designers, all intimacy with the subject matter will have been squeezed out. It's like speaking a foreign language: it doesn't matter how perfect your accent, a native will know you are a foreigner within a very short time.


I've just read a blog post by a member of staff at the receiving end of an assessment run by this particular system. Slightly tidied up and anonymised, he said, "The people who designed the questions and answers knew nothing about my line of work. The end result has been questions that don't make sense or which are so ambiguous that one needs to be a professor of English to understand them".


You can see why I've not mentioned the company name. I will return to it when I've tried the system myself and dug a little deeper into the particular circumstances around the above comment. But it seems clear that one important step was forgotten: did they try the questionnaires out on people who understood the subject before letting them out into the wild?

Tuesday, 15 April 2008

Understanding e-books and information behaviours

A typically well-attended e-book seminar on day one of the London Book Fair (LBF) raised some pointed questions about e-book growth. Speakers this year were familiar faces such as David Nicholas, Director of the School of Library, Archive and Information Studies at University College London (UCL), Sage Publishing’s Rolf Janke, and Mark Carden from MyiLibrary – more on that later.


When I blogged on last year’s LBF e-book seminar, the talk was of tipping points and a big increase in e-book activity. Over the last 12 months we have certainly seen that from publishers, who continue to march on with a plethora of digitisation initiatives and deals. Then there is the published research from the Centre for Information Behaviour and the Evaluation of Research (CIBER), in which Nicholas played a key part. This came in the form of the joint British Library/JISC report on “Information behaviour of the researcher of the future”.


These changing research behaviours include horizontal, rather than vertical, methods of searching by the ‘Google generation’, and viewing, rather than reading, onscreen sources of information. Both should be considered when thinking about e-books, learning and the library.


Nicholas opened his presentation by discussing the JISC National e-Books Observatory Survey, one of the biggest studies of its kind in the world. This ongoing research has seen the placement of a range of e-textbooks into 120 UK universities. Once the study has run for two years, expect the wealth of e-book user information to yield some interesting findings.


Nicholas made some pretty honest points on what he thought needed to be considered. Users want ‘quick information wins’; they want to ‘bounce from one source to the other’ and ‘power browse’. e-Books, he said, “appeal to people wanting a bit of a book – not all of it – and everyone is just waking up to this”.


While e-books are supposed to circumvent the traditional logistic problems of supplying each student with their core textbooks, Nicholas asked what happens when students get all their content this way. What does that mean for the library? Will they even need to come to the library anymore?


Before I attempt to answer that, I should point out that the publishers’ presentations, by both Janke and Carden, had something to offer on this dilemma, albeit from their point of view.


Janke admitted that end-users, both faculty and students, will go to Google and Wikipedia first for information rather than the library, and therefore rather than e-books. His problem as a publisher was to ask how you get them to your content.


There was talk of various initiatives, business models and marketing plans, all of which involved the library and publisher making efforts to address this. Both Janke and Carden admitted that librarians complain of too many pricing models and collections, although in the experience of both, a one-size-fits-all approach won’t be right for librarians either. As Janke pointed out, “Librarians say they aren’t there to market publishers’ content”.


That’s interesting because as was said more than once during the seminar, users don’t care who the publisher is.


With e-books continuing to grow in popularity among both scholars and publishers, the traditional academic library will face challenges to how it works and what it should be there for. The way learning and the processing of information happen in scholarly circles has changed and will continue to do so.


There may be hard questions to ask about what the physical as well as the virtual nature of academic libraries should be, and the answers could mean some big changes. But as Nicholas points out, we have “seen a frightening dumbing down of information seeking”. There is a significant and serious role for the library still to play in all this. If there was ever a need for information professionals to take a leading role in addressing these issues, the time is now.