Thursday 31 January 2008

Digital research data gets principled

Having recently met up with the Research Information Network (RIN) I had the chance to find out what the organisation is about and what its plans are for the coming year. If you are unfamiliar with this body, they are a relatively new outfit, set up in 2005. Backers include the British Library, JISC, the Higher Education Funding Council for England (HEFCE) and the Medical Research Council, among others.


Their remit is to promote the interests of UK researchers and research information. Right now they are in the process of releasing various reports and guidance notes for the UK research community. I imagine they want some active engagement too, so if this sounds relevant, perhaps get in touch and give them your input.


In terms of their own projects, the organisation has already released a number of papers, and 2008 looks set to be a bumper year with an abundance of papers being published over the coming months. One of the more significant in principle is this month’s paper: Stewardship of digital research data: shared responsibility, mutual benefit.


The RIN says that although the value of digital research data increases as it is aggregated into collections, its true value isn’t being realised and won’t be until current research policies and practices are modified for the digital era.


With this in mind they have outlined a framework of principles. These call for clearer roles and responsibilities among researchers, funders and institutions, and for standardisation and assurance of data quality across the board. They also call for improved access, usage and credit arrangements that protect the rights of creators, users and publishers alike; with ever greater volumes of digital data being created, investment in better data management and information access, and improved efficiency, will be a must. Finally, the RIN calls for improved digital data preservation and sustainability, including suitable protocols and audit trails, optimised for digital works, that trace a piece of data’s history: who has annotated or amended it, how and when.


A more detailed summary of these principles can be found here, while the full report is detailed here.

Wednesday 30 January 2008

Will publishers go ACAP in hand?

At the tail end of 2006 I reviewed MPS Technologies’ BookStore platform. It looked like a good opportunity for publishers to exert a little more control over their intellectual property online, especially when facing the unrelenting ‘googlisation’ of their IP from Google Book Search (a project to digitise works unless the IP owner opts out) or initiatives such as the Open Content Alliance.


BookStore, as an e-commerce online book repository, had the potential to offset the effects of Google. Publishers could tweak, brand and analyse user demographics to their hearts’ content. Search engines could be better utilised on the publisher’s own terms.


All very nice for them, but to really work well it seemed apparent that BookStore would need to garner enough support from fellow publishers and adopt a common set of standards.


Since then, the people at MPS Technologies have been engaged in getting just that by collaborating with a number of organisations looking to develop a set of workable principles.


Enter the Automated Content Access Protocol (ACAP) pilot. The year-long ACAP scheme was an experiment designed to develop a new universal standard “for the automated expression of permissions online”: the control part, in other words.
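ACAP’s pilot work expressed these permissions as extensions to the familiar robots.txt file. The fragment below is a rough, illustrative sketch only: the paths are invented, and the field names follow my recollection of the ACAP v1 pilot rather than a definitive specification.

```
# Ordinary robots.txt rules, which non-ACAP crawlers still obey
User-agent: *
Disallow: /subscriber-only/

# ACAP-style permission fields (illustrative, from the v1 pilot)
ACAP-crawler: *
ACAP-disallow-crawl: /subscriber-only/
ACAP-allow-crawl: /previews/
```

The point of the pilot was that the publisher, rather than the search engine, states what may be crawled and used, and on what terms.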


As a joint initiative developed by the European Publishers Council, the World Association of Newspapers and the International Publishers Association, ACAP is nothing to be sniffed at.


Where BookStore came into it was as the only participating platform to focus solely on e-books, and as part of Macmillan publishing that could mean a big boost for e-book usage from now on.


On a smaller scale, but just as pertinent to the debate, is this story from the BBC, which tells of an ongoing row between Welsh writers and the National Library of Wales.


The library embarked on a £1m publicly-funded project to digitise modern Welsh writing. The writers want remunerating for what they see as the republishing of their work. The library has offered a compromise of allowing the writers to opt out of the scheme.


Sound familiar? Maybe ACAP is the way forward for IP creators as well as owners?

Tuesday 29 January 2008

Alfresco and SAP could (and should) cosy up

If you can’t beat ‘em, join 'em, goes the old dictum, and SAP could be joining forces with Alfresco to provide it with a missing component in its enterprise software stack.


SAP’s recent decision to help pump $9m into Alfresco’s coffers was an indicator that the open-source ECM startup is highly thought of in Walldorf. Of course, the investment was made by investment arm SAP Ventures and does not commit the firms to working together, but you don’t have to be Gipsy Rose Lee to read a little into these tea leaves.


Some close to SAP say that the German giant was looking for an ECM strategic purchase in the days when Shai Agassi had power. Open Text was the company most often linked back then and a deal would have made a ton of sense as the pair had worked together before on many projects. It would also have had the effect of potentially weakening Oracle, before the database giant went out and bought Stellent and BEA to give itself a full house of ECM products.


Alfresco might well be the hottest property among new ECM companies. The company has plenty of managerial experience and engineering talent from old-world ECM but has none of the legacy code and is free to explore the potential of the latest tools. Its approach of building systems by layers of web services, rather than through software breeze blocks, might well fit with SAP’s NetWeaver strategy.


I spoke to Alfresco chief marketing officer Ian Howells after the investment was announced and he was keen to reiterate that Alfresco’s plan is to IPO rather than sell out. The big vision is to build a “world-class” software company and there are rich pickings to be had among smaller companies and those happy to serve themselves rather than follow the ancient enterprise path of consulting-led engagements and long negotiating cycles.


Good for him and Alfresco, but that shouldn’t preclude the company from working closely with SAP so that when enterprise applications owners come to look at content management again, there is a good fit between this pair.

Friday 25 January 2008

IBM/Lotus struts its stuff

IBM/Lotus held its annual shindig in Orlando this week. Called Lotusphere, it was packed with devotees plus quite a few analysts and journalists. Yours truly included.


Perhaps Orlando was deliberately chosen, because the event was by far the most interesting thing going on there. And, miracle of miracles, IBM pulled some interesting rabbits out of the corporate hat.


Let's start with mashups, or composite applications, which sounds a heck of a lot more enterprisey. The on-stage demos showed how mashups could be created with a few drags and drops - exactly what Teqlo was trying to do before it realised it didn't have a decent business model.


Corporate information could be surfaced, through SOA or (I believe) through Websphere, external information could be picked up through widgets and onscreen information could be highlighted and used as input to the mashup. The end result was a little window which would deliver the information you wanted, presented in the form you most needed it. Graphs, charts, maps, whatever. Mashups can be catalogued and shared, making for ready re-use and potential time savings.


The only awkwardness is in how much you let users do themselves before pushing the mashups through to IT for approval. Something that hoovers up masses of data every few seconds, or some logic that dives into an infinite loop, wouldn't go down too well.


What else? Well, IBM hasn't been best known for delivering software to small and medium enterprises. It's now going to push hard by packaging up a whole bunch of collaboration stuff and offering it as a service. Code-named Bluehouse, it will be a few months before it sees the light of day, though you might want to investigate the beta version. It's aimed at users of Lotus Foundations, a software set that will also be provided as an appliance or straightforward installable software for organisations with between 5 and 500 connected users.


A unified interface might sound boring, but the idea of working on multiple applications through a consistent interface makes software a lot more attractive to users. If they switch applications without really realising it then the clunkiness and time wasting of conventional interfaces become all too obvious. It's been tried before but the difference this time is that the user can incorporate the elements they want rather than be stuck with someone else's idea of what a user needs.


Lotus went high, wide and handsome on social networking tools. Its own Connections was made all the more interesting by the fact that it can integrate with external services. The integration potential of Socialtext and Atlassian wikis made the point, especially given that IBM has its own wiki engine. And this is another thing that came out of the conference: that IBM encourages openness and integration. While it has its own world, this does have very porous edges which could lead to an interesting software ecosphere which doesn't necessarily put IBM at the centre.


In some ways it doesn't matter if IBM takes over the world with its software innovations because, if it does nothing else, it helps to raise the bar. The focus of Lotusphere was a hundred percent on making the users' lives better and more productive. And, it has to be said, more enjoyable and fulfilling.


I'll drink to that. Off on holiday now. See you in two weeks.

Wednesday 23 January 2008

Nomadic information

A couple of weeks ago I asked if things could get any worse as information loss bungles seemed to hit a government department every week. With last Friday’s announcement that 600,000 people’s identities had been lost following the theft of a Royal Navy Officer’s laptop, I estimate that gives us about 48 hours until the next blunder emerges.


That is unless you count Monday’s news from Defence Secretary Des Browne that a further two laptops with similar data (but fewer names) had been stolen since 2005. An investigation is to be conducted by Sir Edmund Burton. We will wait and see whether his findings address this fundamental flaw in information protection.


Online meanwhile, there are some calling for greater portability of data (much of which is personal). The problem right now is that for every social network, online community, or e-commerce activity we engage in, the information we enter on that site or network belongs to that online island. As we hop around these (albeit connected) islands we have to re-enter that information each time. It’s a chore.
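The debate rarely names specific formats, but FOAF, an RDF vocabulary for describing people and their connections, was one of the portable-profile candidates of the day. A minimal, hypothetical profile (the names and URLs below are invented for illustration) might look like this:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:name>Jane Example</foaf:name>
    <foaf:homepage rdf:resource="http://example.org/jane"/>
    <foaf:knows>
      <foaf:Person>
        <foaf:name>John Example</foaf:name>
      </foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>
```

In principle, each island could import a file like this instead of asking you to retype the same details yet again.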


However, a forewarning comes from Monday’s FT in Kevin Alison’s analysis article, Social networks may find that it does not pay to be possessive. The story heralds a growing call for a common set of ‘data portability’ standards, and it’s not just about saving people hassle.


Alison points out that “it could change the economies of the web itself as companies rush to build new services that take advantage of the free flow of information.” The article also asks if there are significant implications for the likes of Facebook and co., who may see their competitive advantage slip if the social networks’ information structure is wide open for all to see.


Issues of privacy will also be massive. Jia Shen, founder of RockYou, which makes Facebook applications, asks: “What are the rights of users and what are the motives of social networks to share data between themselves?”


That’s a worthwhile question and getting users to trust a web company when they can’t even trust their own government may take a lot of persuasion.   

Monday 21 January 2008

Oracle’s ECM full house

I don’t know if the earth moved for you last Thursday but it was a pretty crazy day as merger and acquisition activity once again wrecked the old-look competitive landscape.


Sun’s deal to buy open-source database outfit MySQL probably eclipsed the much larger agreement for Oracle to buy BEA Systems in terms of media attention. It wasn’t a good day for anybody else to get a moment in the media spotlight so lots of us missed another significant purchase, that of Captovation by, that man again, Oracle.


All these deals have some significance for ECM. MySQL is the database of choice in LAMP stack deployments and a regular partner for open-source content management systems. BEA is best known for other middleware-related technologies but it also has strong portal capabilities.  Captovation adds another, relatively small element to Oracle’s ECM equation through document capture and imaging.


Confused by Oracle’s if-it-moves-buy-it strategy? You will be if it’s your job to explain to bosses what’s going on at the company. Oracle now has a full house (and more) of portals, document imaging programs, ECM systems and developer capabilities, as well as CRM, business apps and integration tools. Its roadmap already looked like the M25 on a bad day, and with the BEA and Captovation contracts in place, things aren’t going to get any easier.


This is a very obvious period to ask for some face time with your Oracle rep, but give it a while yet. After all, the Redwood Shores executives will have to explain to themselves, and then their managers and staff, how having this pot pourri of technologies is going to help user organisations. You can't argue with Oracle's ability to extract stock value from its acquisitions but the jury is still out on whether it is doing anything for the people who bought the software.

Friday 18 January 2008

VortexDNA cuts web time-wasting

While researching a column to be published next month, I stumbled across a New Zealand-based company called VortexDNA. It believes that it can boil the characteristics that drive you down to a ten-digit code. This code can then be used to assess whether the web pages or people you encounter online are likely to be of interest to you.


Imagine doing a Google search and then having the best results highlighted in some way. The PageRank method is astonishingly good, but it knows nothing about you and your life purpose and values. Unless you are a totally brilliant search term creator, you can still end up wading through masses of pointless (to you) results.


The interesting thing about VortexDNA is that it doesn't need to keep any information about you. Although you answer a short questionnaire (some of it badly worded, sadly), your answers aren't retained. Your DNA is calculated and that's all the company needs to know. Best, though, to tell the company who you are and where to find you when new software releases come out.


You can download a Firefox extension right now and start experimenting with the software. Somehow, it overlays the web pages you're viewing with little orange highlights for stuff it thinks might interest you.


Your code could be attached to web pages that you visit, thus giving people an idea of the type of people who like this kind of page. The code will be repeatedly averaged as new people arrive. (Don't ask me for the mathematics; you'd have to grill chief boffin Branton Kenton-Dau on that one.)
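VortexDNA hasn't published the maths, but one minimal, hypothetical reading of "repeatedly averaged" is a per-digit running mean over visitors. The function name and code values below are invented purely for illustration:

```python
def update_page_code(page_avg, n, visitor_code):
    """Fold one visitor's code into a page's running average.

    page_avg: list of floats, one per digit position
    n: number of visitors averaged so far
    visitor_code: string of digits, same length as page_avg
    Returns the updated averages and visitor count.
    """
    new_avg = [
        (a * n + int(d)) / (n + 1)
        for a, d in zip(page_avg, visitor_code)
    ]
    return new_avg, n + 1

# Start from the first visitor's code, then fold in a second.
avg, n = [float(d) for d in "1234567890"], 1
avg, n = update_page_code(avg, n, "3214567890")
print(avg[0], n)  # first digit now averages (1 + 3) / 2 = 2.0
```

The appeal of a scheme like this is that only the running averages need to be stored, never any individual's answers.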


Your code could be used to refine the ads that get displayed in the sidebar or the books that Amazon recommends to you. All without knowing a shred of personal information.


So the VortexDNA techniques can be applied to personal benefit and business benefit. 'Click throughs' are very important to online advertisers and one way that Vortex hopes to make money is by selling its technology to providers of online services, so they can increase their ad revenues and continue to give us the stuff we want for free.

Thursday 17 January 2008

My generation

Some more news from the British Library this week, as they co-launched with JISC a report on “Information behaviour of the researcher of the future”. The spiel on the day was about revealing who the Google Generation actually were. This followed some much needed myth-busting on how research and search is conducted online and by whom.


Underpinning all this is the issue of information literacy, or often the lack of it, among searchers today.


The fundamental myth the report addressed was that the ‘Google Generation’, those born and raised in an internet world, possess the best web literacy. In fact, what the report found is that although younger generations are at ease with computers and the web, there is a heavy reliance on search engines and a tendency to avoid reading and analysing online. The preference is to view, and view quickly, before rapidly moving on.


The Centre for Information Behaviour and the Evaluation of Research (CIBER) from UCL who conducted the study explained that a main characteristic of the digital age is horizontal information seeking. This entails a form of ‘skimming’ information before ‘bouncing’ somewhere else. Something I would guess will ring true even for the most information literate of you.


What is also interesting is that this behaviour of the ‘Google Generation’ isn’t limited to those born from the early 1990s onwards (although they may be faster adopters of such behaviour). It seems all age groups share these traits, and that doesn’t bode well for information literacy skills.


Of course, the BL’s involvement in this is to address how libraries should adapt to the changing digital needs and behaviours of their users. They cited their example of embarking on massive digitisation projects as a means of opening up more of their content to more of the population. But, more than anything, they wanted to “send a strong message to the government and society at large” about those lacking in decent information skills.


There are a lot more insightful gems contained in the report here, certainly worth a good old fashioned read-through.

Tuesday 15 January 2008

Advocating openness

Today’s FT story on ‘super advocates’ in social networks got me thinking about how web 2.0 platforms are being used to sell all manner of wares. That line of thought inevitably drew me to consider whether my own information sources are subject to something similar. Likeminded ideas, opinions and politics can be worth a lot to an organisation looking for support.


It all stems from a report conducted by Experian and Hitwise. They suggest companies looking to exploit social networks for sales should set aside some of their marketing budget to ‘court’ so-called ‘super advocates’: well known, respected and influential members of a particular social network.


By borrowing the credibility of such advocates, products, services or events can be recommended convincingly to readers. In other words, viral marketing.


The note of caution the report sounds is that the very credibility of the ‘super-advocate’, and therefore of the social network, will be called into question if readers perceive they are being sold to. Users will jump ship, and jump rapidly, if they smell a rat.


All cynicism to one side, I doubt that this kind of practice is limited to the commercial sector. And that’s what got my attention: how do we know who is the real deal and who isn’t? For the record, Incisive Media, which owns IWR, is itself owned by venture capital giant Apax.


In our arena we use social networking sites for professional purposes; learning, making contacts or keeping up to date with the latest developments. On this very blog we often recommend product x or service y to our readers. Something we think that might help you work a little better; hopefully you trust our recommendations are genuine.


Cabinet Ministers (actual and shadow) seem to regularly be in the spotlight due to their own or department’s poor information practices. There are checks and balances in place to maintain credibility and quite rightly so. Meanwhile if the press step out of line there is the Press Complaints Commission to go to.


As far as I’m aware there isn’t an online equivalent (not that it would be workable) for bloggers and other social networking butterflies to adhere to. Keeping a credible reputation must be something every social networker strives for.


Credibility is king, so transparency is a must, but what should we be telling each other online to maintain this?

Monday 14 January 2008

Fast deal muddies Microsoft search strategy

A lot of people see Microsoft’s agreement to buy Fast as just another example of how mergers and acquisitions are leading to inevitable consolidation in enterprise software generally. I’m not so sure it’s as cut and dried as that.


Most M&A is done to fill a gap in functionality or to grab market share. The Fast deal does both but it would also appear to cut right across Microsoft’s strategy of late last year when it described a plan to develop its search capabilities by organic means with a product called Search Server 2008.


Microsoft now says it plans to integrate Fast with Search Server and SharePoint but, having just cost Redmond $1.2bn, the Fast technology is a racing certainty to take precedence.


It’s a slightly odd state of affairs as it’s only a few months since Microsoft was describing how Search Server would soon be able to compete at the top end of enterprise search but, like Newcastle United parting company with coach Sam Allardyce, one can only assume that Microsoft saw the light, albeit at an odd juncture.


One report suggests Microsoft also took a close look at Endeca and Autonomy before deciding Fast was the pick of the bunch available. Autonomy, at perhaps twice the price of Fast, would be pricey given Microsoft’s relatively Scrooge-like attitude to acquisitions, but both of these companies will now be under more takeover scrutiny than ever. It will come as no surprise that the most likely buyers are IBM, Oracle and Google.


Incidentally, Fast, like Autonomy, has R&D in Cambridge. That’s Cambridge as in the great university, punting on the Cam and so on, not Cambridge, Massachusetts or some other Cambridge. In enterprise search, at least, there is a part of the tech world that remains forever England.

Thursday 10 January 2008

Social software implementers: read this

Thomas Vander Wal is a man with at least two claims to fame. One was his invention of the term 'folksonomy' and the other his coining of another term, 'infocloud' or 'info cloud'. Anyone who's involved in social computing will have bumped into each of these phenomena. They are the way that ordinary folk tag and access information, rather than having to work with conventional rigid taxonomies.


So the man's no slouch when it comes to information, its organisation and architecture, especially in the digital social world. He's a pioneer and a deep thinker. And, this week, he brought a ray of sunshine into my life.


I'd been working on a content management, collaboration and web-publishing project. Trying to nail all the inputs, outputs, tags, relationships, and so on was not easy. But it seemed right that I should do this mapping without any consideration of the actual software tools that would eventually be brought to bear on the problem. The approach owed much to the clear-thinking Lee Bryant of Headshift, who explained that he always works on user needs long before thinking of what solutions to apply.


So, having got all my ducks in a row, I decided it was time to investigate the software and services that would best deliver the elements. But a quick diversionary visit to Twitter revealed a message from Thomas Vander Wal. He'd posted 'The elements in the social software stack' on his blog. And he explained in considerable, and convincing, detail how and why social software worked. He even had a diagram to clarify how the bits hung together.


It was a perfect reality check for my own thinking. It explained the issues way beyond my own articulation. I won't spoil his story but I'll give you a clue: The running order for the elements to work is: Identity, Object (social object), Presence, Actions, Sharing, Reputation, Relationships, Conversation, Groups and collaboration.


If you're involved in introducing social software into your organisation, read this single post, even if you read nothing else.

Wednesday 9 January 2008

Information professionals guiding you to the best bits of the blogosphere

He’s an award winner and an information professional at the leading edge of where the industry is going. Roddy MacLeod tells IWR about his involvement in trailblazing blogging.


Q. Where is your blog or blogs?
A. There’s the Heriot-Watt Library blog Spineless (at http://hwlibrary.wordpress.com) and News from TicTocs (http://tictocsnews.wordpress.com). TicTocs is a JISC-funded project to develop a service to transform journal current awareness. There’s also a private library staff blog I created to post details of anything of interest, a private TicTocs project blog for the project consortium, and a blog in Emerge, another JISC project (http://elgg.jiscemerge.org.uk/roddym weblog). I also write a fun pseudo-travel blog, which I’m too embarrassed to reveal to any but close friends.


Q. Describe your blog?
A. Spineless provides news, views, information and advice on Heriot-Watt Library’s resources and services. We try to create eye-catching subject lines and write posts from the reader’s perspective, explaining what they may gain from whatever is under discussion. There have been posts about e-book services, open access, useful web tools, and even what we did over the summer. In the future I hope to get some students involved in posting. News from TicTocs keeps stakeholders updated with the project’s progress. We’re currently busy with technical development, so most of the posts have been about journal table of contents’ RSS feeds.


Q. How long have you been blogging?
A. Since 2005.


Q. What started you off?
A. I discovered by chance that there were several hundred bloggers claiming to be located in the Heard and McDonald Islands, which are actually uninhabited. It seemed such a ludicrous yet free-thinking idea that people could locate themselves virtually, anywhere, and then write blogs (some of which supposedly describe life in those islands) that I got hooked. Later, I realised there was also a serious side to blogging.


Q. Do you comment on other blogs and what is the value of doing so?
A. I have some RSS feeds for search terms such as TicTocs, which allow me to monitor any blog that mentions the project, and I comment on those posts, often just to thank them for their interest. I also make occasional comments on one or two other library and information blogs, if I think I can add anything to the discussion. With respect to fun blogs, comments encourage the bloggers, so I regularly contribute to a handful.


Q. How does your organisation benefit from your presence in the blogosphere?
A. The Spineless blog is one way to market the library, give it a higher profile, and make sure that expensive resources are exploited. News from TicTocs is, I hope, building interest in the future service.


Q. How does it help your career?
A. I think bloggers are scratching the surface of what may be achieved in the future, and it’s exciting to be involved in all this. Also, the best way to develop and learn is at the coalface.


Q. What good things have happened to you solely because of blogging?
A. Blogging has helped me keep up to date with new trends, and my circle of virtual friends has increased. It has also, I hope, made me aware of writing things that will interest readers, rather than just myself.


Q. Work aside, which blogs do you read just for fun?
A. Blogs that are located on uninhabited islands, and Silversprite (www.silversprite.com) because its author is an eccentric with a good photographic eye.


Q. Which bloggers do you watch, link to and why?
A. I hate to admit that I monitor 250 feeds via Bloglines, but some of these are RSS feeds rather than blogs. I regularly check Phil Bradley’s blog (at http://philbradley.typepad.com) because he’s completely on top of things; Really Simple Sidi (http://rafaelsidi.blogspot.com) because Rafael Sidi often has new and informed angles on things of interest; CleverClogs (www.cleverclogs.org) in the hope that I can understand Marjolein Hoekstra’s posts (which recently included the following: “Twitter to Skype Mood Message using Twype”); UK Web Focus (http://ukwebfocus.wordpress.com) because Brian Kelly is informative and amusing; and Lorcan Dempsey (http://orweblog.oclc.org) because he’s a big thinker. Peter Scott’s Library Blog (http://xrefer.blogspot.com) contains lots of news items of interest. Maeve’s Blog (http://maeverest.blogspot.com) is interesting, and, of course, the IWR blog (http://blog.iwr.co.uk) to keep up with industry news.

Tuesday 8 January 2008

First step taken in copyright consultation process

This morning the British Library (BL) hosted the Intellectual Property Office Copyright Exceptions Consultation launch.


The consultation is the next phase for the government following last year’s recommendations on copyright law from Andrew Gowers. The Gowers review was widely seen as encouraging from a library and scholarly perspective. It recommended exceptions from certain elements of copyright law for institutions such as libraries and archives. These could include the ability to format-shift copyrighted works in order to ensure their continued safe preservation.


Outlining his vision for the IP consultation, Lord Triesman, Parliamentary Under Secretary of State for Intellectual Property and Quality, repeatedly called for contributions from interested parties across the board.


Speakers hailed from the British Phonographic Industry, National Consumer Council and British Universities Film & Video Council. Opinions and questions from attendees and speakers alike were sharp but remained civilised.


The minister pointed out that it was necessary to understand that adjustments would need to be made to copyright. The process would be an organic one, as that would be the best way of “staying up-to-date with changes in attitude, culture and technology”.


BL CEO Lynne Brindley identified the evident polarisation in the debate during her speech, highlighting to the audience that “Exceptions in copyright law are fundamentally important for the society, culture and economy of a democratic society”, depending on how and where those exceptions are allowed.


Brindley outlined four big questions she believed the consultation process needed to address:


1. How can we best ensure the interests of rights holders are respected and protected, while at the same time respecting and protecting established exceptions that are present in copyright to engender knowledge, learning and creativity?

2. How can a complex area of law like copyright be simplified to the point of intelligibility, and therefore gain legitimacy amongst the new generation of digital natives who see the right to mix, mash-up and share as being exactly that, a right?

3. How can the democratically established public interest elements of our copyright law, as represented by exceptions, be translated and made relevant in the digital age, when they are being undermined by private contract?

4. To what extent will or should copyright law be harmonised internationally, and to what extent will national differences in law be defensible or desirable on the internet - a world with linguistic but not geographic borders?

If you want to join the debate (the deadline is 8 April 2008), get in touch with the UK Intellectual Property Office by emailing copyrightconsultation@ipo.gov.uk, telephoning +44 (0) 1633 814 912, or just add your comment on our blog. For a further rundown of the morning's events, there is an interesting post over at the Open Rights Group site.

Monday 7 January 2008

EMC’s Document Sciences buy points the way for ECM

Merger and acquisition activity has been the name of the game in enterprise content management for the last few years so it was no great surprise to see EMC adding to its content management and archiving division with an $85m agreement to buy Document Sciences in the dog days between Christmas and New Year.


However, whereas a lot of the buying and selling in ECM has been about consolidation, this is a “tuck-in” deal, meaning that it is a purchase EMC can easily afford, of a company that adds incrementally to its portfolio rather than changing the face of its strategy. That said, I think it’s indicative of the direction in which ECM is heading.


Document Sciences’ Xpression suite helps firms automate document output and communications. If you want to create contracts, marketing correspondence or company policies, its software can help. More importantly, it has pre-built hooks into ECM software, as well as ERP and CRM programs, so output management is not a silo but an integrated part of your business infrastructure.


A lot of ECM 1.0 has been about taking a belt-and-braces approach to content. It has sprung from the “save everything” and “keep the CEO out of jail” period of post-Enron paranoia. The Document Sciences deal is a reminder that making use of that stored content through automated business processes should be the raison d’etre of ECM. A fully-functional ECM system is a system for saving time by creating a single, intelligent repository of content that can be reused and repurposed in many ways.


ECM shouldn’t be a glorified storage dump and it’s good to see EMC recognise this.

Thursday 3 January 2008

Egoless blogging?

I've spent (wasted?) a goodly chunk of the last month in Twitter. For those who haven't encountered it, it is a rapid-fire microblogging system that restricts each post to 140 characters. For those so-minded, it means they can post ten times as many items per day as when they were blogging. The posts appear on your computer screen or (selectively) on your mobile phone as text messages.


As you might expect, many of the 'in-crowd' hang out there - Scoble, Macleod, Le Meur and so on. A lot of the time they don't so much Twitter as witter. But then, every twenty posts or so, someone comes up with some really useful information. Obviously, you choose who to 'follow' so your diet should include some nourishment along with all the filler. The trick is not to eat everything that's put in front of you. Push it around the plate a bit and pick out the good bits.


A couple of months ago, these same people were all over Facebook. It was 'the' place to hang out. And, before that, of course, they were all busy trying to become 'A-list' bloggers. I don't think blogging's died for them but, judging from the current mood, Facebook is definitely falling from favour.


The great thing about all this is that if the ego-driven, time-wasting, blog postings are being shrunk and shifted to Twitter, then what's left ought to be a better, more thoughtful, blogosphere.


We can live in hope.


Happy new year.

Freedom of information – three years on

Unsurprising news today that MPs from the Commons Justice Committee have recommended that the government severely strengthen the Data Protection Act. The report followed numerous information security breaches that came to light, in particular the loss of 25 million personal records at the end of last year.


The BBC points out that at present, “government departments cannot be held criminally responsible for data protection breaches”. If the committee’s recommendations are followed, that could certainly change.


Over the Christmas period it also emerged that nine National Health Service trusts had lost patient records, believed to include personal information. This bookended the loss of 3 million learner driver details a week before, when the information was sent to a US private contractor.


Information Commissioner Richard Thomas has warned that there is more to come.


Meanwhile, I came across this snippet of research conducted for his office which was released earlier in the week. The findings examined the positive impact and benefit the Freedom of Information Act has had, three years since its inception.


Of the 1,000 people polled, over 80 per cent said they now have more confidence in public authorities because of the Act, up from 58 per cent three years ago, just before the Act came into force.


Essentially the respondents felt their public institutions were now more transparent and accountable than before.


There is no mention of whether respondents' confidence in the MPs who voted last year to place restrictions on the Act, citing cost reductions as justification, had any effect on the results.


I do wonder whether, twelve months from now, a similar piece of research would produce figures as positive as those the “three years on” study seems to reveal.

If strict measures are put in place and action is taken quickly, some faith could be restored. Hopefully such measures will motivate those responsible for protecting sensitive information to treat it with the respect it deserves. That will need to apply as much to business as to government.


Ultimately business leaders and civil servant bosses will need the guidance of information experts like our readers. Let’s see if rock-bottom public confidence in governmental data protection can be restored this year. The only way is up – isn’t it?