Content management

Published on July 15th, 2010 | by Rahel Bailie


Technology won’t fix a bad strategy

For a few years, after particular rounds of a presentation on the principles of component content management, a number of audience members would inevitably hover around the stage, looking either excited or agitated. I assumed the latter, and would wait for the questions that were so obviously bubbling up for the writers and managers who milled about.

“Our IT department gave us VSS and we can’t figure out how to get components out of that. How do you do that?”

“We’re tearing our hair out with SharePoint and versioning; what is the workaround?”

“Our website uses Documentum and it won’t do what we want. What do we do?”

“We have Interwoven and the interface is awful, so our staff won’t use it. What should we replace it with?”

Each set of circumstances was unique, yet eerily alike. Each instance involved the acquisition of a software product that was then implemented for an operational unit, without regard to whether the software was suited to the task. In some cases the mismatch was painfully obvious; in others it was more subtle. In many cases, certainly all the instances above, the software was popular, thriving software that had been implemented without a proper strategy. The result: generally, some sort of failure.

Bad strategy or no strategy?

During the past decade, acceptance of content management has increased drastically. The idea that managing any significant volume of content requires some technology assistance has been demonstrated many times over, and the adoption of a CMS (content management system) is no longer a novelty. Yet instances of the tail wagging the dog – buying the software before determining the operational needs – continue to be far too familiar to ignore.

When I would encounter an audience member at a later event, I’d ask if they had ever gotten the problem sorted out. Overwhelmingly, they would sheepishly admit that they had not. They continued to produce and publish content in ways that they acknowledged were highly inefficient and prone to operational risk, because they couldn’t convince their organizations of the need to make the changes that, to them, were so obviously needed. So what went wrong?

Go cheap or go home. This “strategy” is in play when the technology group already has some software – collaboration software, source code control software, or a Web CMS – that they insist be put to use because “we already own the software” or “the software is free.” Not only does this doom a project to failure, but anecdotal reports show that the operational team is then blamed for the failure. The technology group refuses to take responsibility for having foisted an inappropriate tool on that team. A stalemate ensues, everyone goes back to their previous kludgy ways of working, with no movement forward, and the technologists sit smug in their political win.

Don’t get it, don’t care; just do it. This “strategy” is in play when a group has invested heavily in a software application and is reluctant to invest more time or money to make it work for a different operational purpose. There is equal resistance to bringing in additional software to complement the original uber-application, and no impetus to understand why it is needed. There may have been a strategy developed for the initial implementation, but there is no acknowledgement that different operational needs will require further customization of the software. The idea is that the software should be one-size-fits-all: if the customization worked for one department, it should work for all departments. The department whose operational needs aren’t being met is sure to find inventive work-arounds, sometimes taking pains to hide what is going on for fear of sanctions from the powers that be. Generally, the situation surfaces when a serious breach of protocol comes to light that can be traced back to a work-around that failed.

Connecting strategy to technology

The idea that technology can be implemented without strategy is naïve, at best. The idea that technology or strategy can be implemented without a deep understanding of the content lifecycle is a wanton mismanagement of corporate assets.

Understand your content. The entire purpose of a CMS implementation is to support, with technology, the production, processing, and publishing of content. It is imperative to understand the content needs throughout the entire content lifecycle. Without this understanding, a technology implementation is sure to go wrong at some point, because there will be a mismatch between the content requirements and the software assigned to support them.

Know your standards. For any technology to be effective, there needs to be an understanding of how the content can be leveraged. This generally involves connecting systems, whether that is as simple as providing an RSS feed or using microformats, or as robust as implementing DITA to make content system-agnostic, or integrating content from one system into another through the magic of XSL transformations.
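To make the idea of system-agnostic content concrete, here is a minimal sketch of the kind of transformation described above: re-tagging a DITA-style concept topic as HTML so another system can consume it. The article refers to XSL transformations; since Python’s standard library has no XSLT engine, this illustrative version maps elements by hand with `xml.etree.ElementTree`. The sample topic, the `TAG_MAP` dictionary, and the `dita_to_html` function are assumptions invented for this example, not part of any real DITA toolchain.

```python
# Sketch: transforming a minimal DITA-style concept topic into HTML.
# In practice this job is typically done with XSLT; this hand-rolled
# mapping only illustrates the principle of system-agnostic content.
import xml.etree.ElementTree as ET

DITA_TOPIC = """\
<concept id="cms-strategy">
  <title>Why strategy comes first</title>
  <conbody>
    <p>Buying software before defining needs is the tail wagging the dog.</p>
    <p>A CMS supports a content lifecycle; it does not define one.</p>
  </conbody>
</concept>"""

# Hypothetical mapping from DITA-like elements to HTML equivalents.
TAG_MAP = {"concept": "article", "title": "h1", "conbody": "div", "p": "p"}

def dita_to_html(source: str) -> str:
    """Re-tag a DITA-like topic as HTML, keeping text and structure."""
    def convert(node):
        new = ET.Element(TAG_MAP.get(node.tag, "div"))
        new.text, new.tail = node.text, node.tail
        for child in node:
            new.append(convert(child))
        return new

    html = convert(ET.fromstring(source))
    return ET.tostring(html, encoding="unicode")

print(dita_to_html(DITA_TOPIC))
```

Because the structure, not the tag vocabulary, carries the meaning, the same topic source could just as easily be re-tagged for a different delivery system by swapping the mapping.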

Understand pertinent technologies. The decision-makers who, with much eye-rolling, confess with some pride that they don’t even know how to use styles in their word processor are the ones who allow bad software implementations to thrive. Get with the program, or get someone who can, because a lack of understanding of how to leverage content through technology more often than not shortchanges the project or leads to disastrous results. The complexity of systems has grown exponentially over the past decade; it is imperative to understand, at least at a high level, what the various technologies can do and how that can benefit – or harm – your content and, ultimately, your brand.

The concepts I’ve articulated here are not entirely new, nor are they rocket science. Consultants, software vendors, and their savvy clients have produced many case studies demonstrating successful implementations and the organizational value derived from them. Invariably, their successes share a common denominator: a strong strategy.





About the Author

Rahel Anne Bailie is a synthesizer of content strategy, requirements analysis, information architecture, and content management, brought together to increase the ROI of content. She has consulted for clients in a range of industries, on several continents, whose aim is to better leverage their content as a business asset. Founder of Intentional Design, she is now the Chief Knowledge Officer of London-based Scroll. A Fellow of the Society for Technical Communication, she has worked in the content business for over two decades. She is co-author of Content Strategy: Connecting the dots between business, brand, and benefits, co-editor of The Language of Content Strategy, and is working on her third content strategy book.



4 Responses to Technology won’t fix a bad strategy

  1. David Hobbs says:

    Ooph. While helping clients select platforms to drive their web sites, I’m struck by how different their needs actually are – and how often they want to use what they already have in place because it’s free. Sometimes there may be a match, but I agree that figuring out the strategy is the first step. In fact, working out an understanding of the organization’s actual needs, goals, and direction is probably the most important part of a selection process. The organizational issues, existing systems, and other realities also need to be considered to ensure a good match.

  2. I am quite convinced that the paradigm around software as a piece of equipment (or even a service) is in need of a tune-up.

    Having spent a significant period of my life both designing and writing networked information systems, one thing has stuck out to me: there is not actually that much code. There is plenty of data (read: content), but the actual amount of bespoke computation is both scant and technically not very challenging. Translation: there is not a lot of code to write and the code that there is to write is easy, easy stuff.

    What is a challenge is getting that code to put the correct information in front of the correct people so that they can inform themselves and make wise decisions or otherwise enrich their experiences, with as little cognitive overhead as can be managed.

    The products on the market, be they open-source or priced to take a share of your revenue, operate as if they are drop-in solutions to a particular problem, as if it were generic and well-defined. But we know that the more expensive a software package gets, the more onerous it is to operate, and the more it costs to customize. I further submit that networked information systems like ERP/CRM/CMS/LOL/WTF risk costing more to integrate than writing a system from scratch, with the investment (and concomitant lock-in) asymptotically approaching infinity as the product approaches effectiveness.

    I therefore propose an alternative way to think about software: it is an executable instance of its author’s opinion, to an extreme degree of specificity, of how a certain business process ought to happen (even if that is the business of, say, playing a game). A software product, be it trivial or grandiose, is far from confined to affecting only the business process it was intended to facilitate.

    Consider the aforementioned example of a writing instrument: no matter what an industrial designer does with a pen (without making it no longer a pen), it will scarcely affect your ability to write. Likewise a typewriter, to a slightly lesser extent. A word processor, on the other hand, will not only shape the way documents are created and managed within a business, but the businesses that surround it as well.

    The architects of enterprise software simply and inexorably suffer from a dearth of information relative to the complexity and nuance of their charge. The result is to overprescribe some behaviours and completely neglect others, then make up the shortfall with an extension interface.

    Again, the data is the most important consideration. This includes capturing the data, storing it safely and in its most consistent form, routing it to the right people, keeping it from the wrong people, arranging it to an arbitrary degree of granularity and extracting it in a way that is useful to other systems.

    The more expansive the product, however, the more arcane its internal structure, barring an explicit consideration of integrity and transparency. Likewise, for the reasons mentioned above, it is entirely possible to commit to a solution that inhibits or outright prohibits certain necessary activity. Performing the necessary due diligence when evaluating a product of this kind is nine-tenths of the work of acquiring a bespoke solution. The question, then, is: what are we paying these vendors for?

  3. (Oops, I forgot I had taken out the “aforementioned example of a writing instrument”. Re: “generic and well-defined”, like swapping a Pilot pen for a Bic pen.)

  4. Rahel Bailie says:

    When software consumers got accustomed to the install-and-use model for things like word processing and spreadsheet calculations, the assumption took hold that all software could work like that. In other words, a generic tool for a generic company. But there are no generic companies, and hence no generic tool will fit the bill when it comes to filling a specific business need.

    Circa 1979, a colleague of mine was using WordPerfect to do inventory control for children’s clothing. Why? That’s the tool he had in hand, and no one else in the company *got* what he really needed, so he made it work. Fast forward 30 years, and organizations are doing the same thing, just on a larger scale and with larger budgets.
