Taking Software Asset Management (SAM) seriously


Leaving aside the software licensing problem, surely no-one reading this will have problems finding out what software they use!

It’s a no-brainer: people aren’t allowed to run open source software; people certainly aren’t going to buy their own software for work; we know what we bought, even if matching it up with licences might be difficult (but we keep all the original software disks in a fireproof safe, so how hard can it be?); and if we do have any minor issues, we can just take a look at what people have from the network. That’s at least one issue we just don’t need to bother about.

At least, that’s what an IT group told me once when I was in internal control. So I asked to have a look at the copies of, say, PC Tools in the safe and said how impressed I was with how neatly they were stored.

I then put a simple directory search for the PC Tools executable onto removable storage, put it into a few IT group PCs (you need to be a bit careful with reputation risk when looking for governance issues—sometimes, success can bring its own problems) and found considerably more copies than the number of disks in the safe. I then sent the IT Director a draft of a report I was working on for the board, with the suggestion that it’d probably be out of date before I could publish it. And software asset management (SAM) suddenly got taken a bit more seriously.
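The crude sweep I used back then can be sketched in a few lines of modern Python. This is illustrative only: the function and the executable name in the test are placeholders, and (as becomes clear below) matching on filenames alone is about the weakest form of discovery there is.

```python
import os

def find_files_named(root, target_name):
    """Recursively search *root* for files whose name matches
    target_name (case-insensitive) and return their full paths.
    A filename match is exactly as fragile as it sounds: a
    simple rename of the executable defeats it entirely."""
    target = target_name.lower()
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for fname in filenames:
            if fname.lower() == target:
                matches.append(os.path.join(dirpath, fname))
    return matches
```

Run against a PC’s drive with something like `find_files_named("C:\\", "pctools.exe")`, this finds every file bearing that name, with no idea whether each hit is a separate installation, an old version, or something unrelated that happens to share the name.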

I’m telling this story because a moment’s thought tells you how lucky I was to get a result. No-one was trying to hide what they were doing, because they didn’t think software asset management was a problem, and I only needed to find one example to debunk complacency, not document the whole problem. People even bought (or copied) their own software to help them work better, and thought that they were doing their employer a favour.

A simple file rename or two would probably have defeated me; I’m rather amazed I had access to those PCs (a bit of social engineering—I had been in the IT group); and I had no real idea whether I was finding lots of different copies of the software or simply previous versions which hadn’t been deleted after an upgrade. I had no real picture of the usage patterns of all this software and whether the problem was universal or just in a few groups.

When the IT group starts taking the Software Asset Management (SAM) issue seriously, it needs a much more sophisticated approach. As Steve Schmidt of Flexera Software points out: “Discovery via automation is important, for efficiency, continuous function and to avoid human error. Application recognition libraries are needed. For example, Flexera Software maintains an Application Recognition Library and a complementary SKU [Stock Keeping Unit] Library, with data for 11,000 publishers, 100,000 apps, and 200,000 SKUs”.

Flexera Software is apparently trying to occupy the place between software producers and enterprise consumers, and is involved with, and investing in, the ISO 19770 standards to help both sides manage application usage. According to Schmidt, it is both “helping to shape/create the standard and talking to our customers about their plans to adopt and their intended uses”. This, it seems to me, is vital, as any software tagging scheme that isn’t end-user-led is unlikely to succeed.

However, automation has its own issues, not least that some software may not be accessible from the network (highly confidential or potentially vulnerable systems may not be networked) and some software may be on platforms that don’t support your discovery automation tools.

In fact, I still believe that (software-assisted) manual audits are probably the most reliable option for initial discovery, partly because they can assess and deal with the nature of the problem (complacency and carelessness vs. deliberate malfeasance, for example) and can ask intelligent questions (such as “what about that PC over there with no network cable?”). However, they are expensive, slow and potentially disruptive, which makes them impractical for maintaining the asset register going forwards.

So, automatic discovery over the network is an attractive proposition, but much software isn’t designed to facilitate it, and it is easy to ‘discover’, say, two different software installations where there are simply two versions of one product in use.

And if you run two different discovery tools, what chance (without something like ISO 19770-2) that each discovers the same piece of software but calls it something different; or that one discovers a desktop suite as one piece of software where the other finds several pieces of software (the suite components)? It is issues like these which make software tagging standards such as ISO 19770-2 (see the TagVault Software ID tags FAQ here) important, ensuring that your picture of your software estate isn’t tool-dependent.
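To make the point concrete: an ISO 19770-2 software identification (SWID) tag is a small XML file installed alongside the product, so any tool that reads it reports the same publisher, product and version. The tag below is a hand-made illustration loosely following the 2015 revision of the schema (the product details are invented), with a minimal Python parse:

```python
import xml.etree.ElementTree as ET

# Illustrative SWID tag, loosely following the ISO 19770-2:2015
# schema; "ExampleSuite" and "Example Corp" are invented.
SWID_TAG = """<SoftwareIdentity
    xmlns="http://standards.iso.org/iso/19770/-2/2015/schema.xsd"
    name="ExampleSuite" version="4.2.0"
    tagId="example.com-ExampleSuite-4.2.0">
  <Entity name="Example Corp" regid="example.com"
          role="softwareCreator tagCreator"/>
</SoftwareIdentity>"""

NS = "{http://standards.iso.org/iso/19770/-2/2015/schema.xsd}"

def identify(tag_xml):
    """Return (publisher_regid, product_name, version) from a SWID
    tag, so every discovery tool that honours the standard reports
    the same identity for the same installation."""
    root = ET.fromstring(tag_xml)
    entity = root.find(NS + "Entity")
    return (entity.get("regid"), root.get("name"), root.get("version"))
```

Two tools reading this tag cannot disagree about what they have found; two tools guessing from executables and registry entries very easily can.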

I asked Schmidt where software tagging and associated standardisation initiatives have got to in practice. He said “Software tagging is a nascent technology, but we anticipate it will play a role in the future management of software assets. It will be another source of data about applications. It will be an input to the analysis first for identification, and then for the higher business value delivered through the comparison of usage data and entitlement data.

Using the data effectively can assist in achieving the unique business intelligence about an organization’s license position, and enabling license optimization on a continuous basis”. I interpret this as saying that the need for standardised software tagging hasn’t reached the mainstream yet (rather confirmed by the blog here), although software publishers like Adobe and Symantec are starting to take an interest, and I think it will rapidly be accepted by anyone who thinks seriously about the issues.

Schmidt points out another issue: “you also need to fully understand product use rights to get the full value of software license optimization efforts; so a company like Flexera Software maintains a Product Use Rights Library for titles from key vendors such as Microsoft, Adobe and Symantec that contains product-specific business logic regarding upgrade rights, second machine use rights and other rules that can be applied to find the true license position of a software estate. This is because the most immediate business benefit comes with reconciling application usage with the product use rights, both from the library and captured from specific contract terms”.
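At its simplest, the reconciliation Schmidt describes is a comparison of discovered installs against licence entitlements. The sketch below is mine, not Flexera’s: a toy version that ignores the product-use-rights logic (upgrade rights, second-machine rights and so on) which would adjust the raw counts in any real tool.

```python
from collections import Counter

def reconcile(installs, entitlements):
    """Compare discovered installs (a list of product names, one
    entry per install) with licence entitlements ({product: count}).
    Returns {product: surplus}: a negative surplus means more
    installs than licences (a compliance gap); a positive surplus
    means shelfware (an optimization opportunity)."""
    installed = Counter(installs)
    return {
        product: entitlements.get(product, 0) - installed[product]
        for product in set(installed) | set(entitlements)
    }
```

For example, `reconcile(["A", "A", "B"], {"A": 1, "C": 2})` reports product A one licence short, product B entirely unlicensed, and two unused licences for product C.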

This is where ISO 19770-3 comes in, I guess, although I do rather worry that a focus on license management will limit effort on software tagging to those vendors which see themselves as having serious licensing or piracy issues, and that difficulties in identifying software from other vendors (and even in-house and open source software) will undermine the general business case.

Software tagging, in my opinion, must be thought of in terms of managing the software estate as a whole, and must be end-user driven, or it may come to be seen as a tool used by software vendors to maximise licence revenue, which won’t help general acceptance.

I think that once you are taking SAM seriously, you then have to maintain the asset register going forwards (you can’t just achieve asset management, tick the box, and move on to something else) and, most importantly, you must integrate it with a holistic Configuration Management Service and the rest of your IT governance environment. You don’t want to build a SAM silo. Schmidt’s reaction to this is: “Automated software discovery is a key function of basic software asset management, and a building block to achieve higher-order enterprise license optimization.

Flexera Software takes the approach of leveraging systems that may already be in place, e.g., an implementation of Microsoft’s System Center Configuration Manager (SCCM), by using data it has collected over time as input to the application identification process. This data can be augmented through additional collections by SCCM or by Flexera Software’s FlexNet Manager Suite”. Of course, SCCM may not be everyone’s choice of configuration management tool—as the BBC is always saying these days, “other tools are available”.

“Discovery,” Schmidt continues, “is only useful with a robust Application Recognition Library, which accurately matches the data recovered with known software titles. This effort, in turn, is only truly valuable if you can leverage it to drive business benefit by understanding product use rights and reconciling usage against those rights for the applications you have. This yields license optimization and can do so on a continuous basis”.

So, I think that we are in at the start of something here: standards-based software asset management automation, built on top of robust, resilient, standards-based software tagging. This is definitely an aspect of IT governance: there’s a governance story around optimising, or fully leveraging, the IT budget, and another around maintaining continuous compliance; both, as Schmidt says, “significant governance issues” today.

David Norfolk is Practice Leader Development and Governance (Development/Governance) at Bloor Research. David first became interested in computers and programming quality in the 1970s, working in the Research School of Chemistry at the Australian National University. Here he discovered that computers could deliver misleading answers, even when programmed by very clever people, and was taught to program in FORTRAN. His ongoing interest in all things related to development has culminated in his joining Bloor in 2007 and taking on the development brief. Development here refers especially to automated systems development. This covers technology including acronym-driven tools such as: Application Lifecycle Management (ALM), Integrated Development Environments (IDE), Model Driven Architecture (MDA), automated data analysis tools and metadata repositories, requirements modelling tools and so on.