(Updated this post 10/29/2004, as it is getting so much attention.)
The ITIL concept of the Configuration Management Database has provoked great confusion in enterprise IT circles. Research firms are weighing in, the trade press is trying to make sense of it, and the vendors are tripping over each other in their haste to brand whatever data stores they have as “CMDBs.”
Consider Axios and Troux, two companies that claim to have CMDBs. Axios has a system that can automatically discover much about the physical characteristics of your desktops and servers, and can give you a detailed inventory of packaged software and its configurations. It competes primarily with other desktop-centric vendors such as Altiris. While it claims to support dependency mapping, the product has no provision for higher-level, logical concepts such as process or service. Axios calls its back-end database a CMDB, as does similar desktop-centric player EZManage. (Competitor Altiris does not.)
Troux, on the other hand, has an offering (a substantial services purchase is required) that Gartner positions in both its enterprise architecture and metadata repository quadrants, yet which is assertively marketed as “The only Fortune-500 class ITIL configuration database (CMDB).” It does not include an agent architecture or the low-level scanning of detailed hardware, firmware, and software configuration that Axios or Altiris provides; rather, it is a sort of specialized data mart for IT information, with scanners that convert data from various sources and integrate it into a common repository supporting dependency mapping, financial analysis, and other higher-level IT functions. (This is a tough data rationalization problem, which is why one needs to purchase services as well as the tool from Troux.)
In addition to moving into the ITIL space, Troux competes with non-ITIL tools such as portfolio management vendors ProSight and Pacific Edge, and enterprise architecture/metadata repository vendors such as Popkin, Rochade, and Adaptive (a personal favorite of mine for their strong standards leadership). Cendura is starting to sell a very similar set of capabilities as well, with a product branded as a “service catalog.”
Both Cendura and Troux call their back-end databases CMDBs. Popkin, Rochade, and Adaptive use the more established term “repository,” as do pure-play enterprise architecture and CASE vendors such as CA Advantage, Sybase, Embarcadero, and the rest.
I could go on. Opsware has a CMDB-branded system that focuses on core server capabilities with software, but doesn’t appear to cover the higher-level business drivers. Yet other products have more of an asset management flavor (CMDB vs. AMDB being a recurring theme in some ITIL forums). Centrata is moving into some of the same higher-level IT management space as Mercury and Niku, but calls its back end a “Service Configuration Management Database.”
So, how can we start to understand this terribly confused marketplace? How does the potential CMDB consumer avoid buying expensive redundancy? I would propose the following key criteria:
ITIL processes and others: Which processes does the proposed CMDB-based tool support? Change? Config? Incident? Problem? Application Management? The world does not end with ITIL. In particular, project portfolio management, demand management, and IT governance are all important process areas outside of the ITIL Red and Blue books.
Desktops versus data center: Probably one of the most basic questions to ask. Is the product primarily aimed at end-user service level management with respect to the deployed PC? (This is a very well-understood problem, moving rapidly toward commoditization.) Or does the product claim to have a story in the data center? If so, the questions become much more complex.
Logical/physical: How mature is the product in representing logical consensus concepts such as application, process, and service? Does the tool only cover hardware, network, and discoverable executables? Or does it allow the user to define applications, processes, and services independently of discovered assets? If so, it is starting to become a modeling or portfolio management tool. For tools intended to “run” the IT operations, yet including high-level logical concepts: how can those concepts be aligned with your enterprise architecture team? Your data center administrators are probably not the best positioned to define sticky concepts like “process” or “service”; such analysis is best left to enterprise architects. Can you define business processes in Popkin and then replicate them into the dependency mapping tool for further elaboration of service dependencies?
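The logical/physical split above can be sketched as a toy CI model: discovered physical items carry a flag distinguishing them from manually modeled logical concepts, and dependencies link the two layers. All names and fields here are illustrative, not drawn from any vendor's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str
    ci_type: str          # e.g. "server", "application", "service"
    discovered: bool      # True if populated by a scanner, False if modeled by hand
    depends_on: list = field(default_factory=list)

def add_dependency(upstream: ConfigurationItem, downstream: ConfigurationItem):
    """Record that `downstream` depends on `upstream`."""
    downstream.depends_on.append(upstream)

# Physical CIs come from discovery; logical CIs are defined by architects.
web01 = ConfigurationItem("web01", "server", discovered=True)
order_app = ConfigurationItem("Order Entry", "application", discovered=False)
order_svc = ConfigurationItem("Order Fulfillment", "service", discovered=False)

add_dependency(web01, order_app)      # the application runs on the server
add_dependency(order_app, order_svc)  # the service is realized by the application
```

The point of the `discovered` flag is exactly the governance question raised above: a scanner may own the physical layer, but someone (likely the architecture team) must own the logical one.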
Discovery approach: Does the tool use agents installed on the devices to be managed? If so, how discovery-centric is the approach? Is the tool biased in favor of the discovered data, or does it provide some means to distinguish between “is” and “ought”? (Relying strictly on discovery is like relying on your bank statement as your only budget. At some point, you need to determine what the configuration SHOULD be. This is the distinction between budget and control.)
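The budget-versus-control analogy can be made concrete with a minimal drift check: compare the discovered configuration against the authorized baseline and report where they disagree. Package names and versions are invented for illustration.

```python
def config_drift(ought: dict, observed: dict) -> dict:
    """Return items where the discovered state differs from the desired state."""
    drift = {}
    for key in set(ought) | set(observed):
        desired = ought.get(key, "<absent>")
        actual = observed.get(key, "<absent>")
        if desired != actual:
            drift[key] = {"ought": desired, "is": actual}
    return drift

baseline = {"apache": "2.0.52", "openssl": "0.9.7e"}
discovered = {"apache": "2.0.52", "openssl": "0.9.7a", "sendmail": "8.13.1"}

print(config_drift(baseline, discovered))
```

A purely discovery-driven tool only ever sees the `observed` side; it is the `ought` side that turns a bank statement into a budget.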
Within the discovery approach, there are further distinctions. There are a variety of means to extract metadata from servers, from less to more intrusive:
- Monitor published SNMP and other management data
- Query hardware abstraction layer APIs
- Inventory file systems
- Scan the Windows Registry and analogous Unix configuration files
- Introspect database catalogs
- Introspect component APIs (e.g. COM/CORBA/Web services)
- Interrogate more specialized element management subsystems (e.g. message queuing managers, other agent-based frameworks)
- Monitor file handles
- Monitor process table & lower level OS structures
- Monitor network sockets
- Intercept SQL calls
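To make one of the less intrusive options above concrete, here is a hedged sketch of the "inventory file systems" approach: walk a directory tree and record each file's relative path and size, the raw material an agent would report back to a CMDB. The demonstration runs against a throwaway temporary tree rather than a real server; paths are illustrative.

```python
import os
import tempfile

def inventory(root):
    """Return sorted (relative_path, size_in_bytes) pairs for files under root."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            found.append((os.path.relpath(full, root), os.path.getsize(full)))
    return sorted(found)

# Demonstrate on a disposable tree rather than a production file system.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "bin"))
    with open(os.path.join(root, "bin", "httpd"), "wb") as f:
        f.write(b"\x7fELF")
    result = inventory(root)
```

Real agents layer fingerprinting on top of this raw walk (matching file names and checksums against a software catalog), but the extraction step itself is this simple.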
Beyond direct interaction with the managed resource, many tools also have input/output capabilities so that other solutions can be used for the sensitive initial extraction of configuration data from production systems. Evaluating the robustness of the product’s integration approach may be an important consideration.
An important related family of tools derives information about systems by parsing source code and deriving dependencies that way. Integrating both approaches would be very beneficial, and the ITIL spec actually calls for this.
Finally, one of the hardest aspects of discovery is dependency mapping. It's only a partial success to inventory all the elements on server A; the real challenge (and value proposition) is in discovering that other elements on server B are dependent on parts of server A. Relicore, for example, does this through sophisticated scanning of TCP/IP sockets.
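In the spirit the post ascribes to socket-based scanners, here is a toy sketch of deriving cross-server dependency edges from observed TCP connections. The hostnames, ports, and observation format are invented for illustration.

```python
def dependency_edges(connections):
    """Collapse raw (client, server, port) observations into unique dependency edges."""
    return sorted({(client, server, port) for client, server, port in connections})

observed = [
    ("web01", "app01", 8080),
    ("web01", "app01", 8080),   # repeated observations collapse to one edge
    ("app01", "db01", 1521),
]
print(dependency_edges(observed))
```

The hard part in practice is not this reduction but the observation itself: capturing the connection table often enough, on enough hosts, to be trusted.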
This is an area where the industry truly is in its infancy. It is becoming recognized that manually populating CMDB dependencies is very time consuming and many products tout their “automated discovery.” Trouble is, without some extremely sophisticated programming, many types of dependency simply cannot be discovered.
For example, consider an end to end supply chain process that involves a manufacturing control system A, warehousing system B, order fulfillment system C, and general ledger system D tied together through file transfers, bulk database loads, distributed object calls, and message oriented middleware. There is simply no discovery tool out there that can comprehensively analyze the different dependency types used and automatically generate a map showing the data flow dependencies from A to B to C to D. If you think you have one, drop me a line.
So, given such a case as the impossible high end, what CAN your proposed vendor’s dependency analysis do? From what I’ve seen, Relicore, Collation, and Appilog (now owned by Mercury) are the leaders in the runtime space, with tools like Cast, Blue Phoenix, and Asetechs covering the source code analysis space. I expect convergence here.
One emerging approach in this space is the use of linguistic analysis to infer dependencies, using a confidence-ranking approach (à la Web search engines). It remains to be seen whether this non-deterministic approach will succeed in the marketplace, although for some classes of problems it may be the only way to generate the dependency information required.
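A speculative sketch of what such a confidence-ranked linguistic match might look like: score a discovered artifact's name against known application names and emit a ranking rather than a yes/no answer. The artifact and application names are invented; string similarity stands in for whatever richer linguistic model a vendor would actually use.

```python
import difflib

def rank_candidates(artifact: str, applications: list) -> list:
    """Return (application, confidence) pairs, best match first."""
    scored = [(app, difflib.SequenceMatcher(None, artifact.lower(), app.lower()).ratio())
              for app in applications]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

apps = ["OrderFulfillment", "GeneralLedger", "Warehousing"]
print(rank_candidates("ordfulfil_batch.sh", apps))
```

The output is a ranked guess, not a fact, which is precisely the non-determinism the marketplace will have to get comfortable with.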
Versioning: A classic question from the CASE tool market: what is the CMDB’s approach to versioning and audit trail? Can you identify configurations as of any point in time? When the data store’s audit trails become unmanageable, can you baseline to collapse some of this point in time data down and recover space/performance? This is one of the more architecturally challenging aspects of repository design.
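The as-of and baselining requirements can be illustrated with a toy in-memory model: every change appends a timestamped version, "as of" queries replay history, and a baseline collapses older versions to recover space. Field names and the integer timestamps are invented for illustration.

```python
class VersionedCI:
    def __init__(self):
        self.history = []  # list of (timestamp, attributes), ascending by timestamp

    def record(self, timestamp, attributes):
        self.history.append((timestamp, dict(attributes)))

    def as_of(self, timestamp):
        """Return the attributes in effect at `timestamp`, or None if none yet."""
        state = None
        for ts, attrs in self.history:
            if ts <= timestamp:
                state = attrs
        return state

    def baseline(self, timestamp):
        """Collapse all versions up to `timestamp` into a single starting record."""
        kept = self.as_of(timestamp)
        self.history = ([(timestamp, kept)] if kept else []) + \
                       [(ts, a) for ts, a in self.history if ts > timestamp]

ci = VersionedCI()
ci.record(1, {"os": "Solaris 8"})
ci.record(5, {"os": "Solaris 9"})
ci.baseline(3)   # audit trail before t=3 collapses to one record
```

Baselining is the trade-off in miniature: queries before the baseline point lose their detail, which is exactly why collapsing the audit trail is an architectural decision and not just housekeeping.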
Visualization approach: Does the tool generate dependency maps automatically, or must they be created manually? If the latter, it again is moving into an architecture/modeling space.
Data approach: With the exception of Troux, most CMDB vendors have no story in data management. I predict this will change, as regulatory drivers such as Visa CISP, Sarbanes-Oxley, and customer privacy increasingly require the cross-referencing of data dictionaries with configuration management. What servers have sensitive credit card data on them? You need a CMDB integrated with a data dictionary to answer such a question.
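The credit card question above reduces to a join between two stores: a data dictionary saying which databases hold sensitive columns, and a CMDB saying which servers host which databases. A hedged sketch, with all database, server, and column names invented:

```python
data_dictionary = {
    "orders_db":  ["customer_name", "card_number"],
    "hr_db":      ["employee_name", "salary"],
    "catalog_db": ["sku", "price"],
}
cmdb_hosting = {
    "db01": ["orders_db", "catalog_db"],
    "db02": ["hr_db"],
}

def servers_with_column(column: str) -> list:
    """Answer: which servers host a database containing the given column?"""
    sensitive_dbs = {db for db, cols in data_dictionary.items() if column in cols}
    return sorted(server for server, dbs in cmdb_hosting.items()
                  if sensitive_dbs.intersection(dbs))

print(servers_with_column("card_number"))
```

Neither store can answer the question alone, which is the whole argument for integrating the two.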
In sum: The term CMDB is getting overloaded to the point of uselessness, a problem we’ve seen before in the faddish technology market. Knowing that a product claims to have a “CMDB” is about as useful as knowing it’s based on a “relational database” or a “client-server architecture.” Ask the hard questions!
10/29/2004 note. If you are interested further in this and related topics, see the following:
ITSMF 2004 review
ERP for IT article
A System that Manages our Systems?
The rise and resurrection of enterprise metadata: repository as CMDB
ITIL Application Management Volume review, part one and part two.
DCML Framework critique
Metadata + management framework: the basis for ERP for IT
IT lawyer calls for "Data Map" to manage privacy
A CMDB rant
A metadata rant
IT portfolios, service catalogs, and enterprise architecture
IT reporting: Why it's the same, why it's different
A story of too many tools, part one and part two
Model Driven Configuration Management