Saturday, February 16, 2008

My daughter asked me to explain my job for her class’s career day. I did some research and made some interesting discoveries. There are a surprising number of jobs that carry this title. I’ve seen openings for a “CTO” whose responsibilities include maintaining servers and managing the help desk. One firm had a CTO who was chartered to run a group of developers. So, what should a CTO do, and when does a company actually need one?
Let’s begin by talking about what a CTO should not do. The CTO should not manage developers. The head of development spends his or her time keeping the development team on track against a set of product plans. Inside the development organization, this Director attends to staffing, training, workload and productivity metrics, budget, and scheduling. Working with the customer organizations, the Director keeps up to date on shifting priorities, changes in product requirements, and new opportunities the development team may be asked to support. This is a full-time job. The performance plan for the Director of Development is quite simple: deliver high-quality programs that meet or exceed customer requirements, on time and within budget.
A CTO should not manage a hardware team or an infrastructure group. The CTO might have a lab (for test purposes, not production or QA), but the CTO does not own a production facility and should not be measured against that criterion. Functional strategies (productivity, headcount, floor space, training, power and cooling, etc.) should rest with a COO; the CTO’s is a research and advanced-technology discipline in the strategic planning domain.
The Chief Technology Officer matches new technological capabilities with business needs, and documents that match so the business can decide whether or not to use the new technology. The CTO is not an advocate, but a strategic planner and thinker. A business that sells information technology uses the CTO to articulate how new technology can address the business needs of its prospects, so that CTO needs to understand the firm’s capabilities and something of the business processes of the firm’s target market. A business that uses information technology needs its CTO to select potentially useful new technologies for its internal business processes; that CTO should understand a good deal about a broad range of new technologies and must have a deep sense of the business’s core processes and goals. In either case the CTO must remain unbiased, understanding both the abstract potential a new technology might offer and the underlying architecture of the firm’s business processes.
The CTO must have a high degree of professional integrity – there will be times when the CTO will be the only person that the senior leadership team can turn to for an unbiased and well-grounded assessment of a potentially valuable new technology. A vendor CTO whose primary function is outbound marketing does a disservice to the vendor for whom he or she works. A user CTO whose bias is towards always trying new things adds no value to the firm looking for a sustainable, cost-effective competitive edge.
Consider how firms today confront Web 2.0 – the combination of blogs, wikis, and social networking technologies sprouting up. A user organization that wants to interact with consumers may already be all in. Coca-Cola runs over 500 web sites for consumers, and sponsors videos on YouTube; even IBM has space in Second Life. Other firms may shy away from the uncontrolled side of these technologies. Publicly traded firms and others facing regulatory scrutiny may fear the consequences of an unguarded comment on a quasi-official channel, and rather than manage that risk they opt to deny employees the ability to participate at all. Of course, this draconian measure does not work; employees can blog under another name, or contribute to a wiki pseudonymously. A CTO would look at the potential strengths and liabilities of each medium and present the firm with a view of the potential benefits (closer interaction with customers and partners), costs (incremental IT investment, potential lost productivity on other tasks by bloggers), and risks (uncensored commentary reaching the public). The CTO’s performance plan is simple: evaluate potentially useful new technologies for the executive leadership team, showing how they might fit into specific business processes to the firm’s benefit.
Could that job be done today by another function within the organization? The IT project office might render an opinion about investing in Web 2.0, but that could be characterized as self-serving. The marketing department might argue that Web 2.0 will give them a competitive edge, but that could be marginalized as just the goofy marketing guys wanting more toys to play with. Without a CTO, these organizations might choose to spend money covertly to test the technology, potentially placing the organization in jeopardy. The CTO alone must offer an unbiased, insightful analysis of the potential of the new technology.
How does the CTO improve? A good CTO isn’t just lucky, although never underestimate the value of good luck. Rather, a good CTO describes the environment in which the new technology may fit, and then defines how that fit might occur. If the projection is correct, the CTO celebrates. But if it’s wrong, the CTO has solid documentation to review. Using that documentation, the CTO can learn which element of the current environment he missed or mischaracterized, or which step in the chain of reasoning was flawed. Through this process of self-evaluation and learning, a good CTO gets better over time.
Some companies need a CTO more than others. Firms that tend to adopt leading-edge technology not only need a CTO to understand the capabilities on offer (most vendors of leading-edge tools don’t know what they are actually for), but also need other processes to manage that raucous environment. The firm’s purchasing department needs to understand how to negotiate with start-ups. The firm’s development team must be able to integrate primitive, early-stage technologies. The firm’s operations area may have to cope with poorly documented, unstable products. But the benefit could include being the first to open and capture a new market.
Companies that deal with established major vendors will spend much less time and effort dealing with these teething pains. But they will have to wait. Microsoft’s Internet Explorer was years behind Netscape. Some of the firms that jumped on Netscape early established dominance over their target markets – eBay and Amazon.com, for instance. In both of those companies’ cases, the CTO was the CEO. Sam Walton’s vision of a frictionless supply chain drove Wal-Mart’s very early use of e-commerce with its suppliers, predating the term by a decade or more. Middle-of-the-pack firms don’t leverage their CTO much; they use him for insurance, not strategic planning.
Lagging companies adopt technology after the market figures out its parameters. These firms try to grab a bit of profit by squeezing in under the dominant players’ margins – selling hardware more cheaply than Dell, or audit services at lower rates than the Big Four. Picking up nickels in front of a steamroller is a dangerous game: larger vendors will always be willing to sacrifice a few margin points to protect market share, so a successful laggard risks extinction. Trailing-edge firms don’t need a CTO; they need a sharp financial team.
So my daughter got more than she expected, and her class got a peek at how the various functions in a strong, self-aware corporation align with the firm’s goals and vision. How does your firm use its CTO? How might it?
Friday, February 1, 2008
PCI DSS Class Thoughts
On Thursday, January 24, the New Jersey ISACA chapter held a class on the Payment Card Industry Data Security Standard (PCI DSS), which I taught. Thirty-five people attended. Most were IT auditors, some were in information security roles, and a few were educators or administrative staff. The goal of the class was to give the attendees a clear understanding of the history of the standard, what it means now, what forces will most likely drive its development, and what it could become in the future.
The standard came about as a result of the efforts of the then-CISO at Visa, whom I’ll name if he wishes. In the late 1990s he was concerned that merchants weren’t protecting their customers’ credit and debit card data sufficiently, so he floated the idea that merchants should follow a code of good practice: use a firewall, use anti-virus software and keep it current, encrypt card data both when it’s stored and when it’s in flight, restrict access to systems that process card data, have a security policy that informs people that they should keep card data safe, and so on.
The idea caught on, and in 2000 Visa announced its Cardholder Information Security Program (CISP). Shortly thereafter MasterCard, American Express, Discover, and the rest all launched their own versions of the standard. At that point merchants became dismayed that they would have to follow a handful of similar standards, with annual inspections from each, so the various payment card firms banded together into the Payment Card Industry (PCI) Security Standards Council, and a first unified standard was released in January 2005.
The threat landscape continues to evolve rapidly. In the 1990s merchants were worried that a hacker might capture a single card in transit. Now the bad guys can hire a botnet to scan millions of firms for vulnerabilities. The Atlanta-based start-up Damballa maintains statistics on botnets, and they are frightening. At present more than 1 in 7 PCs on the Internet is infected with some form of malware. The Storm botnet seems to have over 50 million zombies (Internet-connected PCs that are receiving and responding to commands from its control infrastructure). Estimates vary, but there are now about 800 million PCs connected to the Internet, with the total expected to pass 1 billion machines by 2010.
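A quick back-of-envelope check puts those estimates in perspective. This is a sketch only; the figures are the rough numbers quoted above, not measurements:

```python
# Rough figures quoted above (2008-era estimates, not measurements).
pcs_online = 800_000_000       # PCs connected to the Internet
infection_rate = 1 / 7         # share of PCs carrying some form of malware
storm_zombies = 50_000_000     # high-end estimate of the Storm botnet's size

infected = pcs_online * infection_rate
print(f"Implied infected PCs: {infected:,.0f}")                   # ~114 million
print(f"Storm's share of those: {storm_zombies / infected:.0%}")  # ~44%
```

On those assumptions, Storm alone would account for nearly half of all infected machines, which suggests the 50-million figure sits at the aggressive end of the estimates.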
Traditional information security measures are necessary but not sufficient. Someone once said that using basic information security was like putting a locking gas cap on your car. It may slow someone down, but it won’t keep a determined thief from punching a hole in your tank and draining the gas out. While that is true, for a long time we took a modicum of comfort in the thought that a thief in a hurry would see the locking gas cap and move on to the next car. But in this new threat model, the thieves use stealthy automation, have lots of time, and need almost no effort to undetectably siphon off sensitive data from everyone.
Now there is a whole industry around this standard: about 1,400 merchants globally are so large that they must have annual examinations. There are dozens of firms certified to perform those exams, and another slew of firms certified to perform the quarterly scans the standard requires. The PCI council certifies both examiners and scanning firms. Note that it doesn’t certify products; it certifies a company’s skill and methodology. So if a scanning vendor uses tool A for certification and then switches to tool B, it needs to be re-certified.
Certification is valid for one year only. But certification doesn’t guarantee that a merchant won’t get ripped off. TJX suffered the largest breach known so far, with 94 million credit and debit cards stolen. During the 17 months that the bad guys were prowling around TJX’s systems, the firm successfully passed two full examinations and five quarterly scans, all performed by large and reputable vendors. The exam is an audit, not a forensic investigation. And the bad guys are more persistent, diligent, and motivated than the examiners. Some firms believe that since they passed an exam, they must be secure. All that passing the test means is that the firm is meeting minimum requirements. Creative, persistent, diligent information security measures, proactively applied by the firm itself, are the only way any firm will have a chance of finding the bad guys and shutting them down.
The class helps firms that handle credit and debit cards understand their obligations under the standard, but more importantly what additional measures they might take to avoid bad things happening. We look at the TJX breach in depth, reconstructing the apparent chain of events to highlight the tenacity and dedication of the bad guys. Remember that information security is entirely about economics: if the value of the information is greater than the cost of getting it, the information is not secure. For more on the economics of information security, check out the Workshop on the Economics of Information Security (WEIS).
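That rule of thumb is simple enough to state in a few lines of code. This is a toy illustration, not material from the class; the names and dollar figures are hypothetical:

```python
# Toy model of the economic rule above: information stays secure only
# while the attacker's cost exceeds the value of the information.
def is_secure(value_to_attacker: float, cost_of_attack: float) -> bool:
    return cost_of_attack > value_to_attacker

# Hypothetical merchant: 100,000 stored card numbers worth ~$10 each on
# the black market, versus an estimated $50,000 cost to mount the attack.
print(is_secure(value_to_attacker=100_000 * 10, cost_of_attack=50_000))
# -> False: the data is worth $1,000,000, so a $50,000 attack pays off 20x.
```

The defender’s levers are exactly those two inputs: store less card data to shrink its value to a thief, or apply controls like the standard’s to raise the attacker’s cost.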
If you use a credit card, be alert for small but unexpected charges. The thieves can get a million dollars just as easily by taking one dollar from each of a million users as by taking ten thousand dollars from each of one hundred; the difference is that nobody complains about losing a buck. The thieves are evolving into endemic, chronic, annoying parasites. Being a 21st-century cyber-crook may not be glamorous, but it is lucrative, low-risk, steady work.