Sunday, March 30, 2008

Black Bean Soup

This soup is wonderful for a cold evening – and it is very easy to prepare. The crushed pineapple helps soften the beans, and the flavor blends in completely. The beans improve after a night in the fridge. To re-heat, melt a pat of butter in a heated pan, add some water, and bring to a gentle boil. The butter adds a nice flavor and will keep the beans from sticking to the pan, as well. Remember that any starch absorbs salt, so correct the seasonings.

Ingredients:

Dried black beans, 1 # bag
Butter, 1 tsp
Olive oil, 1 Tbsp
Small sweet onion in small dice
Garlic, 1 Tbsp, minced
Salt, 1 tsp
Crushed pineapple, drained, 1 oz

Method:

Sort beans and soak overnight. (Some may think this is excessive but I find it makes the soup creamier.)
Heat sauce pan, then add oil and butter. Clarify garlic and onion. Add drained black beans and stir thoroughly over medium-high heat till any remaining water is evaporated. Add 1 tsp salt and crushed pineapple.
Just before beans begin to sizzle, add 8 cups water and reduce heat to medium. Let the beans cook until softened, about 2 hours, stirring occasionally. You can crush the beans a bit with a wooden spoon, but the stirring should be enough.

Serve with a dollop of sour cream or plain yogurt.

Saturday, February 16, 2008

Daddy, What Does a Chief Technology Officer Do?

My daughter asked me to explain my job for her class’s career day. I did some research, and made some interesting discoveries. There are a surprising number of jobs that carry this title. I’ve seen openings for a “CTO” whose responsibilities include maintaining servers and managing the help desk. One firm had a CTO that was chartered to run a group of developers. So, what should a CTO do, and when does a company actually need one?

Let’s begin by talking about what a CTO should not do. The CTO should not manage developers. The head of development spends his or her time working to keep the development team on track against a set of product plans. Inside the development organization, this Director attends to staffing, training, workload and productivity metrics, budget, and scheduling. Working with the customer organizations, the Director keeps up to date on shifting priorities, changes in product requirements, and new potential opportunities that the developers may need to supply. This is a full time job. The performance plan for the Director of Development is quite simple: Deliver high quality programs that meet or exceed customer requirements on time and within budget.

A CTO should not manage a hardware team or an infrastructure group. The CTO might have a lab (for test purposes, not production or QA). But the CTO does not own a production facility and should not be measured against that criterion. Functional strategies (productivity, headcount, floor space, training, power and cooling, etc.) should rest with a COO; the CTO's role is a research and advanced-technology discipline in the strategic planning domain.

The Chief Technology Officer matches new technological capabilities with business needs, and documents that match so the business can decide whether or not to use the new technology. The CTO is not an advocate, but a strategic planner and thinker. A business that sells information technology uses the CTO to articulate how the new technology can address business needs for its prospects, so that CTO needs to understand his firm's capabilities and something of the business processes of his firm's target market. A business that uses information technology needs its CTO to select potentially useful new technologies for its internal business processes; that CTO should understand a good deal about a broad range of new technologies and must have a deep sense of the business's core processes and goals. In either case the CTO must remain unbiased, understanding the abstract potential that a new technology might offer and knowing the underlying architecture of the firm's business processes.

The CTO must have a high degree of professional integrity – there will be times when the CTO will be the only person that the senior leadership team can turn to for an unbiased and well-grounded assessment of a potentially valuable new technology. A vendor CTO whose primary function is outbound marketing does a disservice to the vendor for whom he or she works. A user CTO whose bias is towards always trying new things adds no value to the firm looking for a sustainable, cost-effective competitive edge.

Consider how firms today confront Web 2.0 – the combination of blogs, wikis, and social networking technologies sprouting up. A user organization that wants to interact with consumers may already be all in. Coca-Cola runs over 500 web sites for consumers, and sponsors videos on YouTube; even IBM has space on Second Life. Other firms may shy away from the uncontrolled side of these technologies. Publicly-traded firms and others facing regulatory scrutiny may fear the consequences of an unguarded comment on a quasi-official channel, and rather than manage that risk they opt to deny employees the ability to participate at all. Of course, this draconian measure does not work; employees can blog under another name, or contribute to a wiki pseudonymously. A CTO would look at the potential strengths and liabilities of each medium and present the firm with a view of the potential benefits (closer interaction with customers and partners), costs (incremental IT investment, potential lost productivity on other tasks by bloggers), and risks (uncensored commentary reaching the public). The CTO's performance plan is simple: to evaluate potentially useful new technologies for the executive leadership team – showing how they might fit in specific business processes to the firm's benefit.

Could that job be done today by another function within the organization? The IT project office might render an opinion about investing in Web 2.0, but that could be characterized as self-serving. The marketing department might argue that Web 2.0 will give them a competitive edge, but that could be marginalized as just the goofy marketing guys wanting more toys to play with. Without a CTO, these organizations might choose to spend money covertly to test the technology, potentially placing the organization in jeopardy. The CTO alone must offer an unbiased, insightful analysis of the potential of the new technology.

How does the CTO improve? A good CTO isn’t just lucky, although never underestimate the value of good luck. Rather, a good CTO describes the environment in which the new technology may fit, and then defines how that fit might occur. If the projection is correct, the CTO celebrates. But if it’s wrong, the CTO has solid documentation to review. By using that documentation, the CTO can learn which element of the current environment he missed or mis-characterized, or what step in the chain of reasoning was flawed. Through this process of self-evaluation and learning, a good CTO gets better over time.

Some companies need a CTO more than others. Firms that tend to adopt leading edge technology not only need a CTO to understand the capabilities on offer (most vendors of leading edge tools don’t know what they are actually for), but they need other processes to manage that raucous environment. The firm’s purchasing department needs to understand how to negotiate with start-ups. The firm’s development team must be able to integrate primitive, early-stage technologies. The firm’s operations area may have to cope with poorly documented, unstable products. But the benefit could include being the first to open and capture a new market.

Companies that deal with established major vendors will spend much less time and effort dealing with these teething pains. But they will have to wait. Microsoft's Internet Explorer was years behind Netscape. Some of the firms that jumped on Netscape early established dominance over their target market – eBay and Amazon.com, for instance. In both of those companies' cases, the CTO was the CEO. Sam Walton's vision of a frictionless supply chain drove Wal-Mart's very early use of e-commerce (predating the term by a decade or more) with its suppliers. Middle-of-the-pack firms don't leverage their CTO much; they use him for insurance, not strategic planning.

Lagging companies adopt technology after the market figures out its parameters. These firms try to grab a bit of profit by squeezing in under the dominant player’s margins – selling hardware more cheaply than Dell, or audit services at lower rates than the Big Four. Picking up nickels in front of a steam-roller is a dangerous game. Larger vendors will always be willing to sacrifice a few margin points to protect market share, so a successful laggard risks extinction. Trailing-edge firms don’t need a CTO; they need a sharp financial team.

So my daughter got more than she expected, and her class got a peek at how the various functions in a strong, self-aware corporation align with the firm’s goals and vision. How does your firm use its CTO? How might it?

Friday, February 1, 2008

PCI DSS Class Thoughts

On Thursday, January 24, the New Jersey ISACA chapter held a class on the Payment Card Industry Data Security Standard (PCI DSS), which I taught. Thirty-five people attended. Most were IT auditors, some were in information security roles, and a few were educators or administrative staff. The goal of the class was to give the attendees a clear understanding of the history of the standard, what it means now, what forces will most likely drive its development, and what it could become in the future.

The standard came about as a result of the efforts of the then-CISO at Visa, who I'll name if he wishes. In the late 1990s he was concerned that merchants weren't protecting their customers' credit and debit card data sufficiently, so he floated the idea that merchants should follow a code of good practice: use a firewall, use anti-virus software and keep it current, encrypt card data both when it's stored and when it's in flight, restrict access to systems that process card data, have a security policy that informs people that they should keep card data safe, and so on.

The idea caught on and in 2000 Visa announced its Cardholder Information Security Program (CISP). Shortly thereafter, MasterCard, American Express, Discover, and the rest all launched their own versions of the standard. At that point merchants became dismayed that they would have to follow a handful of similar standards, with annual inspections from each, so the various firms providing payment cards banded together into the Payment Card Industry Security Council, which released its first standard in January 2005.

The threat landscape continues to evolve rapidly. In the 1990s merchants were worried that a hacker might capture a single card in transit. Now the bad guys can hire a botnet to scan millions of firms for vulnerabilities. The Atlanta-based start-up Damballa maintains statistics on botnets, and they are frightening. At present more than 1 in 7 PCs on the Internet is infected with some form of malware. The Storm botnet seems to have over 50 million zombies (Internet-connected PCs that are receiving and responding to commands from its control infrastructure). Estimates vary but there are now about 800 million PCs connected to the Internet, with the total expected to pass 1 billion machines by 2010.
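For a sense of scale, here is a quick back-of-the-envelope calculation in Python; it simply multiplies the rough estimates quoted above and is no substitute for Damballa's actual figures.

```python
# Back-of-the-envelope estimate of infected machines, using the
# rough 2008-era figures quoted above (about 1 in 7 PCs infected,
# roughly 800 million PCs online). Estimates, not hard data.

pcs_online = 800_000_000      # estimated Internet-connected PCs
infection_rate = 1 / 7        # roughly 1 in 7 infected with malware

infected = pcs_online * infection_rate
print(f"Estimated infected PCs: {infected:,.0f}")   # ~114 million
```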

Traditional information security measures are necessary but not sufficient. Someone once said that using basic information security was like putting a locking gas cap on your car. It may slow someone down, but it won’t keep a determined thief from punching a hole in your tank and draining the gas out. While that is true, for a long time we took a modicum of comfort in the thought that a thief in a hurry would see the locking gas cap and move on to the next car. But in this new threat model, the thieves use stealthy automation, have lots of time, and need almost no effort to undetectably siphon off sensitive data from everyone.

Now there is a whole industry around this standard: about 1,400 merchants globally are so large that they must have annual examinations. There are dozens of firms that are certified to perform those exams, and another slew of firms that are certified to perform the quarterly scans the standard requires. The PCI council certifies both examiners and scanning firms. Note that they don’t certify products; they certify a company’s skill and methodology. So if a scanning vendor uses tool A for certification and switches to tool B, they need to be re-certified.

Certification is valid for one year only. But certification doesn’t guarantee that a merchant won’t get ripped off. TJX suffered the largest breach known so far, with 94 million credit and debit cards stolen. During the 17 months that the bad guys were prowling around TJX’s systems, the firm successfully passed two full examinations and five quarterly scans, all performed by large and reputable vendors. The exam is an audit, not a forensic investigation. And the bad guys are more persistent, diligent, and motivated than the examiners. Some firms believe that since they passed an exam, they must be secure. All that passing the test means is that the firm is meeting minimum requirements. Creative, persistent, diligent information security measures, proactively applied by the firm itself, are the only way any firm will have a chance of finding the bad guys and shutting them down.

The class helps firms that handle credit and debit cards understand their obligations under the standard, but more importantly what additional measures they might take to avoid bad things happening. We look at the TJX breach in depth, reconstructing the apparent chain of events to highlight the tenacity and dedication of the bad guys. Remember that information security is entirely about economics: if the value of the information is greater than the cost of getting it, the information is not secure. For more information about the economics of information security, check out the Workshop on the Economics of Information Security (WEIS).

If you use a credit card, be aware of small but unexpected charges. The thieves can get a million dollars just as easily by taking one dollar from each of a million users as they can from taking ten thousand dollars from each of one hundred users. The difference is that nobody complains about losing a buck. The thieves are evolving into endemic, chronic, annoying parasites. Being a 21st century cyber-crook may not be glamorous, but it is lucrative, low risk, steady work.
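The equivalence in that example is easy to verify; a trivial sketch using the figures above:

```python
# Two ways to steal a million dollars, per the example above.
small_skim = 1 * 1_000_000        # $1 from each of a million cardholders
big_hit    = 10_000 * 100         # $10,000 from each of a hundred victims

assert small_skim == big_hit == 1_000_000
print(small_skim, big_hit)        # both 1000000
```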

Sunday, January 20, 2008

5, 6, 8, 12, 19, 23

The first problem with winning the Powerball lottery is figuring out who to tell first. My college buddy? An old girlfriend? My boss? Of course, since the grand prize is $312.5 million, it won't be long before just about everyone who has ever met me, seen me, heard me, read something I wrote, or received my business card becomes one of my closest and dearest friends. Very soon I'll start hearing about unique business opportunities. I'll learn a great deal about the importance of having adequate life insurance. I'll have to change my phone numbers, and not list the new ones.

It turns out there were nineteen other winning tickets – twenty in all! So each ticket is worth only $15.6 million, or about $781,000 per year for twenty years. After taxes that's about $470,000. The ex gets half, so I'm down to $235,000. It's hardly worth turning the ticket in.
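For anyone checking the math, here is the chain of divisions as a small Python sketch; the 40% combined tax rate is my own rough assumption, chosen to match the figures above.

```python
# Rough payout arithmetic from the paragraph above.
jackpot = 312_500_000
winning_tickets = 20              # twenty winning tickets in all
years = 20
tax_rate = 0.40                   # rough combined tax assumption

per_ticket = jackpot / winning_tickets          # $15.625 million
per_year = per_ticket / years                   # ~$781,250
after_tax = per_year * (1 - tax_rate)           # ~$468,750
after_ex = after_tax / 2                        # ~$234,375

print(f"${per_year:,.0f} -> ${after_tax:,.0f} -> ${after_ex:,.0f}")
```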

On the plus side, I’ll be able to pay off my debts, and get the car fixed. It’s time for a new car, anyway. And I’ll be able to get to St. Warm for a long weekend in the sun. I haven’t had a real vacation for years. I’ll bring the kids – they will have a great time. They both like fresh fish, and love to swim.

I can make up for the lame presents I was able to get them last Christmas. They both want computers, and now I can get them the laptops they’ve picked out on-line. Birthdays will be bountiful this year! Better, they will have their college all set.

I hope they don’t get spoiled.

[Postscript: This is a work of fiction. I have never won the lottery. In fact, I don't know anyone who has. Statistically speaking, I never will. This fantasy was intended to play with the idea of winning the lottery, and I hope it was enjoyable.]

Saturday, January 12, 2008

Shall I Check the Tires, Sir?

Some of us may recall the days of full service gas stations. For those who don’t, take a look at the scene in “Back to the Future” where Marty (Michael J. Fox) watches a car pull into the Texaco station in his home town in the 1950s. The attendants leap into action – one checks the oil, another pumps the gas, a third washes the windshield, and a fourth checks the tire pressure.

Why does the tire pressure matter? An underinflated tire experiences higher rolling resistance. This excess friction generates excess heat in the tread. This has three consequences. First, excess heat increases wear – the tire gets old faster. Second, excess heat compromises traction. Finally, underinflated tires use more gas. The difference is significant. By raising the tire pressure from 24 psi to 30 psi the car's mileage will improve by 3% to 4%. See the US Department of Energy site on fuel economy here. And most cars are not running at the correct tire pressure. To verify this, check the pressure on the next rental car you use. You will find that the tires are usually low. This increases ride comfort – most Americans like a soft, squishy ride. The rental car companies don't care – the cost of replacing tires is part of normal maintenance and already figured into their operating expense. Most users refill the gas tank rather than pay the high charge the rental car companies impose.
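To put the 3% to 4% into dollars, here is a rough sketch for a hypothetical driver; the annual mileage, fuel economy, and gas price are illustrative assumptions of mine, not figures from the DOE site.

```python
# Illustrative annual savings from proper tire inflation.
# All inputs are hypothetical assumptions for a typical driver.
miles_per_year = 12_000
mpg_underinflated = 25.0
fuel_price = 3.00                 # dollars per gallon (2008-ish assumption)
improvement = 0.035               # 3.5%, midpoint of the 3-4% range above

gallons_before = miles_per_year / mpg_underinflated
gallons_after = miles_per_year / (mpg_underinflated * (1 + improvement))
savings = (gallons_before - gallons_after) * fuel_price
print(f"Roughly ${savings:.0f} saved per year")   # about $49
```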

Three to four percent may not seem like much, but that matches the total contribution that the Arctic National Wildlife Refuge would provide should it be exploited to capacity. But more pragmatically, what can individuals do? By checking tires, each of us can benefit individually by spending a little bit less for fuel, driving with a little bit more safety, and having the tires last a little bit longer. Could manufacturers do anything? Yes, and they already have. Many newer model cars have tire pressure sensors built into the rims, so the driver doesn't have to stop at a service station and scuttle around with a tire pressure gauge, getting road dirt on fingers and clothing. Should newer models have a warning light to alert the driver? Should States require tire pressure checks as part of the annual safety inspection? Or should responsibility remain with the car owner, as sovereign?

This is a particularly interesting test case in that the benefits to the individual and to society are perfectly aligned. By keeping tires at the optimal running pressure, the individual gets a safer, longer lasting, more economical car, and society gets safer traffic and reduced fossil fuel consumption. The only losers in the bargain are the tire manufacturers, who sell fewer replacement tires, and the gas companies, who sell less gasoline. Tire manufacturers like being known for safe, long-lasting, economical tires, and all of them – Goodyear and Michelin, for example – offer tips to improve these qualities. Tire manufacturers grade their tires on three parameters: wear, traction, and temperature resistance. The US Department of Transportation describes this grading system here.

States are free to determine whether to inspect cars for safety, emissions, or neither, and how frequently – annually, on sale only, or at some other interval. About ten states require emissions testing only in metropolitan areas, such as Atlanta, GA, which helpfully summarizes inspection programs nationally here.

A tire pressure gauge is inexpensive. Serviceable models cost under $5 at any car parts store; top of the line digital models cost $15 or so. They fit in the glove compartment. Checking the tire pressure takes a few minutes and will save a few dollars.

Thursday, December 13, 2007

The Way of the Dinosaur

With the cry, “The mainframe is dead!” the PCs entered the mainstream corporate computing world. Mainframes were dinosaurs, and their inevitable extinction was the next page in their history. But let’s look at this analogy more deeply.

The dinosaurs became extinct following a global environmental catastrophe. There was no war between the mammals and the dinosaurs, and the mammals did not out-compete the dinosaurs in any ecological niche. An external event radically changed the environment, eliminating creatures that had successfully dominated the planet for some 180 million years. The dinosaurs had themselves weathered an earlier catastrophe, roughly 200 million years ago: only about half of all dinosaur species survived it, while a species that showed both mammalian and saurian characteristics failed, leaving true mammals to evolve in the shadow of the dinosaurs.

And so how does this inform our understanding of the competition between the PCs and the mainframes? The mainframes did dominate the corporate landscape for generations; they were big and capital-intensive. PCs provided an alternative processing mechanism initially for spreadsheets and ultimately many traditionally host-based processes. PCs learned how to connect for client/server computing and eventually to use the Internet for browser-based information access and analysis. So PCs competed successfully for a presence in a series of ecological niches that mainframes had once dominated.

But there is another environmental catastrophe forming. This shock will transform the computing landscape. The transformation is the green revolution. Fossil fuels are becoming a costly and undependable energy source. While the transformation to alternative energy sources is underway, conservation is now an imperative. Business will consider any measure to reduce energy consumption. Building design, telecommuting, and outsourcing all shift the energy burden away from the core business. IT consumes significant power. Measures that IT can take to reduce energy consumption get high marks, are scalable, and have measurable impact on a business’s overall energy use.

Personal computers consume a significant amount of energy. A desktop computer consumes 200 watts, and generates additional energy costs in removing its heat from the building environment. A thin client workstation – and the energy in the data center to support its computing – consumes 25 watts. Converting from desktop computing to thin client computing cuts energy costs by a factor of eight. While one physical server can host five to ten virtual servers (based on typical CPU utilization), that same physical server can host a hundred or more virtual desktops. Virtualization can reduce energy consumption by an order of magnitude or more in a medium to large enterprise.
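A minimal sketch of the fleet-level arithmetic, using the 200-watt and 25-watt figures above; the seat count, powered-on hours, and electricity rate are hypothetical assumptions, not measured data.

```python
# Fleet-level energy comparison using the per-seat wattages above.
# Fleet size, usage hours, and electricity cost are assumptions.
seats = 5_000                    # hypothetical mid-size enterprise
hours_per_year = 2_500           # roughly 50 weeks x 50 hours powered on
rate = 0.10                      # dollars per kWh, assumed

def annual_cost(watts):
    kwh = watts * hours_per_year * seats / 1_000
    return kwh * rate

desktop_cost = annual_cost(200)      # ~$250,000 per year
thin_client_cost = annual_cost(25)   # ~$31,250 per year
print(desktop_cost, thin_client_cost, desktop_cost / thin_client_cost)
```

The ratio works out to the factor of eight cited above, before counting the cooling load that the desktops also impose on the building.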

What other benefits and risks does the organization face when converting from desktop personal computers to thin clients? On the benefit side, the data remains in the data center, so data loss because of a desktop error does not happen. The firm can deploy comprehensive backup and disaster recovery mechanisms. No user’s data would be lost because they forgot to connect and run a backup job. Also, support and maintenance costs drop. The firm will not need to keep a spare inventory of parts for multiple generations of PCs, and the tech support staff will have all the diagnostic information in the data center. Users won’t have to bring their PCs to the help desk to get software installed, and they won’t have to run virus scans or software updates to stay secure or remain current. These processes can be built into the virtual desktop environment inside the data center. Information cannot be stolen from a thin client, since it does not leave the data center. No user can insert a USB drive and download files, or lose a laptop with a hard disk full of customer records.

What risks does a firm face when migrating towards virtual desktops? There are some applications that don’t play well when virtualized – heavy graphics and 3D modeling, for example. These need an unencumbered host with a huge amount of available capacity and may not render well across the link between the virtual desktop and the screen. If the user works at locations where connectivity is problematic, he may need the entire project on his laptop, a fat client device.

More importantly, when firms virtualize their networks they may not have as much visibility into the network activity between virtual desktops and servers. And, in this virtual network, they may not be able to track which users are where. Users' virtual desktops may move to balance load or recover from an interruption in service. Most importantly, the traditional reliance on a perimeter, whether for security, systems management, capacity planning, or compliance, vanishes in the virtual world. This requires clarity in defining business service level objectives. Policy cannot be embedded in network topology as it was in the 1980s and 1990s.

So the battle between the mainframes and the PCs may turn out a bit differently than that between the dinosaurs and the mammals. The impending environmental catastrophe threatens the power-hungry PCs, and the large hosts, which efficiently parcel out computing, storage, and bandwidth across a broad population of users, may prove to be the more adaptable and responsive creatures in this cyber landscape.

Friday, November 16, 2007

Varieties of Social Networking: FaceBook, LinkedIn, and YouTube

As a user of FaceBook and LinkedIn for a while now, I’ve been struck by the distinctiveness of each form. I’ve done some surfing through YouTube as well (Muffins!), thanks to my kids. I’ve reached two conclusions, one specific to the technology and a second broader one relevant to the impact media have on individuals and society.

LinkedIn has the feel of a distributed corporate address book: pre-classified categories of information and a fairly high degree of architectural rigidity. There's no API to attach additional applications, so there is no LinkedIn ecosystem of third party vendors or open source initiatives building around the LinkedIn community. Rather, it serves as a stand-alone application with a well-defined purpose and rigidly scoped value proposition. This leaves LinkedIn vulnerable to a potential successor whose capabilities could extend beyond LinkedIn's and therefore render it obsolete. It appears to be architecturally self-limited. I use LinkedIn frequently for a predefined purpose – locating colleagues, and most valuably tracking colleagues who have moved on to different positions. LinkedIn does a good job of maintaining a current address for professional colleagues. Note that Plaxo is getting stronger in this regard, although I do not use Plaxo for much more than e-mail address synchronization at present.

In some regard this narrowly scoped architectural governance echoes AOL's model. By limiting the number of participants in any chat room, and by limiting conversations to brief text, AOL maintained a high degree of control over the political and economic potential that chat rooms offered.

At the other extreme, YouTube and MySpace seem to be about individual broadcasting and mass market initiatives, as opposed to strengthening pre-existing social relationships. Friends do notify me about new content on YouTube or MySpace, and I take a look, but there is little structure or support for dialog or community in those environments as far as I can see. Active participants internalize the architecture and complete the social governance functions through self-policing; either a member chooses to conform to the norms of the group or they face direct challenges from group members that can erupt into flames. It is hard to imagine a flame war on LinkedIn.

As an analogy, LinkedIn and Plaxo seem to function like electronic directories, while YouTube and MySpace seem to function like electronic bulletin board services.

But what of FaceBook? Membership is open. Users can elect which groups they wish to join, although some groups require approval. Users can also start groups of their own. So if a user doesn’t get into a surfing group, he can start his own and build a competing environment. Groups are unlimited in size, and can exchange messages and e-mail, post photo, video, and music content, and most importantly can build additional applications on the FaceBook platform. This makes FaceBook an OS in the cloud. There is a substantial ecosystem of applications surrounding FaceBook today. Users can decide which content they post, which groups they join, and which applications they use. This three-dimensional experience makes life much more interesting for FaceBook users.

Within groups there are capabilities for governance and structure that permit communities to coexist without interference, and allow individuals to participate in multiple communities without friction. From a competitive perspective, FaceBook is stickier than other modes of social networking. This makes it a more attractive platform for highly differentiated marketing. Both Microsoft and Research in Motion have a strong interest in FaceBook, understanding the demographic of the FaceBook user. Microsoft sees a potentially valuable advertising channel, while in October 2007 RIM built a FaceBook application for the BlackBerry that is generating some buzz.

In contrast, neither Plaxo nor LinkedIn has a unique approach to integrating with the BlackBerry – it's just another phone – while MySpace and YouTube appear as undifferentiated video or internet content to any BlackBerry or other cell phone user. So FaceBook, through its multi-dimensional capabilities, adds value to other communications media.

From an advertising perspective, it makes sense that Coca-Cola would run 500 web sites and underwrite productions on MySpace and YouTube – the model is more one of mass broadcasting. It is hard to imagine Coca-Cola advertising effectively on LinkedIn.

The wider observation echoes Marshall McLuhan. While there are strong differences between the three categories of social networking, there are profound underlying similarities. Any new medium attracts huge attention on its initial appearance. Once it becomes part of the fabric of media choices, it tends to become invisible. During the 1920s, a new word appeared in the English language: “Phony.” It meant “something you heard on the phone.” People first confronting the new device noticed its distinctiveness and reacted with caution and mistrust. By the 1950s most homes had phones, but the teenagers of that era lived on them – while their parents failed to understand how anyone could talk on the phone for so long. (I have a live recording of the comedian Shelley Berman from the late 1950s in which he gets a huge laugh from his audience when he describes his daughter: bobby sox, poodle skirt, and a phone growing out of her ear.) And today cell phones have become fashion statements.

Social networking is a new medium. The distinctiveness that we perceive in these early days will become invisible as we adjust our sensorium to accommodate this new medium. People who use social networking will have more in common with each other than those who don't. Today the similarities between people who get their news from Fox, NBC, or The Daily Show are greater than the differences, just as the similarities between those who get their news from the New York Times, the New York Post, or the National Enquirer override the differences.

So the final lesson relates to media aggregation. As different media appeal to different types of people, media aggregation reveals much less synergy than might appear available. The work to translate content from one channel to another may yield no attractional value – no stickiness – when that revised content is eventually rendered in the new medium. If someone sponsored a full-length movie based on the “Muffins!” piece it might not win much share. The long range prospects for Cavemen (the sitcom based on characters from an ad campaign) seem grim.

Now if I can just figure out how to get news of this new post to my friends and colleagues on LinkedIn and FaceBook. So, tell me, how do you use different social networking technologies?

Tuesday, November 6, 2007

Food and Wine Service - Spaghetti Sauce Recipe

During my time with Gartner, some of us toyed with the idea of creating a subscription service to rate food and wine: Everything from restaurant reviews and wine tasting notes to recipes and memorable meals. I subscribed to Robert Parker's The Wine Advocate and found his taste to be flawless.

While that dream remains a future possibility, I've got some experience cooking, fortified by a few adult education classes at the Culinary Institute of America during the 1980s. In thanks to them, and as my first gift to the Internet community, here's my recipe for home-made spaghetti sauce. I learned it from my Mother, and have modified it with experience over the past forty years. It takes time.

Sharpen your knives.

Prepare the stock

Ingredients:

Marrow bones, 1 #
Peeled turnip in medium dice, 1/2 #
Peeled parsnip in medium dice, 1/2 #
Carrots in medium dice, 1/2 #
Large Spanish or Vidalia onion, coarsely chopped
Two leeks, rinsed and coarsely chopped (white and green parts)
Three celery ribs, coarsely chopped

Method:
Roast vegetables under broiler until lightly toasted, five to ten minutes.
Put marrow bones and roasted vegetables in a large stock pot and fill with cold water.
Slowly bring to a boil.
Simmer 3 hours, skimming regularly.
Strain through a fine sieve and reserve stock.
Yield: 2 qts

Note that this stock can be reduced considerably and yields a wonderful consomme.

Spaghetti Sauce:

Ingredients:

Two large cans whole tomatoes
Large can tomato paste (12 oz; I prefer Contadina)
Peeled turnip in small dice, 1/2 #
Peeled parsnip in small dice, 1/2 #
Peeled carrots in small dice, 1/2 #
Olive oil
Large Spanish or Vidalia onion, chopped medium
Three peeled garlic cloves, minced
Two leeks, washed and trimmed, chopped medium (whites only)
Two celery ribs, peeled, in small dice
Chopped meat (ground beef or pork), 1 #
Sweet Italian sausage, 1 #
Bouquet garni

Method:
Bring stock to a medium boil.
To prepare the tomatoes, pour off the liquid and reserve. Rinse the tomatoes under cool water, removing the skin, the ribs, the seeds, and the clear gelatinous material surrounding the seeds. These make the sauce bitter.
Add rinsed tomatoes and reserved juice.
Add root vegetables.
Add bouquet garni (peppercorns, bay leaf, parsley, oregano, basil).
In a frying pan, heat olive oil and clarify onions, garlic, leeks, and celery.
Add to the boiling stock.
Brown meats and add to stock.
Let simmer two to three hours.
Salt and pepper to taste.

Note that this sauce improves after being refrigerated overnight. It can be frozen, as well. I generally make four times this amount and freeze the extra for later dinners.

For dinner, serve with a leafy green salad, garlic bread, grated Romano or Parmesan cheese, and a red wine (good for the heart) – my preference is the Gaja Barbaresco, but even a simple Chianti will work well.