“The agile development philosophy says to release early and
release often, which is counterintuitive to the build-security-upfront kind of philosophy,” says WhiteHat’s Grossman.
According to Microsoft’s Howard, the only way an organization can get its programmers to write securely along
the way is to prioritize that process starting at the top of
the enterprise and then drive down those principles over an
extended period. He speaks from experience. Howard was
instrumental in developing Microsoft’s internal Security
Development Lifecycle and instituting it companywide.
Much of the SDL focuses on three main principles: never
trusting input, fuzz testing to validate that input really isn't
being trusted, and threat modeling.
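The first two principles reinforce each other: treat every input as untrusted and validate it strictly, then fuzz the validator to confirm it rejects hostile data without crashing. A minimal sketch in Python (the function, length limit and character allowlist are illustrative assumptions, not details of Microsoft's SDL):

```python
import random
import string

def parse_username(raw: bytes) -> str:
    """Treat input as untrusted: decode defensively, then allowlist characters."""
    text = raw.decode("utf-8", errors="strict")  # reject malformed encodings
    if not (1 <= len(text) <= 32):
        raise ValueError("bad length")
    if not all(c in string.ascii_letters + string.digits + "_" for c in text):
        raise ValueError("bad character")
    return text

def fuzz(iterations: int = 10_000) -> None:
    """Throw random bytes at the parser; it must accept or reject, never crash."""
    rng = random.Random(0)
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_username(blob)
        except ValueError:
            pass  # rejection is the expected outcome for hostile input

fuzz()
```

The fuzz loop is deliberately naive; real fuzzers mutate valid inputs and track code coverage, but even this form catches unhandled exceptions in a validator.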
The years-long project of instituting these principles took
a lot of developer hand-holding early on, he says, but that
eventually changed as programmers put the practices into
their daily routines.
“All of a sudden, people have grown up,” Howard remarks.
“They recognize that security is part of the job—something
you’ve got to do. There’s no longer the permanent requirement from our end to keep going back to the development
group to make sure they’re doing the right thing.”
Microsoft believes so strongly in the power of SDL that
Howard’s boss, Steve Lipner, senior director of security
engineering strategy, and the rest of the SDL team recently
announced that Microsoft would be making its internal
threat-modeling tool available to the industry.
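Threat modeling in the SDL is organized around the STRIDE categories: spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege. A rough sketch of the underlying idea in Python; the components and mitigations are invented for illustration and do not reflect Microsoft's actual tool:

```python
# The six standard STRIDE categories used in SDL-style threat modeling.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Hypothetical threat table: (component, STRIDE letter, planned mitigation).
threats = [
    ("login form", "S", "multi-factor authentication"),
    ("session cookie", "T", "signed, HttpOnly cookies"),
    ("audit log", "R", "append-only logging"),
    ("user database", "I", "encryption at rest"),
    ("public API", "D", "rate limiting"),
    ("admin console", "E", "role checks on every request"),
]

def unmitigated(entries):
    """Return threat entries whose mitigation column is still empty."""
    return [t for t in entries if not t[2]]

# A review gate could refuse to ship while unmitigated(threats) is non-empty.
```

The value of the exercise is less the data structure than the discipline: every component gets walked through every category before release.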
“TOO OFTEN, SECURITY IS BOLTED ON AT THE END
of the software lifecycle as a response to a threat or
after an exposure,” warns Howard Schmidt, president of
the Information Security Forum and a board member of
(ISC)2, a not-for-profit global organization that educates and certifies information security professionals. "New applications
that lack basic security controls are being developed
every day, and thousands of existing vulnerabilities are

(ISC)2 is determined to do something about this situation. It recently announced a new certification, the
Certified Secure Software Lifecycle Professional (CSSLP),
to “validate secure software development practices and
expertise.” The goal is to establish best practices and validate a professional’s competency in addressing security
issues throughout the software lifecycle.
The CSSLP is programming-language-neutral and applicable
to anyone involved in the software lifecycle, such as
analysts, developers, software engineers and architects,
project managers, software quality assurance testers and
programmers. Areas covered by the exam include lifecycle vulnerabilities, risk, information security fundamentals and compliance.
“The CSSLP ensures that people—our first line of
defense in this war—have the tools and knowledge to
implement and enforce security throughout the software
lifecycle,” says W. Hord Tipton, executive director for (ISC)2.
Though sometimes difficult to quantify, the fruits of secure
coding labors can make a meaningful impact on the bottom
line. “I can’t give you hard numbers, but we have definitely
noticed that testing is not as expensive as it was before,”
Howard says, explaining that testing times have diminished
because programmers are addressing problems earlier in the
cycle and fixing them along the way.
And secure coding does more than cut down on testing
time. It also makes for a higher quality product, according
to Fortify Software’s Chess. “A lot of times, catching security
bugs early makes everything else you do more predictable,
and that predictability can be a big advantage,” he says. “It
also means that when you announce a ship date, you’re more
likely to meet it because you’re more in control.”
Most importantly, though, organizations are mitigating
potentially millions in losses by heading off security breaches.
Microsoft’s Howard says he helped a colleague at another company justify an extra $200,000 in resources for
secure coding by bringing in the risk management department and showing them the value of the information the
investment was intended to protect.
The expenditures that executives initially balked at
seemed like chump change once the risk managers explained
that the investment would mitigate the risk to $30 million in assets.
By the time the presentation was over, Howard recalls, “They
said, ‘Where do we sign?’”
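The arithmetic behind that pitch is straightforward expected-loss reasoning. The sketch below reuses the $200,000 and $30 million figures from the anecdote; the per-year breach probabilities are invented purely to illustrate the break-even point:

```python
asset_value = 30_000_000   # value of the information at risk (from the anecdote)
control_cost = 200_000     # proposed secure-coding investment (from the anecdote)

# Break-even: the control pays for itself if it cuts the expected annual loss
# by at least its own cost. Assuming total loss on breach, that means reducing
# the annual breach probability by control_cost / asset_value.
break_even_reduction = control_cost / asset_value
print(f"{break_even_reduction:.2%}")  # 0.67%

# With hypothetical breach likelihoods per year:
p_without, p_with = 0.05, 0.01
savings = (p_without - p_with) * asset_value  # expected loss avoided
net_benefit = savings - control_cost
```

Framed this way, even a tiny reduction in breach probability justifies the spend, which is why involving the risk management department settled the argument.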
Unfortunately, it isn’t just a matter of spending money and
rolling out a nifty new tool. It takes a thoughtful overhaul
of procedures and practices to better design and verify code
through the development lifecycle.
“Clearly, you need some sort of a governance process on
top of it all,” Fortify’s Chess says, “to make sure that the
right people are talking to each other and that they are
coordinating appropriately, because this cuts across the
entire organization.”
However, implementing a governance plan can take time.
“We don’t see anybody changing overnight,” says Greg
Hanchin, principal at DirecSec, a value-added reseller that
specializes in security. “You can’t just go in and establish a
secure coding process around five or 10 years of code.
“Here’s what you can do: Under the hamster wheel of
people, process and technologies, you can attempt to bring
technology in and get your people and processes around that
mission of making software better incrementally. It probably
takes a couple of years in the development cycle to
actually make a meaningful change.”
In the meantime, there is at least one shortcut organizations can take to reduce their exposure.
“I recommend starting with some modern development
frameworks,” WhiteHat’s Grossman says. “A lot of the time, the
security is already baked into new frameworks like .NET and
J2EE [Java 2 Platform, Enterprise Edition]. If you use them
properly, you can develop code really, really quickly—code
that also happens to be secure.”
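Grossman's point about security being baked into modern frameworks is easiest to see with parameterized queries, which essentially every current framework and database API provides. A small illustration using Python's standard sqlite3 module rather than .NET or J2EE; the table and data are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

hostile = "alice' OR '1'='1"  # classic SQL-injection payload

# Unsafe: string concatenation lets the payload rewrite the query.
unsafe_rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + hostile + "'"
).fetchall()

# Safe: the driver binds the value as data, so the payload matches nothing.
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (hostile,)
).fetchall()

print(unsafe_rows)  # [('admin',)] -- injection succeeded
print(safe_rows)    # []           -- payload treated as a literal string
```

Using the framework "properly," in Grossman's phrase, means reaching for the second form by default; the protection is already there, but only if developers don't bypass it with string concatenation.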