Let’s get it started
Applications have become the technological underpinnings which enable employees to do their jobs faster, more accurately, and with greater ease. Applications have become so ubiquitous within organizations that most employees don’t even consider the tools with which they are working “applications” at all, rather, that spreadsheet, that portal, that project tracking system is just a part of getting things done.
Even on the consumer side, calling mobile apps simply “apps” takes users’ minds off the fact that the technology in use is a developer-built piece of software rather than just a handy button connecting them to photos, online shopping, vacation planning, or the like. Behind every app, however, is source code written by a human being (or a team of human beings), and that code is as susceptible to flaws as the humans who wrote it.
Lose control of body and soul
The number one problem with code is human intervention. In software and application development, the humans writing code are incentivized by things other than information security, which means those other things are going to take precedence over security. Applications are, however, the collectors and holders of tremendous amounts of data—much of it sensitive—which means that failure to write secure code puts data at great risk of compromise. A recent Ponemon Institute report indicates that the average organization uses 1,175 applications, and 33% of those applications are considered “mission critical.” The report, based on a survey of 605 IT and security practitioners in the U.S., doesn’t define “mission critical,” but let’s assume it means the organization could not function without those apps. This means that approximately 388 applications at each organization surveyed (organizations which are, theoretically, representative of the general U.S. population of private companies) must run properly, without any disruption, including data leakage, theft, or loss.
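The arithmetic behind that 388 figure is simple enough to check; a quick sketch using the two numbers the Ponemon report gives:

```python
# Figures taken from the Ponemon report cited above.
avg_apps = 1175              # average applications per organization
mission_critical_share = 0.33  # share considered "mission critical"

mission_critical = round(avg_apps * mission_critical_share)
print(mission_critical)  # 388
```

So "mission critical" software alone is a portfolio of nearly four hundred applications per company, each one a potential point of disruption or data loss.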
The study (underwritten by F5) defined respondent job titles only vaguely; respondents were primarily directors, managers, supervisors, and technicians practicing IT or IT security. Interestingly, application developers were not included in this study, even though its aim was to better understand application security risk. If applications are “mission critical,” would it not make sense to talk to the developers of those applications too, and understand developers’ knowledge and perception of application security? Yet, to date, I’ve seen nary a survey published for the security community that includes developers. Security appears to be trying to solve the application security problem without application developers! With that in mind, and an average of 388 “mission critical” applications per company on the table, 44%, 47%, and 34% of respondents say they have “no confidence” that developers practice secure application design, development, and testing, respectively.
Further, respondents, despite being the people responsible for ensuring developers understand the concepts of secure coding, feel that the #1 reason for vulnerable applications is that application developers (programmers) don’t understand secure coding practices.
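To make “secure coding practices” concrete rather than abstract, consider one of the most common failures respondents are likely worried about: building database queries by string concatenation instead of parameterization. A minimal sketch (the `users` table and the attacker string are hypothetical, for illustration only):

```python
import sqlite3

# Hypothetical in-memory database for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Insecure: concatenation lets the input rewrite the query itself.
insecure = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(insecure).fetchall())  # leaks rows it should not

# Secure: a parameterized query treats the input strictly as data.
secure = "SELECT role FROM users WHERE name = ?"
print(conn.execute(secure, (user_input,)).fetchall())  # no rows match
```

The fix costs nothing at development time; the gap is knowing, and being incentivized, to write the second form instead of the first.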
Don’t move too fast, just take it slow
Says James Jardine, founder and principal of Jardine Software, an application security services firm, “Most developers still struggle to use secure design principles because they have never had someone that appreciates their situation help them.” If this is correct, the “mission critical” apps running our businesses and the businesses with which we do business are being built by people who don’t fully understand how to make them secure. Further, if this survey is any indication, senior leaders are not getting actively involved in application security despite its criticality to the business (only 4% of survey respondents were senior executives). Security has the proverbial seat at the table, and so organizations are “constantly looking to the security team to secure applications when it is the application teams that hold the key to successfully creating secure applications,” says Jardine.
What we’re missing, and a big part of the problem with application security, is a true dedication to the problem. If businesses think that data security is a problem (and I’ve yet to see any data that contradicts the belief—or reality—that it is), but senior executives are not getting involved while the people in charge of the developers building “mission critical” applications are saying there’s a secure development problem, the real problem is a lack of leadership and a refusal to accept responsibility.
Don’t get ahead, just jump into it
Michael Santarcangelo, security leadership expert, says that security executives worked their way into a seat at the table due to the high profile and frequency of breaches, but many security teams are still in desperate need of real leaders. The industry—and security isn’t alone—broadly lacks true leadership, which means organizations can’t focus on imminent security threats and get in front of them before they cause impact. “We’re so busy chasing the next breach that we're not focusing on the right things. It's not about time to act. It's about lack of communication, misalignment, and the confluence of things that happen as a result.” In other words, if leadership starts to help the organization—all parts of the organization—drive towards the end game, which includes running an efficient, effective, profitable business free of significant disruption and growth barriers, organizations will also start to see security incidents decrease as a result. Security doesn’t operate in a silo, and it’s time for leaders to start helping their teams work towards a more collaborative, integrated environment.
If organizations want to improve the security of applications they build, buy, and use, it’s necessary to create a culture of secure development. Organizations run on applications, and security can’t be entirely responsible for that which it doesn’t build. Infosec is (unfortunately still) often tacked on to architectures, processes, and systems, and just like these other areas, application security needs to be baked in from the start.
You all hear about it, the Peas’ll do it
The solution starts with developers, but first developers must be convinced, incentivized, and trained to build applications with secure coding practices. The nudge—or shove, as most security practitioners would prefer—cannot come from security teams. DevOps, or even DevSecOps, has been around for a while, and yet its most ardent proponents are security practitioners and analysts. Talk to a developer and get her or his point of view, and you will most likely find little agreement. The disconnect can be attributed to many things, but a lack of leadership and of clearly set organizational priorities is definitely chief among them. Organizations with a clear sense of goals will (typically) march in the direction of those goals together. Today, it would be difficult to find a CEO or board of directors who would say it’s OK to put “mission critical” applications at risk of data breach, theft, or loss, resulting in business disruption and/or loss or disclosure of business-sensitive information.
Jardine laments, “Application development is hard. Security is hard. As a result, a lot of organizations struggle to understand and implement application security successfully.” While leadership doesn’t have to understand how to securely code, security and risk teams need to work with leadership to promote the idea that secure applications are better business. If the Ponemon study is any indication, everyone else seems to know it. We now need to translate it up the chain.