Do you remember the times when IT companies used to release their software or operating systems only once in a while?
 
For example, there were gaps of several years between the releases of Windows 95, 98, NT, XP, Vista, Windows 7, 8, and so on.
 
Similar practices were the norm of the day for all sorts of software and applications from most vendors...
 
It was so because it provided enough time for the code to go through quality assurance and security testing performed by separate, specialized teams, whether internal or externally contracted. In those days, there were separate teams for 'developing' applications and for 'testing' the security of those applications, hence a longer software development lifecycle (SDLC).
 
But in the last decade, there has been a considerable rise of public clouds, containers, and the microservices model. Their evolution offered great opportunities for breaking large applications down into smaller parts that can run independently.
 
This ability to break applications apart also had a direct impact on the way software is developed, leading to rolling releases and agile development practices where new features and code are continuously pushed into production at a rapid pace.
 
There is a growing use of 'automation' in these processes with the help of new technologies and tools. This is allowing software/application development companies to innovate faster and stay ahead of competitors.
 
By the time a competitor is able to reengineer and replicate one of your new features, your development teams would have brought three or four new features to market. Your competitor would be engaged forever in a chasing game...
 
This is what is now called DevOps culture, as seen in modern companies. Today, most developers can provision and scale the infrastructure they need themselves, without waiting for a separate infrastructure team to do it for them. All major cloud providers now offer APIs and configuration tools that allow infrastructure to be treated as code using deployment templates. There is no need to go into those details here...
 

👉 What is DevOps?

To simplify, let us look at what goes on inside the software development lifecycle.
 
The development team plans something, e.g., a new feature. Then they code it. Then the code is built (read: assembled). Then they test the functionality of the application (feature). Next, the new feature is released.
 
The moment it is released, it goes into the hands of the operations team. They deploy it. They operate it. They monitor it. And they observe the issues, feeding what they learn back to the development team.
 
When developers were only focused on getting code out the door, and operations teams were solely responsible for monitoring and management, there was a blind spot in implementation: a no-man's land of "Who's responsible?" In this scenario, code was kicked back to developers for revision if it didn't meet operational requirements, often without clear direction about what needed to happen or why.
 
That was the old, traditional model of software development...
 
The DevOps model was proposed by Andrew Clay Shafer and Patrick Debois in 2008. They wanted these two separate teams and their work to be merged into a single workflow. You can see it in the infinite loop shown in the graphic.
 
Still, DevOps is not about technologies; it is more about a tactical way of doing things, and software culture has a lot to do with it.
 
The DevOps approach demands that companies consolidate their development and operations into a single team, and organize software delivery by 'feature' rather than by job function. This approach encourages individuals to develop cross-functional skills, folding testing and even application security practices into a seamless delivery lifecycle.
 
👉 Practically, it is about focusing on two pipelines simultaneously, i.e., the Continuous Integration (CI) pipeline and the Continuous Delivery (CD) pipeline.
 

WHY?

 

1. Continuous Integration (CI) streamlines the development work!

 
When the practice of continuous integration (CI) is adopted, you are streamlining your internal process of creating software. You break the application down into different features or modules and assign them to separate teams. Each team is responsible only for the features or modules assigned to it. Hence, no conflicts of coding or merging. The teams commit their updates to a shared code repository as they complete them, often many times a day. When they check in their code, the build management system automatically creates a build and tests it. If the test fails, the system notifies the team concerned so they can fix the code. This practice helps software teams quickly detect and resolve any bugs that come up during development. You would NOT face a situation where two or more developers inadvertently make conflicting changes that break the build when their lines are merged back into the master branch.
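To make this concrete, here is a minimal sketch (in Python) of what such an automated build-and-test step could look like. It assumes a simple Python project that builds with the standard 'build' module and tests with pytest; the notification hook is just a placeholder and is not tied to any particular CI product.

```python
# ci_build.py -- illustrative sketch of an automated CI build-and-test step.
# Assumptions: the 'build' package and pytest are installed; notify_team() is a placeholder.
import subprocess
import sys


def run(cmd):
    """Run a command and report whether it succeeded."""
    print("Running:", " ".join(cmd))
    return subprocess.run(cmd).returncode == 0


def notify_team(message):
    # Placeholder: a real pipeline would post to chat, email, or a build dashboard.
    print("[NOTIFY]", message)


def main():
    # 1. Build the package from the freshly committed code.
    if not run([sys.executable, "-m", "build"]):
        notify_team("Build failed -- please fix the commit that broke it.")
        return 1
    # 2. Run the automated test suite against the new code.
    if not run([sys.executable, "-m", "pytest", "-q"]):
        notify_team("Tests failed -- the responsible team should fix the code.")
        return 1
    print("Build and tests passed; the changes are safe to merge.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice, a CI server (Jenkins, GitLab CI, GitHub Actions, and so on) triggers this kind of script automatically on every commit to the shared repository.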
 

2. Continuous Delivery (CD) ensures code is always ready to deploy!

 
When the practice of Continuous Delivery (CD) is used, your DevOps teams develop and deliver COMPLETE portions of software to a repository, e.g., GitHub or any container registry. All of this is done in controlled cycles with the help of automation tools. The advantage gained is that since COMPLETE portions (software, features) are released, they are always in a deployable state and updates can go LIVE at a moment's notice.
 
Every time a particular build PASSES testing, it is placed into a deployment pipeline and can even be deployed automatically, directly to actual production environments, on a daily or even hourly basis. You choose the cadence.
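As a rough sketch of that idea, the step below (again in Python, with a hypothetical registry address and image name) packages a build that has passed testing as a container image and pushes it to a registry, so it is always ready to go live:

```python
# cd_publish.py -- illustrative continuous-delivery step, not a real pipeline product.
# Assumptions: Docker is installed; "registry.example.com/myapp" is a made-up placeholder.
import subprocess
import sys

IMAGE = "registry.example.com/myapp"  # hypothetical registry and image name


def sh(cmd):
    """Run a command and abort the pipeline if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def publish(version):
    tag = IMAGE + ":" + version
    sh(["docker", "build", "-t", tag, "."])  # build the deployable artifact
    sh(["docker", "push", tag])              # push it to the shared registry
    print(tag + " is published and ready to go live at a moment's notice.")


if __name__ == "__main__":
    # The version would normally come from the build system (e.g., a git tag or build number).
    publish(sys.argv[1] if len(sys.argv) > 1 else "dev")
```

Whether the published build is then deployed automatically or on a schedule is the cadence decision mentioned above.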
 
In short, your company would have multiple features or upgrades moving through these two pipelines all the time. I hope by now you have a very good understanding of what DevOps is.
-
 

👉👉 What is DevSecOps?

 
Indeed, modern DevOps culture brought a lot of innovation to software development, BUT security has often not been able to keep up with the new SPEED at which code is produced and released.
 
DevSecOps is an attempt to correct that!
 
They call it a 'shift to the left,' but you may not understand what that means. Right?
 
Basically, it means that you bring security teams into the development process earlier than you used to. You establish close collaboration between these two teams very early in the SDLC, so as to identify and minimize potential vulnerabilities sooner rather than later.
 
In short, DevSecOps involves incorporating security as a natural part of the application development process itself.
 
The goals of DevSecOps (and DevOps also) are to release better software faster, and to detect and respond to software flaws in 'production' faster and with more efficiency.
 

A word of Caution:

---------------------------
Do NOT think that DevSecOps is about connecting two things: Development + Security
 
In fact, DevSecOps is a tactical integration of three disciplines: Development + Security + Operations
 
Hence, the name 'DevSecOps.'
 
It is an all-encompassing idea, aimed at fully integrating SECURITY TESTING into:
 
[A] the continuous integration (CI) pipeline
[B] the continuous delivery (CD) pipeline
 
It is about bringing in all the tools and techniques needed to design and build software that resists attack, and to detect and respond to defects (or actual intrusions) as quickly as possible.
 
DevSecOps does one more important thing: it builds up the (security) knowledge and skills needed within the development team itself, so that the results of testing can be shared immediately and the fixing of issues can be done internally and simultaneously. Nothing is left to operate in silos.
 
REMEMBER
--------------
 
In summary, DevSecOps enables security testing to occur seamlessly and automatically in the same general timeframe in which other development and testing are happening.
 
For example, developers can run security tests in the 'development phase' in near real time, avoiding the time wasted on context switching. They can also run security tests in the 'production phase' in near real time, so they can immediately discover all instances of a vulnerability running in production soon after that vulnerability is announced.
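As an illustration only, such a security gate wired into the same pipeline can be as simple as the sketch below. It shells out to a dependency scanner (pip-audit is used here as an example; any scanner with a command-line interface would do) and fails the build when known vulnerabilities are reported:

```python
# security_gate.py -- sketch of a security check running alongside the normal CI tests.
# Assumption: a dependency scanner with a CLI (pip-audit in this example) is installed.
import subprocess
import sys


def dependency_scan(requirements="requirements.txt"):
    """Return True if the scanner reports no known vulnerabilities in the pinned dependencies."""
    result = subprocess.run(["pip-audit", "-r", requirements])
    return result.returncode == 0


if __name__ == "__main__":
    if not dependency_scan():
        print("Security gate failed: fix or upgrade the vulnerable dependencies before release.")
        sys.exit(1)
    print("Security gate passed: no known vulnerable dependencies found.")
```

Because the check runs in the same pipeline as the functional tests, developers see security findings in the same feedback loop, which is exactly the 'shift to the left' described above.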
 
-
 

👉 Where does DevSecOps stand today?

 
Three key things make a real DevSecOps environment:
 
[1] Security testing is done by the development team.
[2] Issues found during that testing are managed by the development team.
[3] Fixing those issues stays within the development team.
 
However, the above-mentioned vision is currently a work in progress. The first two aspects have largely shifted to development teams, but the third, FIXING, is still not with them, because companies still need a separate security team to fix issues. They will achieve full, real DevSecOps when they no longer need a separate security team at all.
 
There are some key issues too. For example:
 
  • You need the right people, who understand (or can learn) security best practices and know how to operate your new security tool set.
  • In terms of culture, your teams need to truly adopt the mindset that they are responsible for the security of the software they build and deploy, just as much as they are responsible for feature, function, and usability.
  • Finding the right security tooling and integrating it into your DevOps workflow is another big challenge. However, the more automated your DevSecOps tooling is, and the more integrated it is with your CI/CD pipeline, the less training and culture-shifting you need to do.
  • Modern software development relies heavily on open-source tools. Unfortunately, accurately detecting vulnerabilities in open-source software is not something traditional security tools are designed to do.
  • Similarly, modern cloud-native applications run in containers that may spin up and down very quickly, and many security tools are not capable of testing the risks of applications running in containers.
 
-
 
👉👉👉 In this post, I have attempted to bring a very complex topic to you. Only you can tell me whether I succeeded in helping you understand it or not.
 
Kindly write 💚 your comments 💚 on the posts or topics, because when you do that, you help me greatly in ✍️ designing new, quality articles/posts on cybersecurity.
 
You can also share with all of us if the information shared here helps you in some manner.
 
Life is short, so make the most of it!
Also take care of yourself and your beloved ones…
 
With thanks,
Meena R.
_____

This article was written and published by Meena R, Senior Manager - IT, at Luminis Consulting Services Pvt. Ltd, India.

Over the past 16 years, Meena has built a following of IT professionals, particularly in Cybersecurity, Cisco Technologies, and Networking...

She is so obsessed with the cybersecurity domain that she goes out of her way to share hugely valuable posts and writings about cybersecurity on her website and social media platforms.

34,000+ professionals follow her on Facebook and are mesmerized by the quality of the content she posts there.

If you haven't yet been touched by her enthusiastic work of sharing quality info about Cybersecurity, then you can follow her on Facebook:

Click Here to follow her: Cybersecurity PRISM