Let’s start by looking at DevOps itself and what that means in practice.
The concept of DevOps was coined by Andrew Clay Shafer and Patrick Debois in 2008, and the first DevOpsDays event in 2009 brought it wider attention.
In the term ‘DevOps,’ ‘dev’ refers to the software development team that implements and tests the code for new software releases. ‘Ops,’ on the other hand, refers mainly to the IT operations, or more specifically, the team that installs and maintains the software product.
This can mean on dedicated servers, in a cloud environment, or on-premises. IT operations also takes care of possible customer-specific variants and security patches. In practice, security and QA (testing as a development function) are often grouped on the ‘Ops’ side as well.
According to DevOps pioneer Gene Kim, the main principles that guide DevOps — the Three Ways — describe the values that all DevOps practices and processes are built on. The Three Ways are flow/systems thinking, amplifying feedback loops, and a culture of continual experimentation and learning. Kim’s books The Phoenix Project and The DevOps Handbook also offer valuable overviews of DevOps.
At Gartner, DevOps is described as follows: “DevOps emphasizes people (and culture), and seeks to improve collaboration between operations and development teams.”
This means that DevOps is more like a culture than simply a particular technology or tool, much like CI/CD (continuous integration and continuous delivery).
DevOps aims to combine the actual development of software with the operational part and is a way to decrease silos across teams.
It’s a holistic approach that aims to get rid of sub-optimization on ‘both sides’ — the development process as well as operations. The goal is also to accelerate the flow of value throughout the combined team. A team with better collaborative processes can create more value for customers.
DevOps automation means using automatic tools to perform DevOps tasks that generally require manual work.
Automation is done to deliver value faster, shorten feedback loops, and free people from repetitive manual work.
DevOps, in general, aims to optimize the ratio of time to value: The core idea is to have development, operations, QA, and security working together at an agile pace, delivering value through software.
But what is DevOps automated testing? It's all about delivering value faster. That ‘value’ is realized only when you manage to deliver software whose quality is as good as or better than what was already in production before the new release candidate.
Security is a significant part of quality, so complying with specific security requirements is always a must. No matter how fast your team can release new features, the software can and should only go to production when it meets the set quality criteria.
Now that we’ve covered the basics, let’s move on.
So, why should you be concerned with DevOps automation to begin with?
Well, in addition to creating more value faster, automation enables continuous value delivery in DevOps. It also helps you avoid silos, facilitates feedback loops, enables faster, continuous deployment, improves QA, and strengthens security.
Feedback loops are a crucial part of DevOps, as they help you see problems as they occur. Picture this: a programmer makes a mistake that ends up causing a defect. In old-school projects, they may have continued building more and more code on top of the buggy code without knowing there is an issue underneath.
Waterfall projects (i.e., non-agile) used to have a long implementation phase followed by an integration and testing phase. When the code is integrated only after a long time, it is hard to isolate bugs and fix them as months may have passed since the bug was created. DevOps embraces fast feedback.
Continuous integration is one practice driving that: software is built incrementally (small batch size), and typically developers integrate their new functionalities daily. The developer gets feedback in minutes, not after several weeks.
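The small-batch feedback loop described above can be sketched as a simple quality gate of the kind a CI server such as Jenkins runs on every integration. This is a minimal illustration, not the API of any specific CI tool; the check names and the gate function are made up for the example.

```python
# Hedged sketch of a CI quality gate: run every check on each integration
# and report failures immediately, so the developer gets feedback in
# minutes rather than weeks. Check names here are illustrative.

def run_checks(checks):
    """Run each named check; return (passed, failures) so the pipeline
    can block the merge and tell the developer exactly what broke."""
    failures = [name for name, check in checks.items() if not check()]
    return (len(failures) == 0, failures)

# Each check is a callable returning True on success. In a real pipeline
# these would wrap things like a test runner or linter exit code.
checks = {
    "unit_tests": lambda: True,   # e.g. the test suite passed
    "lint": lambda: True,         # e.g. the linter found no issues
    "build": lambda: True,        # e.g. the package built cleanly
}

passed, failures = run_checks(checks)
print("OK" if passed else f"FAILED: {failures}")
```

Because the batch is small and the gate runs on every integration, a failing check points at code written hours ago, not months ago.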
DevOps automation will also free up resources for value-creating work. Instead of having developers do things that can be automated, just automate them!
When you dive straight into the DevOps process, test automation is not a nice add-on anymore. It becomes a necessity. Think about it: Every time you have a developer patching their scripts and updating their tooling, the whole team loses time.
When automation frameworks are integrated into the company DNA and culture, things are no longer dependent on individual people, which simplifies things whenever there are changes in teams, for instance. When you enforce best practices through automation, your entire company will be better.
Once we’ve established that DevOps automation is fantastic, we need to discuss the practical side a little bit. Simply put: What kind of tools should you use, and why?
There are a couple of things that are good to consider before you go all-in with test automation.
What applications will you be testing?
Not only now, but also in the future. What is under development? Is it behind firewalls?
What is your business software architecture like?
What is it like today, and what are your plans for the future?
Who are your testers?
Are your testers distributed over the world, or are they in a centralized location?
Ever since the COVID pandemic, much of the IT world has worked remotely, so most teams are more or less distributed.
What are your in-house capabilities?
Do you want to build your own testing tool, or do you want to outsource your testing? Or do you want to buy a product and use that instead?
Do you want to train your full-stack engineers to use the testing tools? How can you ensure that you can continually test everything?
How long will it take for your team to learn the tools you pick?
Months, weeks, days?
Can this solution be integrated into your other tools?
Also, make sure that whatever you choose has a functioning ecosystem around it, so you can continue using the tool in the future without issue.
Even after these questions, the fact remains that there are many tools out there. It’s sometimes tricky to pinpoint which DevOps tool is the right one for your company.
Eficode ROOT is generally a good source for tools related to DevOps automation, but here are a couple of our favorites:
Azure DevOps, which is a set of DevOps services. You can either decide to use all the DevOps services they offer or pick and choose which ones you need.
Bitbucket for version control. It’s a Git-based code hosting and collaboration tool built for teams. Your whole team can collaborate on code from concept to cloud, build quality code through automated testing, and deploy code through it.
Jenkins as an automation server with a rich plugin ecosystem. Jenkins is a hub of hundreds of plugins that support the building, deployment, and automation of any project you have on hand.
Jira for issue tracking and project management. At its core, Jira is a project management tool, but it’s built for software teams, so you can easily plan, track, and release software.
Terraform for building infrastructure-as-code. With it, you can have “heterogeneous infrastructure, frequently provisioned, short-lived, and automated provisioning on-demand.”
And, naturally, Qentinel Pace. Qentinel Pace is an all-in-one platform for automated software testing. It’s cloud-based, convenient, and quick to set up — and it provides a holistic view of your DevOps health along with analytics-based predictions.
In addition to just picking the right tool, an important thing to consider is: Should you go with open source or a commercial product?
The simple answer is that open source tools offer a lot of value in terms of ecosystem and community, as well as flexibility.
Qentinel Pace is an example of a tool that combines the best of both worlds, however. No need to worry about maintenance, but you can still leverage all the power of open source.
So, when should you automate your DevOps testing, and when should you stick with humans doing the heavy lifting?
Some tasks can just as well be performed by a computer, freeing your employees’ time for work that actually requires human judgment.
Here are some guidelines on when to automate testing and when not to. Keep in mind, though, that every project is unique, so it’s best to evaluate case by case.
When to automate testing?
When should you instead use humans over robots?
So, why do we want to talk about the pros of cloud-based test automation?
We love cloud-based test automation because the scale of testing keeps growing, and so test automation platforms must scale, too. This is made possible with cloud infrastructure.
Speed is also an important thing: everything needs to be done faster. Simply put, cloud-based solutions allow a lot of flexibility. In the cloud, updates are quickly implemented, and there can be tens of updates a week without the end-user ever noticing they’re taking place.
The opportunities for scaling are basically unlimited, and you can always build more and more test capacity.
With the limitless scalability of the cloud comes the practical benefit that infrastructure is automatically part of the deal. You can have an endless number of robots completing tasks simultaneously, and just add them as you please.
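The idea of adding robots as you please can be illustrated with a parallel test runner: scaling capacity is just a matter of raising the worker count, with the cloud supplying the machines behind it. The test cases and the runner below are invented for the sketch, not taken from any particular platform.

```python
# Hedged sketch: scaling test execution by adding workers ("robots").
# run_test is a stand-in for a real test against the system under test.
from concurrent.futures import ThreadPoolExecutor

def run_test(case):
    # Placeholder: a real implementation would drive the application
    # and return an actual pass/fail verdict.
    return (case, "pass")

cases = [f"case_{i}" for i in range(8)]

# Adding capacity is just raising max_workers; in a cloud platform the
# infrastructure behind each worker is provisioned for you.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_test, cases))

print(sum(1 for _, status in results if status == "pass"), "tests passed")
```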
When everything is in the cloud instead of physical servers or engine rooms, everything is super easy to implement without having to actually install anything on a computer. And all maintenance of the infrastructure is taken care of elsewhere.
Many people are concerned about cloud security, but in fact, security is often better in a cloud-based product environment than on-premises, for a simple reason: replicating the physical and data-security infrastructure that major data centers have is extremely expensive.
Cloud services also often come with advanced AI services that will help you with data analytics. A great benefit of the cloud is that it can be scaled both up and down: If you suddenly need a lot of capacity for analytics but won’t need it next week, it’s no problem, and you don’t need to bother with the infrastructure management.
In general, software is moving to the cloud increasingly fast, and many solutions are already linked to a lot of different cloud platforms (such as Salesforce, AWS, and SAP). To ensure that everything works correctly, cloud-native testing environments are needed — to interact with not just your app but with all the underlying platforms.
Finally, cloud services have a high availability level, and they are usually fault-tolerant. Qentinel Pace, for instance, runs in three different availability zones simultaneously. If a physical server were destroyed for one reason or another, a backup instance in another availability zone would be activated instantly.
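The failover behavior just described can be sketched in a few lines: requests are routed to the first healthy availability zone, so an outage in one zone is absorbed by the others. The zone names and the health flags are, of course, made up for the illustration.

```python
# Toy failover sketch, assuming three availability zones as described above.
def serve(request, zones):
    """Route the request to the first healthy zone, in preference order."""
    for zone in zones:
        if zone["healthy"]:
            return f"served by {zone['name']}"
    raise RuntimeError("no healthy zone available")

zones = [
    {"name": "az-1", "healthy": False},  # simulated outage in zone 1
    {"name": "az-2", "healthy": True},
    {"name": "az-3", "healthy": True},
]
print(serve("GET /", zones))  # the request falls over to az-2
```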
A good rule for everything is: Base everything you do on data and analytics.
DevOps accelerates the pace of value deliveries by allowing more frequent delivery, but you need to make sure that quality is never compromised.
In fact, one core value in DevOps is focusing on data-driven decisions to maintain high quality — even with all the pressure related to frequent production deployment.
To be able to make good decisions, you should keep your focus on relevant metrics. These include:
✔️ Release quality: Is the current release candidate ready for deployment, and if not, what should you fix?
✔️ Production quality: What do I need to improve to avoid service outages and maintain a good user experience?
✔️ Customer satisfaction: How can I improve the service or product in use, and customer satisfaction?
✔️ Velocity: How can I accelerate the speed of delivering value?
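As a concrete example of the first metric above, release quality can be reduced to a simple data-driven gate: compute the pass rate of the release candidate's test results and compare it against a threshold. The 95% threshold and the result format are assumptions for the sketch, not a recommendation.

```python
# Illustrative sketch: turning raw test results into a release-quality
# decision. The threshold and the "status" field are assumed, not from
# any specific tool.
def release_quality(results, threshold=0.95):
    """Return (pass_rate, ready): is this release candidate deployable?"""
    passed = sum(1 for r in results if r["status"] == "pass")
    rate = passed / len(results)
    return rate, rate >= threshold

# Example: 19 passing tests and 1 failure out of 20.
results = [{"status": "pass"}] * 19 + [{"status": "fail"}]
rate, ready = release_quality(results)
print(f"pass rate {rate:.0%}, ready: {ready}")
```

The same pattern extends to the other metrics: define a measurable quantity, collect it continuously, and act when it crosses a threshold.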
So, in short: Everything needs to be measured, but the metrics need to make sense. Luckily, we have the perfect solution: Quality Intelligence for DevOps.
Quality Intelligence is measured, actionable information about the quality of your solution or tool. It was created so we could understand both the value creation of digital services and the processes around them.
It helps you make more data-driven release decisions and find the right levers to turn to produce more value with software faster.
Frankly, we don’t know. But we do know that cool things are coming for sure.
Maybe AI and robot coaches will be a thing of the future. We already have agile coaches and DevOps coaches, so it would be a natural way forward.
Since analytics is already heavily relying on machine learning, maybe in the future, DevOps coaches will be able to make recommendations based on things learned through automated DevOps testing.
In the future, code will increasingly be shared. The code you write will no longer be entirely your own; even now, most of the code in applications comes from libraries circulating on the internet.
Because of this, regression testing your codebase grows more and more important as you control and know less and less of the code in it. That means more automated processes and testing will be needed.
Additionally, the code might not be generated by other humans at all: We might have an AI-powered DevOps team, or maybe some of your team members will be robots instead of other human beings.
An increasing number of frameworks will generate source code, which means the testing frameworks hooked to that code will break more and more often.
The trend of raising the level of abstraction in development will continue, and maybe someday we will see what comes of it.