DevOps and the Boundaries of Test Automation
Many of the restrictions DevOps places on test automation are really restrictions on test automation in general.
- Tests don’t prove that anything is true
- Tests aren’t finished
- Requirements are frequently incorrect or unclear
- Tests are costly
- Organizational politics may cause test results to be abused
The majority of testing issues in DevOps organizations stem from these limits, which are already well documented elsewhere. Consider instead the test restrictions that are unique to DevOps, or at a minimum more prevalent in DevOps:
- A concentration on automated tests at the expense of other types
- Especially rapid cycling
- Sensitivity to production flaws
- Complete reliance on rigorous testing
- More openness and less separation between functions
Let’s Examine the Sources of DevOps Test Restrictions and Their Solutions
A Fixation on Automation
DevOps culture is characterized by an absolutist attitude toward automation, such as the insistence that QA automate all of their test cases. The truth, though, may be a little more complicated.
Even the most aggressive DevOps organizations can benefit from moderating their attitude toward automation with the help of sophisticated testing practitioners. Those practitioners do this by acting as a source of knowledge about automation techniques, reminding the whole team of the importance of manual testing in DevOps, and demonstrating active management of individual tests, particularly their progression from manual execution to automation.
The last of these is the most important. When knowledgeable testers join a DevOps team, they shouldn’t discard every test that isn’t automated. Instead, all test assets should be categorized.
A few tests can be quickly automated, and they fit nicely into the continuous integration infrastructure the business uses. Some tests are also automated, but are too slow or expensive to run on standard workflow systems. Some tests cannot be automated, or simply haven’t been yet. All of these assets must be kept in good condition, and active review is important: risk profiles, technology, or even availability may change as the organization ages. A particular test may begin as a manual-only procedure, be revised into a fully automated version executed on each source commit, then “graduate” to automated operation at a slower pace independent of developer milestones. A deliberate, gradual approach has a lot to offer the testing industry.
This kind of categorization at the implementation level results in several test directories in my own work. Some of them are accessible directly by Jenkins because they are fully automated and suitable for execution upon check-in. Others require some kind of human interaction, or are run less frequently simply to lower licensing costs.
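The categorization and "graduation" described above can be sketched as a small catalog that records each test's current tier. This is a hypothetical illustration, not a real tool; the tier names and test names are invented:

```python
# Hypothetical sketch: track each test asset's current tier so its
# progression (manual -> automated per commit -> automated on a slower
# schedule) is explicit and reviewable.

TIERS = ("manual", "per-commit", "scheduled")

catalog = {}

def register(name, tier):
    assert tier in TIERS
    catalog[name] = tier

def promote(name):
    """Move a test one step along the progression described above."""
    i = TIERS.index(catalog[name])
    catalog[name] = TIERS[min(i + 1, len(TIERS) - 1)]

def suite(tier):
    """The tests a job at this tier (e.g. a Jenkins check-in job) runs."""
    return sorted(n for n, t in catalog.items() if t == tier)

register("gui_checkout_review", "manual")
register("api_smoke", "per-commit")

promote("gui_checkout_review")   # revised into a per-commit automated test
promote("gui_checkout_review")   # graduated to a slower, scheduled cadence
```

The point of the sketch is only that each tier maps to a distinct execution context, just as the separate test directories above do.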
DevOps may be seen as a change in risk-reward thresholds. As an example, an organization that doesn’t use DevOps relies on a set of partially automated GUI tests. When that same company embraces DevOps, it may determine that the GUI tests are too costly to automate immediately, but that it can incorporate enough hooks to provide functional tests that match each of the GUI tests’ outcomes instead. At that point, DevOps has automated every functional component of the original GUI testing, limiting each test’s remaining exposure to merely the correspondence between the GUI and the hooks. It’s conceivable to verify that correspondence outside of the DevOps framework.
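A minimal sketch of that trade-off: the assertion a partially automated GUI test would make, exercised instead through a programmatic hook into the application model. The `Cart` class and its prices are invented for illustration:

```python
# Hypothetical sketch: a functional test driven through a "hook" (the
# application model) rather than the GUI, making it cheap enough to
# automate and run on every commit.

class Cart:
    """Stand-in for the application logic that sits behind the GUI."""
    def __init__(self):
        self.prices = []

    def add(self, price):
        self.prices.append(price)

    def total(self):
        return round(sum(self.prices), 2)

def test_cart_total_via_hook():
    cart = Cart()          # drive the model directly; no GUI automation
    cart.add(19.99)
    cart.add(5.01)
    assert cart.total() == 25.00

test_cart_total_via_hook()
```

What this fast loop leaves unverified is exactly the residual exposure described above: that the GUI really does drive the same model the hook does.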
Chronological Skew
DevOps is all about speed: quick development, quick outcomes, quick failure, etc. One specific repercussion is that even when some tests are entirely automated, they nevertheless need special handling.
Let’s say, for instance, that a particular DevOps team commits a new source change on average once every hour, and that its continuous integration is set up to run 3,000 tests, each completing in 100 milliseconds. That means the full set of automated tests finishes in five minutes. That’s a tolerable situation: although a new error isn’t detected immediately, it’s discovered soon enough after check-in for a developer to react to and address the failure reports.
Now imagine that one new test has been completely automated and takes five hours to run. The validation of every commit shouldn’t include that test alongside all the others. While five minutes of latency during validation is acceptable, five hours significantly worsens the development experience. Keep this from happening in your organization.
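A back-of-the-envelope check of those figures shows just how far out of scale the single slow test is:

```python
# Arithmetic behind the figures above.
tests = 3_000
ms_per_test = 100                               # 100 ms each

fast_suite_seconds = tests * ms_per_test / 1000
print(fast_suite_seconds / 60)                  # 5.0 minutes for the suite

slow_test_seconds = 5 * 3600                    # one five-hour test
print(slow_test_seconds / fast_suite_seconds)   # 60.0: sixty whole suites
```

One five-hour test costs as much wall-clock time as sixty complete runs of the existing 3,000-test suite.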
A simple solution is to launch two test suites with each commit. One is the typical “quick” suite, which delivers an initial result within the few minutes before human engineers start to lose interest in the current change. The second collection is “slow,” and may take hours or even days to finish. Even though failures that come back from the slow side are harder to analyze, it’s still preferable to learn about errors this way rather than from a customer.
As an alternative to running the slow collection with each commit, run it on a defined, recurring schedule. While this method makes it even harder to pinpoint the exact source change that caused a problem, it reduces the number of lengthy test runs per day, which may be more in line with the organization’s testing resources.
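One way to draw the quick/slow line is against an explicit latency budget. The following is a hypothetical sketch; the test names, durations, and the 300-second budget are invented for illustration:

```python
# Hypothetical sketch: split one test catalog into a "quick" per-commit
# suite and a "slow" scheduled suite, using a latency budget.

COMMIT_BUDGET_SECONDS = 300          # keep per-commit feedback under 5 min

durations = {
    "test_login":          0.1,
    "test_checkout":       0.1,
    "test_full_migration": 5 * 3600,  # the kind of five-hour test above
}

def split(durations, budget):
    """Greedily fill the quick suite, shortest tests first."""
    quick, slow, spent = [], [], 0.0
    for name, seconds in sorted(durations.items(), key=lambda kv: kv[1]):
        if spent + seconds <= budget:
            quick.append(name)
            spent += seconds
        else:
            slow.append(name)
    return quick, slow

quick, slow = split(durations, COMMIT_BUDGET_SECONDS)
```

The quick list runs on every commit; the slow list runs from the recurring schedule, whatever cadence the organization’s testing resources allow.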
Production Testing
Platform independence is the third feature of testing for DevOps teams. DevOps’ frustration with the “it works on my machine” fallacy is one of its benefits.
In healthy DevOps teams, tests must travel easily through all environments, from development desktops to integration hosts to staging and production. A testing expert can effectively lead the team here, because the development of portable tests is likely to present challenges for some team members.
This demand for portability takes an extreme form when testing is done in production. Testing in production is a significant enough topic to warrant separate discussion; the point here is that intelligent testers make explicit all requirements and expectations for the portability of tests.
DevOps depends on testing. Not all test experts, however, have had this epiphany. Some companies condition their testers for cyclical workloads, teaching them that nobody pays attention to them until just before a release, when the work must be completed in a hurry. DevOps, by contrast, requires testing support at every stage of its cycles.
Extra Sharing
Whereas more conventional teams are structured around ownership, DevOps emphasizes transparency, sharing, and cooperation. Tests are no longer the exclusive property of testers in a typical DevOps environment; anyone can alter the source of a test. Assumptions therefore likely need to be spelled out more thoroughly than was necessary when testers were the only readers.
DevOps will undoubtedly involve adjustments to testers’ daily lives, some of which may be difficult at first. But leadership still pays dividends, and DevOps teams will always gain from testing expertise.
About Enteros
Enteros offers a patented database performance management SaaS platform. It proactively identifies root causes of complex business-impacting database scalability and performance issues across a growing number of clouds, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.