Assessing and Assuring Quality in Digital Products
Interviewee: Jeremiah Jacquet
Jeremiah Jacquet has over a decade of experience in quality assurance, working with digital products, websites, and mobile apps. He has served as a director, QA engineer, QA automation engineer, and technologist at such notable companies as Wag, Ring, Whisper, LinkedIn, Lynda.com, CitySearch, Napster, and Myspace. Jeremiah is also the co-founder of and a blockchain developer at Bliss Information Technology Consulting, an IT solution provider focusing on Fortune 500 consulting in the biotech, finance, and entertainment industries. I chose Jeremiah for this quality-focused interview given his long career specializing in quality in the high-tech field. Developing digital products relies heavily on the assurance of quality at every step. With so many lines of code, numerous stakeholders, and the use of these products in varied digital environments, precise quality control is necessary to ensure client and user satisfaction and, ultimately, the overall success of the product.
In an interview with Jeremiah Jacquet, the Director of Automation at Wag, about controlling and measuring quality in the development of software products, we find that quality is measured and assessed in various ways in different industries. However, the principles behind assessing and assuring quality are largely universal. Lean, a group of techniques focused on optimizing quality processes, is a methodology that aims to remove anything that does not add value for the customer (Foster 2013). Six Sigma focuses on process improvement (Foster 2013), and both share principles that can likewise be found in methodologies utilized in the technology sector, such as Agile. Quality is a vital part of any industry, be it the manufacturing of cars or the development of software. Assuring and measuring quality are paramount to the overall success of the development process and the satisfaction of the end user.
Quality in software development has been a challenge since the early days of computing. The virtual environments software can reside in, such as various mobile interfaces, different operating systems, numerous viewports, and multiple languages, among other variables, can make ensuring seamless quality across platforms a challenge. These circumstances can be compared to manufacturing a car that must function as intended on highways, off-road, in the snow, in the desert, at 200 MPH, in traffic, in Siberia, and in India, while seating 20 and making everyone happy.
Jeremiah’s adventures in quality assurance began with testing mobile games. The need for a mobile game, which is software like any other, to perform uniformly across various platforms is paramount to its success. As Jeremiah points out, ‘quality is defined by user experience. A consistent user journey across multiple platforms, with correct graphics, dimensions, and behaviors.’ A game, or any other software, must look, feel, and perform as the user expects no matter where and how it’s accessed. The issues of physical environment, task environment, and social environment take on a whole new dimension when it comes to software development (Foster 2013). These environments in the cosmos of software are fluid and can change day to day, and developers who wish to survive have no choice but to adapt.
Jeremiah takes a ‘pragmatic approach’ to quality assurance, from ‘following specifications of software development, removing bugs, preventing bad releases, to working with developers, project managers, and product managers.’ This process assists in development, prevents defective releases, and at times involves working with product teams to add new features. Jeremiah’s thorough process echoes principles of excellence programs, such as the Baldrige Performance Excellence Program, that are not typically associated with software development. These principles include strategic planning, customer focus, workforce focus, operations focus, results, and measurement, analysis, and knowledge management (Foster 2013). This crossover shows that no matter the industry, the ideas that make comprehensive processes, good products, excellent services, and ultimately happy customers are somewhat universal. In my own experience in the film industry, web development, and the wholesale of consumer goods, undoubtedly diverse sectors, the fundamental ideas that lead to great results, great products, and happy customers are the same as those mentioned by Jeremiah and in programs such as Baldrige.
Let us break down these processes and how they are applied a little further. How do technology companies deal with the complexities of their ever-changing environment? How do their approaches resemble or differ from those of companies in more traditional spaces, such as manufacturing? Jeremiah now sits at what can be considered the top of the QA ladder in software development. As a Director of Automation, Jeremiah oversees mid-level and junior members of a QA team. As he highlights, senior members of QA teams in software development are involved in automation; mid-level members communicate between teams, wearing many hats but not directly affecting code; and junior members typically conduct acceptance testing. What does this all mean? Automation can be confusing for those outside software development. Jeremiah points out that ‘this process entails debugging, adding breakpoints, tracing, and monitoring output.’ Automation can be described as special software, other than the software being tested, that controls the execution of tests and compares actual outcomes against expected outcomes ("Test automation" 2018). A few other terms to define for clarity are breakpoints and defect tracing. “In software development, a breakpoint is an intentional stopping or pausing place in a program, put in place for debugging purposes. It is also sometimes simply referred to as a pause. More generally, a breakpoint is a means of acquiring knowledge about a program during its execution” ("Breakpoint" 2017). Tracing defects is a process by which programmers follow the program’s execution step by step, evaluating the values of variables and stopping execution wherever required to inspect or reset them ("Debugging - What is Debugging ?"). Automation itself, as explained by Jeremiah, is literally automating this process using software.
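The definition of test automation above, separate software that runs tests and compares actual outcomes to expected ones, can be illustrated with a minimal sketch. The function under test (`apply_discount`) and its test cases are hypothetical examples invented for illustration, not code from any product mentioned in the interview.

```python
# Minimal sketch of test automation: a separate test harness drives the
# software under test and compares actual outcomes to expected outcomes.

def apply_discount(price: float, percent: float) -> float:
    """Software under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def run_automated_checks() -> list:
    """Automation: execute every test case and report any mismatch."""
    cases = [((100.0, 10.0), 90.0), ((50.0, 0.0), 50.0), ((80.0, 25.0), 60.0)]
    failures = []
    for (price, pct), expected in cases:
        actual = apply_discount(price, pct)
        if actual != expected:
            failures.append(f"apply_discount({price}, {pct}) = {actual}, expected {expected}")
    return failures

print(run_automated_checks())  # an empty list means every check passed
```

In a real pipeline, a framework such as pytest or JUnit plays the role of `run_automated_checks`, but the principle is the same: the comparison of actual against expected happens in code, not by hand.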
Jeremiah further explains that when deploying automation, specifications for software requirements are taken, unit tests are written, function tests are coded, performance tests are created, and edge tests are automated and placed into a CI/CD pipeline. Okay, more explaining is in order. Edge tests, or edge-case tests, take into consideration extreme situations. For example, if the age range you can enter into a form is between 20 and 80, what happens when you enter 200? In this situation, the software may not function correctly, and this would be caught in an edge test ("Software & Finance"). Opensource.com defines CI/CD in a manner that best illustrates how the process relates to an assembly line: “An assembly line in a factory produces consumer goods from raw materials in a fast, automated, reproducible manner. Similarly, a software delivery pipeline produces releases from source code in a fast, automated, and reproducible manner. The overall design for how this is done is called continuous delivery. The process that kicks off the assembly line is referred to as continuous integration. The process that ensures quality is called continuous testing and the process that makes the end product available to users is called continuous deployment.” Together, this is the CI/CD pipeline (Laster, "What Is CI/CD?"). Jeremiah gives the following example of how the pipeline can progress: “compilation, unit test, smoke test, integration test, security test, and deploy to production.” Some common CI tools are Jenkins, GitLab, CircleCI, and Bamboo, as referenced by Jeremiah. A final point on automation is regression testing. When developers change any portion of their code, even the smallest of changes can have substantial unexpected consequences. Regression testing exercises existing software to make sure the changes or additions haven’t broken any existing functionality.
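The age-range example above can be sketched as an edge-case test: probe the boundaries of the valid range and the extremes beyond them. The validator `validate_age` is a hypothetical stand-in for the form logic, not code from any real system.

```python
# Hedged sketch of edge-case testing for a form that accepts ages 20-80.

def validate_age(age: int) -> bool:
    """Hypothetical form validator: accept only ages 20 through 80."""
    return 20 <= age <= 80

def edge_case_tests() -> dict:
    """Probe values just inside, on, and far beyond each boundary."""
    inputs = [19, 20, 21, 79, 80, 81, 200, -5]
    return {age: validate_age(age) for age in inputs}

results = edge_case_tests()
assert results[20] and results[80]           # boundary values accepted
assert not results[19] and not results[81]   # just outside rejected
assert not results[200] and not results[-5]  # extreme inputs rejected
```

If the validator mishandled 200, the final assertion would fail, which is exactly the kind of defect an automated edge test is meant to surface before release.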
Jeremiah explains that ‘each time code goes to a branch, automation runs regression to make sure that new code hasn’t broken any existing code. This ensures the user has a consistent experience.’
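The regression idea can be sketched as a fixed suite of known-good cases that must still pass after every change. The utility `slugify` and its cases are hypothetical examples, not code from Wag or any other company mentioned.

```python
# Sketch of regression testing: checks that passed before a code change
# are re-run afterward; any mismatch means the change broke something.

def slugify(title: str) -> str:
    """Existing behavior to protect: trim, lowercase, hyphenate spaces."""
    return title.strip().lower().replace(" ", "-")

# Known-good input/output pairs recorded before the change.
REGRESSION_SUITE = {
    "Hello World": "hello-world",
    "  Padded  ": "padded",
    "already-slug": "already-slug",
}

def run_regression() -> list:
    """Re-run every known-good case; report anything that regressed."""
    return [f"{inp!r}: got {slugify(inp)!r}, expected {out!r}"
            for inp, out in REGRESSION_SUITE.items()
            if slugify(inp) != out]

print(run_regression())  # [] means the change broke no existing behavior
```

In a CI/CD pipeline this suite would run automatically on every push, which is what makes the consistent user experience Jeremiah describes possible at scale.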
How do these seemingly complex processes compare to QA in other industries? In service industries, service blueprinting creates a flowchart that isolates potential fail points in the process. Fail points are points in the process where things can go wrong. Furthermore, at ‘moments of truth’ the user expects something to happen. A process called poka-yoke also acts as a fail-safe, where designs are made so they cannot fail. Gap analysis likewise has its parallels in software development, where the gaps between where things are and where they should be are analyzed (Foster 2013). These processes manifest in various forms in software development. In the changing landscape software resides in, perfection is hard to achieve. Methods such as Six Sigma, which aim to create a near-perfect process, do not translate well to software development. Not directly utilizing Six Sigma does not mean that developers give in to sloppy work. The methods employed in QA automation take into consideration many factors that address not only quality but also process, production, and deployment. The feedback loop created ensures that each iteration improves on its predecessors.
Lean methodology, as mentioned previously, is practiced and implemented in software development to a certain extent. As Jeremiah mentioned, “in data environments perfection is impossible.” He further illustrates that, similar to Lean, Agile methodologies use a system of release cadences, weekly deployments, tickets, and point systems to identify waste. This acquisition process can be compared to how metrics are obtained through the Lean process. The Lean Government Metrics Guide points out several metrics that clearly relate: time metrics measure how long it takes to produce a product, quality metrics how often the process leads to mistakes, output metrics how many products are completed in a given time, and process-complexity metrics how many steps the process involves and how complicated it is (EPA 2009). This crossover demonstrates that software development methodologies owe much of their principles to methods used in older industries, be it manufacturing or service. Jeremiah mentioned similar processes used as ways of defining success: ‘How quickly we’re building, number of defects, the internal process, and how things look externally....How many issues do we catch before deployment and how many are caught by the users after deployment.’ These are just some methods of gathering metrics that assess quality, as highlighted by Jeremiah, and they cross over into other industries.
The metrics reported and collected must be organized to become useful to the various stakeholders. What happens with collected metrics is perhaps even more important than the metrics themselves. In Lean and Six Sigma, these metrics are used to “target the right problems, evaluate potential process improvements, and provoke appropriate actions for implementation” (EPA 2009). A framework often used in Lean is the “SMART model -- metrics should be Simple, Measurable, Actionable, Relevant, and Timely” (EPA 2009). Simple refers to the transparency and understandability of the metrics by all stakeholders; Measurable refers to metrics for which you can easily collect performance data without assumptions or estimates; Actionable refers to metrics that provide information which can be used to make improvements to operations or outcomes; and Relevant refers to metrics that support the organization’s strategic objectives and relate to the process at hand (EPA 2009). Relevant metrics, in software development, are typically gathered from software that communicates with other software or hardware. Jeremiah illustrates some techniques developers use to gather these metrics: ‘All machines and codes make calls to other systems. These calls and subsequent responses are intercepted. We’re essentially looking at network traffic and looking for malformed responses. This is done with proxy sniffing tools, and on an IoT (Internet of Things) level we’re looking at calls from the code to the chipset and back. Charles is a program often used in this process.’ These metrics, as noted by Jeremiah, include such data as failure reports and time-to-pass. This data is gathered in proprietary dashboards or test case managers, then compiled over time and analyzed.
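The malformed-response check Jeremiah describes can be sketched in miniature: intercepted call/response pairs are scanned, and anything that fails to parse or is missing required fields is flagged for the failure report. The captured responses and required keys below are invented stand-ins for real traffic, not output from Charles or any actual proxy tool.

```python
# Hedged sketch of scanning intercepted traffic for malformed responses.
import json

def is_malformed(raw: str, required_keys: set) -> bool:
    """A response is malformed if it is not valid JSON
    or lacks any of the required top-level keys."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return True
    return not required_keys.issubset(payload)

# Invented examples standing in for captured call/response traffic.
captured = [
    '{"status": "ok", "user_id": 7}',   # well-formed
    '{"status": "ok"}',                 # missing user_id
    'not json at all',                  # unparseable
]
report = [raw for raw in captured if is_malformed(raw, {"status", "user_id"})]
print(len(report))  # two malformed responses flagged for the failure report
```

Aggregating such flags over time, alongside timing data, is one simple way the dashboard metrics described above could be produced.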
The process as illustrated by Jeremiah is reminiscent of DMAIC, a data-driven improvement cycle, which is described in the following manner: “Define the problem and what the customer requires; Measure defects and process operations; Analyze data and discover the causes of the problem; Improve the process to remove the causes of defects; and Control the process to make sure defects don’t recur” (Foster 2013). Software developers are utilizing many of these same time-tested principles and tools, adjusted to better suit their products and environments.
Quality, as illustrated, is assessed in many of the same ways across various industries. Tools, mechanisms, and terminology may change, but the basic principles of quality assurance remain. A significant difference, even within the same industry, is how individual companies view quality. This brings us back to the Relevant component of the SMART model (EPA 2009). What is relevant to each company, and how each defines success, can vary drastically. Jeremiah states, ‘everyone looks at quality differently. It comes down to company need.’ He further points out ‘that it’s often a difference between a need for quality and a desire to quickly deploy. CI/CD with automation supports rapid deployments, while a quality focus involves both automation and manual testers.’ Further elaborating, ‘robots still have a hard time with certain judgments; they’re still not able to fully replace the human touch, but we’re pushing hard to change that.’
The evolution of quality assurance in software development is rapid. These changes are most easily seen in the mobile space, as mentioned by Jeremiah. He further points out that automation in mobile is often proprietary, but some tools have come along that assist in the process, such as XCTest for iOS and Espresso for Android, while Watir and Selenium have advanced web automation. He also mentions that YAML has cleaned up the writing of instructions for CI/CD. YAML stands for "YAML Ain't Markup Language." It is basically a human-readable structured data format; at its core, a YAML file is used to describe data ("YAML Syntax"). Containerization, and containerization software such as Docker, is another technique that helps developers and IT Ops work better together. Geoffrey Shenk, in his article The Fully Containerized Testing Strategy, points out, “Containers optimize testing of distributed apps by enabling multiple tasks and processes to run in tandem. Strictly speaking, containerization is operating system level virtualization. ... Containers enhance the development and release cycle by guaranteeing a standard of environment behavior” (Shenk 2018). Jeremiah adds that before containerization, testing was done on virtual machines and was less efficient. He further states that ‘with advancements in OpenCV, ML, and AI, manual testing jobs may soon be eliminated.’ Although the replacement of manual testers is not yet at hand, the possibility is not far off. Jeremiah suggests that a way for personnel in all units to stay relevant is to make sure they have ‘technical knowledge about their industry. The more technical knowledge across all units, the more efficient the quality assurance process.’
In this journey through quality assurance, we’ve seen that there are many similarities amongst diverse industries. As enumerated in the Eight Critical Steps for Lean Champions (EPA 2010), the following steps are critical to success: choose where to focus your improvement efforts, define process excellence and set clear goals, actively participate in process improvement events, assign staff and resources, provide visible support for process improvement efforts, monitor progress and hold people accountable, clear obstacles to successful implementation, and recognize and celebrate accomplishments. These same principles apply in the development of software. Data may come from differing sources and be gathered in differing ways, but the goal remains the same: to eliminate waste, have timely deployments, reduce defects, and increase customer satisfaction. As the landscape changes with ever-evolving technologies, it’s important not to forget why quality assurance exists. The foundation of quality assurance was built long ago, and it makes much more sense to build on that foundation. These founding principles, like those of our own country, can provide a framework to develop tools and techniques with purpose and continue to improve the entire process for ever better experiences, products, and services.
EPA. Lean Government Metrics Guide. 2009.
EPA. Leading Process Excellence. 2010.
“Breakpoint.” Wikipedia, Wikimedia Foundation, 11 Oct. 2017, en.wikipedia.org/wiki/Breakpoint.
“Debugging - What Is Debugging? Debugging Meaning, Debugging Definition.” The Economic Times, Economic Times, economictimes.indiatimes.com/definition/debugging.
Foster, S. Thomas. Managing Quality: Integrating the Supply Chain. Pearson, 2013.
Laster, Brent. “What Is CI/CD?” Opensource.com, opensource.com/article/18/8/what-cicd.
Pojasek, Robert B. “Risk Management 101.” The Canadian Journal of Chemical Engineering, Wiley-Blackwell, 21 Mar. 2008, onlinelibrary.wiley.com/doi/abs/10.1002/tqem.20180.
Shenk, Geoffrey. “The Fully Containerized Testing Strategy.” Functionize.com, 19 Feb. 2018, www.functionize.com/blog/the-fully-containerized-testing-strategy/.
“Software & Finance.” Software Quality Assurance - Edge Case Testing, www.softwareandfinance.com/SoftwareTesting/SoftwareQualityAssurance/EdgeCaseTesting.html.
“Test Automation.” Wikipedia, Wikimedia Foundation, 13 Aug. 2018, en.wikipedia.org/wiki/Test_automation.
“YAML Syntax.” Atom, learn.getgrav.org/advanced/yaml.