A NOTE ABOUT GENERAL BUG COUNTS
Often there is a big push to lower the total bug count. This is obviously a good idea if the time is available, but the key focus should be on making sure that the right bugs are fixed. Recognizing that there will always be bugs, you want to get rid of the bugs that customers are most likely to find. Therefore, use general bug counts as a guide, but work to understand which bugs are the highest priority and fix those first.
Measuring General Bug Counts
Let's look at the Total Bug Count. Are you capturing it right now? Are you sure? You may not be, because the Total Bug Count includes bugs found in production.
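A minimal sketch of one way to check what your Total Bug Count actually captures, assuming the TFS 2010 client object model and the MSF for CMMI How Found field (the collection URL and project name are placeholders):

using System;
using System.Collections.Generic;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class TotalBugCount
{
    static void Main()
    {
        // Connect to the team project collection (URL is a placeholder).
        var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();

        // Pull every Bug in the project, regardless of state or area.
        var bugs = store.Query(
            "SELECT [System.Id] FROM WorkItems " +
            "WHERE [System.TeamProject] = 'BlogEngine.NET' " +
            "AND [System.WorkItemType] = 'Bug'");

        // Break the total down by where each bug was found
        // (the How Found field name assumes the MSF for CMMI template).
        var countsBySource = new Dictionary<string, int>();
        foreach (WorkItem bug in bugs)
        {
            string source = (bug.Fields["How Found"].Value ?? "Unknown").ToString();
            countsBySource[source] = countsBySource.ContainsKey(source)
                ? countsBySource[source] + 1 : 1;
        }

        Console.WriteLine("Total bugs: {0}", bugs.Count);
        foreach (var pair in countsBySource)
            Console.WriteLine("  {0}: {1}", pair.Key, pair.Value);
    }
}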
Bug Count per Feature has a different focus. The goal of this metric is to identify weak features—those likely to produce more bugs in the future. Gathering the number of bugs per feature enables you to classify the highest-risk features and to implement additional testing against them. A feature that has a high number of bugs should have a correspondingly high number of Test Cases, which enables teams to get ahead of the problem. This may also include doing additional code reviews and being proactive about feature quality. This information can be captured in one of two ways: by using areas to classify bugs as belonging to a feature, or by simply linking bugs to the requirement. (I would argue for the latter.)
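To illustrate the linking approach, here is a minimal sketch that uses the TFS 2010 client object model to count the Bug work items linked to each requirement. The collection URL, the project name, and the User Story work item type are assumptions; substitute Requirement if you use the MSF for CMMI template.

using System;
using System.Linq;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class BugCountPerFeature
{
    static void Main()
    {
        var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();

        // Find the requirements in the project (type name is an assumption).
        var requirements = store.Query(
            "SELECT [System.Id], [System.Title] FROM WorkItems " +
            "WHERE [System.TeamProject] = 'BlogEngine.NET' " +
            "AND [System.WorkItemType] = 'User Story'");

        foreach (WorkItem requirement in requirements)
        {
            // Count the linked work items that are Bugs.
            int bugCount = 0;
            foreach (RelatedLink link in requirement.Links.OfType<RelatedLink>())
            {
                WorkItem linked = store.GetWorkItem(link.RelatedWorkItemId);
                if (linked.Type.Name == "Bug")
                    bugCount++;
            }
            Console.WriteLine("{0}: {1} bug(s)", requirement.Title, bugCount);
        }
    }
}

The same information can also be pulled with a direct-links work item query in Team Explorer and exported to Excel for reporting.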
Regression bugs are the last of the bug count totals. Regression bugs indicate two items: the testing process is inadequate when dealing with bug fixes
Formal Reviews
Formal reviews, or the lack thereof, are one of the key issues when looking at quality. So how do you track reviews, and what does this have to do with testers? The MSF for CMMI template includes a Review work item type. One thing that works is to attach one or more Review work items to each feature that needs to be reviewed (perhaps a design review, an in-progress review, and a final review). This enables you to schedule reviews. But how effective are your reviews? A review requires a couple of key items. A formal agenda is absolutely required; informal reviews cannot provide as many benefits. You also need to provide a list of the items that will be reviewed and what the expectations for those items are. You must have the code metrics, the results of a static code analysis run, code comments, and so on. You also must give the individuals who will be involved in the review some time to read the material before the review.
You will find things during reviews; the developers' work should not be "complete" at this point, so don't penalize them. And the third rule is that the review is to provide constructive criticism—not to beat the person who made the mistakes over the head. This is where the old axiom about not throwing stones in glass houses applies. But what should you do? You can do a few things that give you the ability to report on reviews. The first is to simply create child tasks for the review—one for each issue discovered. This enables you to track the status of all the items found during the review; it does not penalize the developer but provides a list of what needs to be fixed. It also enables you to determine the effectiveness of the review process. If you hold a review and no tasks are created from it, you aren't doing the review correctly. The reality is that you will almost always find something to fix, so a lot of reviews with no associated tasks means the process needs to be examined.
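If you record the issues programmatically, the child tasks can be created and linked to the Review work item in one pass. A minimal sketch, assuming the TFS 2010 client object model; the collection URL, project name, Review work item ID, and issue titles are placeholders:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class ReviewIssueTasks
{
    static void Main()
    {
        var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();
        var project = store.Projects["BlogEngine.NET"];

        // The Review work item the new tasks will be attached to (ID is a placeholder).
        WorkItem review = store.GetWorkItem(1234);

        // Parent-to-child links use the standard hierarchy link type.
        WorkItemLinkTypeEnd childEnd =
            store.WorkItemLinkTypes[CoreLinkTypeReferenceNames.Hierarchy].ForwardEnd;

        // One child task per issue discovered during the review (titles are examples).
        string[] issuesFound =
        {
            "Null reference possible when the post has no comments",
            "Exception is swallowed in the save routine"
        };

        foreach (string issue in issuesFound)
        {
            var task = new WorkItem(project.WorkItemTypes["Task"]) { Title = issue };
            task.Save();
            review.WorkItemLinks.Add(new WorkItemLink(childEnd, task.Id));
        }

        review.Save();
        Console.WriteLine("Created {0} review issue tasks.", issuesFound.Length);
    }
}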
How do you track this coverage? The easiest way is to add an area for Normal Path and an area for Alternative Path and simply classify the Test Cases in the appropriate area. This may not always be acceptable, because areas may be used for some other specific purpose. Another way to handle this is to modify the Test Case work item type and add a field that enables you to specify normal or alternative paths. Either of these simple changes can enable you to run reports that determine what percentage of the normal path tests have been executed against a requirement and what percentage of alternative path tests have been executed against a requirement. You can then correlate bugs found with test coverage of the requirements and make an informed decision about when to increase the test coverage of alternative path tests.
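A minimal sketch of the area-based approach, counting the Test Cases classified under each path. The collection URL, project name, and area paths are assumptions; correlating these counts with executed test results would additionally require the test management API, which is omitted here.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class PathCoverage
{
    static void Main()
    {
        var tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = tpc.GetService<WorkItemStore>();

        int normal = CountTestCases(store, @"BlogEngine.NET\Normal Path");
        int alternative = CountTestCases(store, @"BlogEngine.NET\Alternative Path");

        Console.WriteLine("Normal path Test Cases:      {0}", normal);
        Console.WriteLine("Alternative path Test Cases: {0}", alternative);
    }

    static int CountTestCases(WorkItemStore store, string areaPath)
    {
        // Count the Test Case work items classified under the given area.
        var results = store.Query(
            "SELECT [System.Id] FROM WorkItems " +
            "WHERE [System.WorkItemType] = 'Test Case' " +
            "AND [System.AreaPath] UNDER '" + areaPath + "'");
        return results.Count;
    }
}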