
QM Scenario: Traceability

STATUS: WORKING DRAFT. This is a working draft of the scenario; its contents are expected to change.

This scenario focuses on the needs of consumers to create and follow traceability links across multiple resources and OSLC domains.

Scenario

Traceability links are established across the RM, CM, and QM domains.

  1. Architect creates a requirement using RM tool
  2. Developer creates a change request using CM tool
  3. Developer requests a requirement selection dialog
  4. RM provider provides selection dialog
  5. Developer selects the requirement
  6. CM tool links the requirement with the change request
  7. Tester creates a test case using QM tool
  8. Tester requests a change request selection dialog
  9. CM provider provides selection dialog
  10. Tester selects change request
  11. QM tool links the test case with the change request
  12. Architect requests a traceability report for the requirement using RM tool
  13. RM system generates a traceability graph following the links described above
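The traceability graph in step 13 could be built by following the links created in steps 6 and 11 back from the requirement. The sketch below is purely illustrative: the resource identifiers, predicate names, and in-memory link store are hypothetical, not part of any OSLC specification (real tools would expose these links as RDF properties over HTTP).

```python
# Hypothetical in-memory link store standing in for the links created in
# steps 6 (change request -> requirement) and 11 (test case -> change request).
links = {
    "cm:CR-1": [("relatedRequirement", "rm:REQ-1")],
    "qm:TC-1": [("relatedChangeRequest", "cm:CR-1")],
}

def traceability_graph(root, links):
    """Follow links transitively from a root resource, collecting edges."""
    # Invert the link store so we can walk *incoming* links outward from
    # the requirement (requirement <- change request <- test case).
    incoming = {}
    for subj, preds in links.items():
        for pred, obj in preds:
            incoming.setdefault(obj, []).append((pred, subj))
    edges, frontier, seen = [], [root], {root}
    while frontier:
        node = frontier.pop()
        for pred, subj in incoming.get(node, []):
            edges.append((subj, pred, node))
            if subj not in seen:
                seen.add(subj)
                frontier.append(subj)
    return edges

graph = traceability_graph("rm:REQ-1", links)
```

Starting from the requirement, the traversal recovers both links, so a report generated this way spans all three domains.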

Requirements validation

  1. Traceability links are established as described in previous scenario
  2. Developer implements plan item and notifies tester
  3. Tester creates execution record for the plan item
  4. Tester executes test and creates an execution result
  5. Architect searches for test cases associated with the requirement
  6. Architect navigates to a test case and examines its execution records
  7. Architect examines execution results associated with execution record

Regression test

  1. Tester finds a bug and creates a defect
  2. Developer delivers a fix for the defect and resolves it
  3. Developer requests a test case selection dialog
  4. QM provider provides selection dialog
  5. Developer selects a regression test case
  6. CM tool links the defect to the regression test case
  7. Tester executes the regression test
  8. Regression test succeeds
  9. Tester closes defect
-- alternate
  • Regression test fails
  • Tester reopens defect
  • Go to step 2
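Steps 2 through 9, together with the alternate flow, amount to a fix/retest loop. The following sketch is a hypothetical illustration (the function names, defect record, and retry bound are assumptions, not any tool's API):

```python
def fix_and_verify(defect, run_regression_test, deliver_fix, max_attempts=5):
    """Hypothetical sketch of the fix/retest loop: deliver a fix, rerun
    the linked regression test, then close or reopen the defect."""
    for _ in range(max_attempts):
        deliver_fix(defect)                # Developer delivers a fix (step 2)
        if run_regression_test(defect):    # Tester executes test (steps 7-8)
            defect["status"] = "closed"    # Regression test succeeds (step 9)
            return defect
        defect["status"] = "reopened"      # Alternate flow: reopen, go to step 2
    return defect

# Toy harness: the second fix attempt makes the regression test pass.
attempts = {"n": 0}
def deliver_fix(d): attempts["n"] += 1
def run_test(d): return attempts["n"] >= 2
result = fix_and_verify({"id": "DEF-1", "status": "open"}, run_test, deliver_fix)
```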


Pre-conditions:

Post-conditions:

Alternatives:

Resources

  • QM system
  • RM system
  • CM system

Regarding traceability links: it is worth emphasizing that not all changes need an associated test, or at least not a new test. Two situations should be allowed for: (1) a developer change does not require a test; this may be a risk, so the RM tool needs to capture that assessment and allow for sign-off; (2) a developer change does not require the creation of a new test case (for example, an internal change with the API fixed), so the QM tool must allow an existing test to be associated with the change.

-- NigelLawrence - 06 May 2010

Regarding Requirements validation, some extensions to the approach above: (1) the Architect asks for a summary view of all test cases associated with one or more requirements where the last execution result is in a certain state (normally a failure). (2) The Architect wishes to know which tests associated with a requirement have not been executed within the last period of time (iteration, milestone, calendar month, etc.). (3) A combination of (1) and (2); for example, show all test cases associated with these requirements where the last result is within the last week and it failed.

-- NigelLawrence - 06 May 2010
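The summary views described in extensions (1) through (3) could be served by a single filtered query. The sketch below is a hypothetical illustration; the record fields, test-case data, and state names are assumptions, not any QM tool's data model:

```python
from datetime import datetime

# Hypothetical records: each test case carries its linked requirement,
# its last execution result state, and its last execution timestamp.
test_cases = [
    {"id": "TC-1", "requirement": "REQ-1", "last_state": "failed",
     "last_run": datetime(2010, 5, 5)},
    {"id": "TC-2", "requirement": "REQ-1", "last_state": "passed",
     "last_run": datetime(2010, 3, 1)},
    {"id": "TC-3", "requirement": "REQ-2", "last_state": "failed",
     "last_run": datetime(2010, 5, 6)},
]

def summary(cases, requirements, state=None, run_since=None):
    """Filter test cases by requirement, last-result state, and recency."""
    out = []
    for tc in cases:
        if tc["requirement"] not in requirements:
            continue
        if state is not None and tc["last_state"] != state:
            continue
        if run_since is not None and tc["last_run"] < run_since:
            continue
        out.append(tc["id"])
    return out

# Extension (3): test cases for these requirements whose last result is a
# failure within the last week. Extension (2) is the complement of run_since.
recent_failures = summary(test_cases, {"REQ-1", "REQ-2"}, state="failed",
                          run_since=datetime(2010, 4, 30))
```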

Regarding Regression test: normally the tester finds the bug while running a particular regression test, so to a certain extent verifying the fix is simple: re-execute that test (i.e. the dialog with the QM tool to select the test is a no-op). But it might be helpful if the QM tool could suggest other associated tests pertinent to the nature of the change, to check that the fix not only stops the test from failing but also gives confidence that it hasn't regressed anything else.

Another scenario that isn't really accounted for is updating the regression suites. What is the workflow for filling holes in a regression bucket? Suppose a customer problem or some ad hoc investigation exposes a hole in your regression testing that must be fixed. The QM tool should allow for this eventuality and possibly link through to other resources used by a service organisation to confirm that the hole has been plugged.

Finally, I'm interested in the process that goes before step (1) above. How has the tester (in my case an automated test execution engine) executed the regression test that resulted in the failure? Is it blindly running through all the regression tests? Ideally the tester would be guided by correlations between the underlying developer changes and the regression tests, so that only relevant tests are executed, in order of relevance.

-- NigelLawrence - 06 May 2010
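The correlation-guided test selection suggested above could be approximated by ranking regression tests on the overlap between the files a change touches and the files each test covers. This is a hypothetical sketch; the coverage map and file names are invented for illustration:

```python
def rank_tests(changed_files, coverage):
    """Return test IDs ordered by how many changed files each test covers,
    dropping tests with no overlap (only relevant tests are executed)."""
    scores = {
        test: len(changed_files & files)
        for test, files in coverage.items()
    }
    return [t for t, s in sorted(scores.items(), key=lambda kv: -kv[1]) if s > 0]

# Hypothetical coverage map: which source files each regression test exercises.
coverage = {
    "TC-login": {"auth.c", "session.c"},
    "TC-report": {"report.c"},
    "TC-session": {"session.c"},
}
ordered = rank_tests({"session.c", "auth.c"}, coverage)
```

A developer change touching `session.c` and `auth.c` would rank `TC-login` first, then `TC-session`, and skip `TC-report` entirely.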

Topic revision: r3 - 06 May 2010 - 17:26:26 - NigelLawrence