
The following scenarios from PramodChandoria are an application of the generic Automation Execution Scenario to the domain of automated test execution.

Scenario 1A: User requests automated execution of a test

  1. User queries for the list of available automated test cases (OSLC: consumer requests a list of Automation Plans, or presents an Automation Plan picker dialog)
  2. User selects one of the automated tests. (OSLC: consumer selects a test from the picker)
  3. User chooses one of the automated test scripts associated with the test - a test may have multiple scripts. (OSLC: user begins creating an automation request, perhaps with a delegated creation UI)
  4. User queries for live registered adapters matching the type of script selected in the previous step and selects one. (OSLC: user continues creating the request by specifying the desired execution environment)
  5. User clicks the Execute button to start the execution (OSLC: an automation request/job is created and POSTed to the automation provider; see the sketch after this scenario)
  6. Test automation provider internally creates an Execution Request to track the execution it has started (OSLC: the automation request is translated to a provider-specific representation and tracked)
  7. An Execution viewer page is opened which displays information about the Execution Request created in the previous step. (OSLC: the consumer queries the automation request in progress and provides a UI to display current status.)
  8. Automation tool (adapter) queries the test server for any job available to it. (OSLC: automation tool specific server/agent interactions)
  9. Automation tool (adapter) receives a job assigned to it (OSLC: test tool specific server/agent interactions)
  10. The automated test is executed on the chosen machine, which is either selected by the user or chosen automatically by the automation provider at run time. (OSLC: test tool specific execution)
  11. User observes the progress of the execution and any status messages about it. (OSLC: the consumer queries the automation result in progress and provides a UI to display current status and contributions.)
  12. Upon completion of execution, an Execution Result is created (OSLC: the automation result is final and is potentially translated to another OSLC test artifact or a provider-specific artifact with additional information)

Note: There are two types of results: automation results and test execution results. The automation could run successfully while the test itself fails. The test execution result could be an OSLC QM artifact, but it does not have to be.
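
As a concrete illustration of steps 5, 7, and 11, here is a minimal Python sketch of the consumer side. It assumes a hypothetical automation provider at test.example.com with a creation factory for Automation Requests and a JSON representation of resources; the oslc_auto property names follow the draft OSLC Automation vocabulary, but every URL, credential, and the JSON shape are illustrative assumptions, not part of the scenario.

    import time
    import requests

    BASE = "https://test.example.com/automation"   # hypothetical provider root
    AUTH = ("tester", "secret")                    # illustrative credentials

    # Step 5: create the Automation Request and POST it to the provider's
    # creation factory, here expressed in Turtle with the oslc_auto vocabulary.
    request_body = """
    @prefix oslc_auto: <http://open-services.net/ns/auto#> .
    @prefix dcterms:   <http://purl.org/dc/terms/> .

    <> a oslc_auto:AutomationRequest ;
       dcterms:title "Run login smoke test" ;
       oslc_auto:executesAutomationPlan <https://test.example.com/plans/42> .
    """
    resp = requests.post(BASE + "/requests", data=request_body,
                         headers={"Content-Type": "text/turtle"}, auth=AUTH)
    resp.raise_for_status()
    request_url = resp.headers["Location"]         # URL of the new request

    # Steps 7 and 11: poll the request until the provider reports a final
    # state, surfacing the current status to the user along the way.
    while True:
        status = requests.get(request_url, auth=AUTH,
                              headers={"Accept": "application/json"}).json()
        print("state:", status.get("oslc_auto:state", "unknown"))
        if str(status.get("oslc_auto:state", "")).endswith("complete"):
            break
        time.sleep(5)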

Scenario 1B: User creates a scheduled automated execution of a test

This is a repeat of Scenario 1A with the following variations:

  1. User uses the test tool's schedule editor to schedule the creation of automation requests and provides the input (script, environment) data. For discussion: this could mean creating the OSLC automation request with an attribute which specifies when execution is to occur (see the sketch after this list), or it could be up to the test tool to create the requests at the appropriate time
  2. The test execution runs “in the background”. Progress might not be watched by a user, or a user may be watching the job status on a test execution console which displays jobs in progress
  3. The automation results and test execution results are made available and are associated with the test case executed
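
One possible shape for the first option in variation 1, purely for discussion: an Automation Request carrying a scheduling attribute. Neither the ex:scheduledFor property nor provider support for it is defined by OSLC; the sketch only makes the option concrete.

    # Hypothetical: an Automation Request carrying a proposed scheduling
    # attribute (ex:scheduledFor is NOT an OSLC-defined property).
    scheduled_request = """
    @prefix oslc_auto: <http://open-services.net/ns/auto#> .
    @prefix dcterms:   <http://purl.org/dc/terms/> .
    @prefix ex:        <https://test.example.com/ns#> .

    <> a oslc_auto:AutomationRequest ;
       dcterms:title "Nightly regression run" ;
       oslc_auto:executesAutomationPlan <https://test.example.com/plans/42> ;
       ex:scheduledFor "2012-08-31T02:00:00Z"^^<http://www.w3.org/2001/XMLSchema#dateTime> .
    """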

Scenario 2: An automated test is executed upon Build Completion

  1. A test case (or group of test cases/test suite) is scheduled to execute upon completion of a Build (OSLC: Test tool is configured to create a scheduled test as in scenario 1B)
  2. User also configures information like the test machine on which the test is to be executed and the script to be executed. (OSLC: script and environment data specified as in Scenario 1B).

    2 a. There are other environment variables which may be configured and passed between the various test cases during execution (like execution variables)
    2 b. The test machine may also be a runtime argument, selected at the time of execution based on availability.

  3. A new Build is completed (OSLC: A new automation result is produced by a build tool automation provider)
  4. Test tool is notified of the new build event (OSLC: the test tool, as an automation consumer, detects a new automation result produced by the build tool automation provider - this could be via polling or notification; see the sketch after this list)
  5. Test tool searches all related schedules to be triggered by detection of the new build. Note: some schedules may choose to run only on a passed build, whereas others may be configured to always run
  6. Test tool executes all tests scheduled by build completion (follows Scenarios 1A and 1B)
  7. The schedule may optionally have deployment, configuration and/or tear down jobs to be executed as well. They will also be executed as part of the schedule. (OSLC: test tool invokes deployment automation providers before initiating execution)
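
A sketch of steps 3-5, assuming the test tool polls the build tool's OSLC query capability for completed automation results. The query base URL, the JSON shape, and the passed-build check are assumptions layered on the OSLC query syntax and the draft oslc_auto vocabulary; a notification mechanism, where available, would replace the polling loop.

    import time
    import requests

    # Hypothetical query capability of the build tool automation provider.
    BUILD_RESULTS = "https://build.example.com/query/results"
    AUTH = ("testtool", "secret")
    seen = set()

    def trigger_schedules(build_result):
        """Create Automation Requests for each schedule tied to this build
        definition, following Scenarios 1A/1B (body omitted)."""

    # Steps 3-4: poll for newly completed build results.
    while True:
        resp = requests.get(
            BUILD_RESULTS,
            params={"oslc.where":
                    "oslc_auto:state=<http://open-services.net/ns/auto#complete>"},
            headers={"Accept": "application/json"}, auth=AUTH)
        for result in resp.json().get("oslc:results", []):  # assumed JSON shape
            uri = result["rdf:about"]
            if uri in seen:
                continue
            seen.add(uri)
            # Step 5: some schedules run only on a passed build; the
            # always-run case is elided here.
            if result.get("oslc_auto:verdict", "").endswith("passed"):
                trigger_schedules(result)
        time.sleep(60)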

Note: In step 1, there is a configuration scenario where the user in the test tool requests a list of build definitions from the build tool. This could be done via a delegated picker from the build tool, or programmatically via queries to the build tool (see the query sketch below). Where process flows cross domains, it is critical that delegated UIs and OSLC query are supported.
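
For the programmatic route, an OSLC query against the build tool might look like the following; the query base URL and the build-definition resource shape are illustrative assumptions, while oslc.select and oslc.prefix are standard OSLC query parameters.

    import requests

    # Hypothetical query base exposed by the build tool for build definitions.
    QUERY_BASE = "https://build.example.com/query/definitions"

    resp = requests.get(
        QUERY_BASE,
        params={"oslc.select": "dcterms:title",
                "oslc.prefix": "dcterms=<http://purl.org/dc/terms/>"},
        headers={"Accept": "application/json"},
        auth=("testtool", "secret"))
    for definition in resp.json().get("oslc:results", []):  # assumed JSON shape
        print(definition["rdf:about"], definition.get("dcterms:title"))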

There is at least one additional scenario which expands on step 7 of this scenario: the interaction between a test tool and a tool which automates deployment of test environments.

Scenario 3: Test execution agent registers with Test Tool

  1. User launches a small utility adapter corresponding to the test execution agent; sometimes the execution agent is embedded within the tool itself
  2. User provides the information required for the execution agent to connect to the test tool
  3. Execution agent registers with the test tool, providing information like tool type, supported capabilities, and other agent metadata
  4. Test tool, upon successful registration, provides a URL where the adapter should poll for work (see the sketch after this list)
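
This handshake is not defined by OSLC, so everything below - the endpoint, the payload fields, and the pollUrl convention - is a hypothetical illustration of steps 2-4.

    import requests

    TEST_TOOL = "https://test.example.com"      # hypothetical test tool root

    registration = {
        "toolType": "junit-runner",             # step 3: tool type
        "capabilities": ["junit", "selenium"],  # step 3: supported capabilities
        "hostname": "agent01.example.com",      # step 3: other agent metadata
    }
    resp = requests.post(TEST_TOOL + "/agents", json=registration,
                         auth=("agent01", "secret"))
    resp.raise_for_status()

    # Step 4: on success, the test tool answers with the URL this agent
    # should poll for work (field name assumed).
    poll_url = resp.json()["pollUrl"]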

Scenario 4: Test execution agent polls for any work available

  1. A registered execution agent polls (GET) the test tool for any work available
  2. Test tool searches for any Execution Request which has not yet been executed and is assigned to the polling agent
  3. The test tool, upon finding an Execution Request assigned to the polling execution agent, provides the URL of the Execution Request in the response

    3 a. Alternatively, the execution agent queries for jobs, receives a list of jobs, and picks up the job details from there. The execution agent informs the test tool that it has picked up an execution request.

  4. Execution agent, upon finding the URL of the new work (Execution Task), reads (GET) the Execution Task
  5. Execution agent interprets the Execution Task resource to carry out the execution. Typical information includes the path to the automation script to be executed, its arguments, and the location where the results should be updated
  6. Execution agent commands the automation tool to carry out execution of the specified script
  7. Execution agent informs the test tool that the execution request is taken and keeps updating the progress (0-100%) as the execution proceeds
  8. At the completion of execution, the execution agent creates an Execution Result based on the execution. (Typically an automation tool has its own output log; the execution agent converts tool-specific results into the test tool's Execution Result)
  9. Execution agent uploads the Automation Result and Execution Result, links the Execution Result back to the Execution Request, and marks the Execution Request state completed with progress 100%
  10. Execution agent can optionally link the output files and error logs to the Execution Result created in the test tool (a sketch of the whole loop follows)
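
A minimal agent loop covering steps 1-10, assuming the Scenario 3 poll URL, a JSON exchange, and the field names shown in comments. None of these conventions is mandated by the scenario, and the script invocation is deliberately simplistic.

    import subprocess
    import time
    import requests

    AUTH = ("agent01", "secret")
    poll_url = "https://test.example.com/agents/agent01/work"  # from Scenario 3

    while True:
        # Steps 1-3: poll for work; the test tool answers with the URL of an
        # Execution Request assigned to this agent, or no content at all.
        work = requests.get(poll_url, auth=AUTH)
        if work.status_code == 204:               # "no work" convention (assumed)
            time.sleep(30)
            continue
        task_url = work.json()["executionRequest"]    # field name assumed

        # Steps 4-5: read the Execution Task and extract the script path,
        # arguments, and the location where results should be updated.
        task = requests.get(task_url, auth=AUTH).json()
        script = task["scriptPath"]
        args = task.get("arguments", [])
        results_url = task["resultsUrl"]

        # Step 7: mark the request taken, then step 6: run the script.
        requests.put(task_url, json={"state": "inProgress", "progress": 0},
                     auth=AUTH)
        outcome = subprocess.run(["sh", script, *args],
                                 capture_output=True, text=True)

        # Steps 8-10: convert the tool-specific outcome into an Execution
        # Result, upload it with the logs, and close out the request at 100%.
        requests.post(results_url, json={
            "verdict": "passed" if outcome.returncode == 0 else "failed",
            "log": outcome.stdout + outcome.stderr,   # step 10: output/error logs
        }, auth=AUTH)
        requests.put(task_url, json={"state": "complete", "progress": 100},
                     auth=AUTH)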