AbeachA

The report generated by the automation environment is the most visible aspect of automation.  As confidence in automation grows, so does the number of people who ask to receive the automation report.  The report must therefore contain accurate information, be unambiguous and complete, distinguish new failures from existing failures, and assist with troubleshooting.

The AbeachA AI21 automation report meets and exceeds these standards.  The report below provides a window into the power and versatility of the AI21.

Reviewing the report below, from top to bottom, we see:

  • The software image/build (BetaImage_b5.bin) installed on the test bed and tested.
  • Similar test cases are grouped into test suites, and test suites are grouped into master test files according to their purpose; here the master file is AI_TEST_BAT (Automation Infrastructure Basic Acceptance Tests).
  • Test cases can be grouped into levels (here, Daily (1)), and levels can be scheduled to run at different times (Daily, Weekly, Monthly, Sanity, etc.).
  • Multiple test beds are supported, along with the ability to restrict images to specific test beds.  Additionally, email distribution lists can be defined per test bed or per software image tested.
  • An explanatory note can be added if the automation job is started manually via the web.  The note supports HTML, allowing emphasis (bold, color, etc.) or a link to be added.
  • Test suites can be grouped by similarity of function, and an explanatory report header can be added for each group.  The header supports HTML, allowing links to test plans and other references.
  • Each test suite has a brief description so the reader knows what is being tested.
  • Pass, Fail, and Null results are recorded, with the Fails listed in red for emphasis.
  • If a suite accumulates more than 3 (configurable) new failures, a special test is run to verify test bed integrity.  If this test passes, the suite continues; otherwise the suite is exited (see the harness sketch following this list).
  • A test case that exceeds its run-time timer value is considered hung; it is stopped (if it has not already crashed) and flagged with red emphasis.
  • If a test case hangs, a special recovery test suite can be run (for example, one that resets the test bed to a default state).
  • All non-pass test cases are listed by test suite after the summary section.
  • Each test case returns a 75-character (maximum) description of its purpose, giving the reader a general idea of the failure (see the test case sketch following this list).
  • Clicking on the test case description will display the log file for that test case.
  • A bug ID, or simply an explanatory note, can be displayed for each non-passing test case.  This lets the reader see which failures are new and which are already known.
  • The start and stop time for the automation job is included.
  • The name and location of the results directory is included.
  • All reports are archived on a unique web-page for future reference.
  • A link to a test bed diagram is supported.
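
As a concrete illustration of the return codes and the 75-character purpose line, the sketch below shows what a minimal test case might look like.  It is written in Tcl because the test cases in the sample report are .tcl files, but everything in it (the file name, the ping target, the check itself) is an assumption made for illustration.  Only the FAIL = 0 and NULL = 2 return codes come from the report; PASS = 1 and the convention that the purpose line is the first line of output are assumptions.

    # AI_PING1.tcl -- hypothetical test case sketch; the file name, target
    # address, and ping check are illustrative assumptions.
    # Return codes seen in the sample report: FAIL = 0, NULL = 2, and
    # user-defined codes 20-29 trigger a custom follow-up test case.
    # PASS = 1 is an assumption; the report does not state it explicitly.
    set PASS 1
    set FAIL 0
    set NULL 2

    # 75-character (max) purpose line; the report prints this next to any
    # non-passing result and links it to the test case log file.
    puts "Verify the device under test answers a single management ping"

    # Placeholder for the real test work: ping the device once and map the
    # outcome to the harness return code.
    if {[catch {exec ping -c 1 192.168.0.1}]} {
        exit $FAIL
    }
    exit $PASS
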
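The failure-threshold and hang-handling rules can be pictured with a harness-side sketch as well.  Everything below is assumed for illustration: the procedure names, the use of the coreutils timeout command to enforce the run-time timer, the 180-second default limit, the recovery and integrity test case names, and the PASS = 1 mapping.  Only the FAIL = 0 / NULL = 2 codes and the "more than 3 new failures triggers an integrity check" rule come from the description above.

    # Hypothetical harness-side sketch; all procedure and file names are
    # illustrative.  Runs one test case under the coreutils "timeout"
    # command and maps its exit status to a symbolic result.
    proc runWithTimeout {script {limitSec 180}} {
        set code 0
        if {[catch {exec timeout $limitSec tclsh $script} out options]} {
            set errorCode [dict get $options -errorcode]
            if {[lindex $errorCode 0] ne "CHILDSTATUS"} {
                return "NULL"   ;# test case could not be run at all
            }
            set code [lindex $errorCode 2]
        }
        # FAIL = 0 and NULL = 2 come from the sample report; PASS = 1 is an
        # assumption; 124 is the exit code "timeout" uses when the run-time
        # limit expires, treated here as HUNG.
        switch -- $code {
            1       { return "PASS" }
            0       { return "FAIL" }
            2       { return "NULL" }
            124     { return "HUNG" }
            default { return "FAIL" }
        }
    }

    # Runs a suite and applies the two rules described in the list above:
    # after more than $maxNewFailures failures, run an integrity test and
    # exit the suite if it does not pass; if a test case hangs, run a
    # recovery test case that resets the test bed.  (The real report
    # separates new failures from known ones via bug annotations; this
    # sketch simply counts every FAIL.)
    proc runSuite {testCases {maxNewFailures 3}} {
        set newFailures 0
        foreach tc $testCases {
            set result [runWithTimeout $tc]
            if {$result eq "FAIL"} {
                incr newFailures
            }
            if {$result eq "HUNG"} {
                runWithTimeout "AI_ResetTestBed.tcl"   ;# hypothetical recovery case
            }
            if {$newFailures > $maxNewFailures} {
                if {[runWithTimeout "AI_Integrity.tcl"] ne "PASS"} {
                    return "ABORTED"   ;# exit the test suite
                }
                set newFailures 0
            }
        }
        return "DONE"
    }
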

+-------------------------------------Automated Test Results-------------------------------------+
                                          Name: BetaImage_b5.bin
                                    Suites Run: AI_TEST_BAT
                                    Test Level: Daily (1)
                                      Test Bed: TB_4_01

Note: An automation job manually started via the web, with a note.  The note supports HTML.

+--- Section 1 - The AI21 reliably runs any test case, in any order.
1.   Initialize the test bed                     PASS (2)    FAIL (1)    NULL (1)     TOTAL (4)
2.   Reset the Test Bed                          PASS (1)    FAIL (0)    NULL (0)     TOTAL (1)    

+--- Section 2 - Test suites, of related test cases, provide a high level of test control.
3.   Test cases return PASS result               PASS (2)    FAIL (0)    NULL (0)     TOTAL (2)    
4.   Test cases return NULL result               PASS (0)    FAIL (0)    NULL (3)     TOTAL (3)    
5.   Test cases return FAIL result               PASS (1)    FAIL (5)    NULL (0)     TOTAL (6)    
6.   3 minute test cases return PASS             PASS (0)    FAIL (0)    HUNG (1)     TOTAL (1)    
7.   Reset the Test Bed                          PASS (1)    FAIL (0)    NULL (0)     TOTAL (1)    

+--- Section 3 - Test suites can be grouped by function, and given a descriptive report header.
8.   Test cases return FAIL result               PASS (1)    FAIL (5)    NULL (0)     TOTAL (6)    
9.   Test cases return PASS result               PASS (2)    FAIL (0)    NULL (0)     TOTAL (2)    
10.  Test cases return NULL result               PASS (0)    FAIL (0)    NULL (3)     TOTAL (3)    

+--- Section 4 - No matter the test purpose, the AI21 can run the test and record the result.
11.  Test cases return NULL result               PASS (0)    FAIL (0)    NULL (3)     TOTAL (3)    
12.  Test cases return PASS result               PASS (2)    FAIL (0)    NULL (0)     TOTAL (2)    
13.  Test cases return FAIL result               PASS (1)    FAIL (5)    NULL (0)     TOTAL (6)    

+--- Section 5 - The test report includes a test case description for each non-passing test case.
14.  Test case return values                     PASS (3)    FAIL (1)    NULL (0)     TOTAL (4)    
15.  Verify MESSAGE functionality                PASS (2)    FAIL (0)    NULL (0)     TOTAL (2)    
16.  SANITY tests at all levels                  PASS (1)    FAIL (0)    NULL (0)     TOTAL (1)    

+--- Section 6 - Clicking on the description displays the log file of the non-passing test.
17.  Test cases that verify user return value    PASS (1)    FAIL (6)    NULL (6)     TOTAL (13)   



+----------------------------------Test Suite Failed Test Cases----------------------------------+

For test case information go to the Test Suites Info Web Page
Legend:  F: FAILed, N: NULL, X: eXtended, C: CRASHed, H: Hung, Z: No such test case, S: Skipped

Test Suite Header: Initialize the test bed (aiBootup)	Start: 23:04:11
BAT Level Executed: Daily (1)   

   F: 1.0    Boot strap test case.  Get MSG and log all lines in message. 
   N: 3.0    Test case, AI_EnvVars.tcl, verifies ENV vars are present. 

Test Suite Header: Test cases return NULL result (aiNull)	Start: 23:04:52
BAT Level Executed: Daily (1)   

   N: 1.0    AI test NULL return code (2): AI_NULL1.tcl 
   N: 2.0    Verify SANITY level: AI_SANITY3.tcl 
   N: 3.0    A PERL program, PE_NULL1.pl, that returns a NULL(2) 
	Bug(s): CR85692

Test Suite Header: Test cases return FAIL result (aiFail)	Start: 23:05:22
BAT Level Executed: Daily (1)   

   F: 1.0    AI test FAIL return code (0): AI_FAIL1.tcl 
	Bug(s): CR1234 "Intermittent Problem"
   F: 2.0    A PERL program, PE_FAIL1.pl, that returns a FAIL(0) 
	Bug(s): CR5663 "Bug in perl interpreter"
   F: 3.0    Verify SANITY level: AI_SANITY1.tcl 
   F: 4.0    HTML Log File with a FAIL return code (0): AI_HTML_LOG.tcl 
   F: 5.0    Big Log File test with a FAIL return code (0): AI_BIG_LOG.tcl 

Test Suite Header: 3 minute test cases return PASS (aiTimeOut)	Start: 23:06:24
BAT Level Executed: Daily (1)   

   H: 1.0   Test Case is HUNG
   	TIMEOUT: AI_3MINPASS1.tcl is HUNG.

Test Suite Header: Test cases return FAIL result (aiFail)	Start: 23:09:34
BAT Level Executed: Daily (1)   

   F: 1.0    AI test FAIL return code (0): AI_FAIL1.tcl 
	Bug(s): CR1234 "Intermittent Problem"
   F: 2.0    A PERL program, PE_FAIL1.pl, that returns a FAIL(0) 
	Bug(s): CR5663 "Bug in perl interpreter"
   F: 3.0    Verify SANITY level: AI_SANITY1.tcl 
   F: 4.0    HTML Log File with a FAIL return code (0): AI_HTML_LOG.tcl 
   F: 5.0    Big Log File test with a FAIL return code (0): AI_BIG_LOG.tcl 

Test Suite Header: Test cases return NULL result (aiNull)	Start: 23:10:56
BAT Level Executed: Daily (1)   

   N: 1.0    AI test NULL return code (2): AI_NULL1.tcl 
   N: 2.0    Verify SANITY level: AI_SANITY3.tcl 
   N: 3.0    A PERL program, PE_NULL1.pl, that returns a NULL(2) 
	Bug(s): CR85692

Test Suite Header: Test cases return NULL result (aiNull)	Start: 23:11:26
BAT Level Executed: Daily (1)   

   N: 1.0    AI test NULL return code (2): AI_NULL1.tcl 
   N: 2.0    Verify SANITY level: AI_SANITY3.tcl 
   N: 3.0    A PERL program, PE_NULL1.pl, that returns a NULL(2) 
	Bug(s): CR85692

Test Suite Header: Test cases return FAIL result (aiFail)	Start: 23:12:17
BAT Level Executed: Daily (1)   

   F: 1.0    AI test FAIL return code (0): AI_FAIL1.tcl 
	Bug(s): CR1234 "Intermittent Problem"
   F: 2.0    A PERL program, PE_FAIL1.pl, that returns a FAIL(0) 
	Bug(s): CR5663 "Bug in perl interpreter"
   F: 3.0    Verify SANITY level: AI_SANITY1.tcl 
   F: 4.0    HTML Log File with a FAIL return code (0): AI_HTML_LOG.tcl 
   F: 5.0    Big Log File test with a FAIL return code (0): AI_BIG_LOG.tcl 

Test Suite Header: Test case return values (aiVariety)	Start: 23:13:18
BAT Level Executed: Daily (1)   

   F: 2.0    A unit test of ATI with sub-test return values 
      FX: 2.2   The result of the second sub-test
	Bug(s): CR2345 "Fix next release"

Test Suite Header: Test cases that verify user return values 20-29 (userReturnCode)	Start: 23:14:19
BAT Level Executed: Daily (1)   

   N: 1.0    Test case, UserTestCase21.tcl, return value 21.  Run custom test. 
   F: 2.0    Test case, AI_CUSTOM21.tcl, runs if previous test case returned 21. 
   N: 3.0    Test case, UserTestCase20.tcl, return value 20.  Run custom test. 
   F: 4.0    Test case, AI_CUSTOM20.tcl, runs if previous test case returned 20. 
   N: 5.0    Test case, UserTestCase26.tcl, return value 26.  Run custom test. 
   F: 6.0    Test case, AI_CUSTOM26.tcl, runs if previous test case returned 26. 
   N: 7.0    Test case, UserTestCase27.tcl, return value 27.  Run custom test. 
   F: 8.0    Test case, AI_CUSTOM27.tcl, runs if previous test case returned 27. 
   N: 9.0    Test case, UserTestCase28.tcl, return value 28.  Run custom test. 
   F: 10.0   Test case, AI_CUSTOM28.tcl, runs if previous test case returned 28. 
   N: 12.0   Test case, UserTestCase29.tcl, return value 29.  Run custom test. 
   F: 13.0   Test case, AI_CUSTOM29.tcl, runs if previous test case returned 29. 

+------------------------------------End of Failed Test Cases------------------------------------+



Test Run Started:  Mon Oct 10 23:04:11 PDT 2005
Test Run Finished: Mon Oct 10 23:16:31 PDT 2005

Test Results Dir: budd_job34.12111_LEVEL_1_BUILD_BetaImage_b5.bin_on_Mon_10-10-05_at_23:04
Results Dirs at:  192.168.0.166 /home/guest/AI/jobResults
Report archives:  budd
Testbed diagram:  TB_4_01

Job_Run: _job34.12111