Final Exam Review
- Usability Testing
- Security Testing
- Beta Testing
- Acceptance Testing
- very formal, often performed at customer site, "witness" test, contractual obligations, may be unit/stress/load tests
- may be developed by customer or vendor (or both). watch out for bias if vendor develops tests.
- rehearse well before going to customer site
- Installation Testing
- tests of installation procedures, aspects of usability testing, ties into configuration testing
- May need DOE (design of experiments) to decide which configurations to test
- Testing Services (SOA)
- No IEEE or ACM papers, look for whitepapers
- hard to test manually, no GUI to look at, client is another computer, must examine XML
- need to test scalability and security of services
- Proof of Concept for functional correctness, then load test it for performance
- Regression testing is very important, services get replaced
- Performance measures: time to connect, time to first byte, time to last byte (see the sketch after this list)
- SOA apps are layered and networked
- Individual services need thorough unit testing, a bad service can break the entire app
- Interoperability concerns
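- A minimal sketch of how the three response-time measures could be collected for a single service call, using Python's standard http.client; the host and path are placeholders, and a real load test would drive many concurrent virtual clients with a dedicated tool.
```python
# Sketch: measure time to connect, time to first byte, and time to last byte
# for one HTTP request. Host/path below are placeholders.
import http.client
import time

def time_request(host, path="/", port=80):
    t0 = time.perf_counter()
    conn = http.client.HTTPConnection(host, port, timeout=10)
    conn.connect()
    t_connect = time.perf_counter()          # time to connect

    conn.request("GET", path)
    resp = conn.getresponse()                # status line + headers have arrived
    t_first = time.perf_counter()            # time to first byte (approx.)

    resp.read()                              # drain the response body
    t_last = time.perf_counter()             # time to last byte
    conn.close()
    return (t_connect - t0, t_first - t0, t_last - t0)

if __name__ == "__main__":
    connect_s, first_s, last_s = time_request("example.com")
    print(f"connect={connect_s:.3f}s  first byte={first_s:.3f}s  last byte={last_s:.3f}s")
```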
- Regression Testing
- must keep testing the old features to make sure they still work
- risk in upgrading, customers know there's a chance things will stop working
- bad fixes -- every 3 fixes introduce 1 new error (ripple effects)
- factors: complexity, feature interaction, experience of programmers, structure, time pressure, docs
- system testing team usually does regression testing, needs to be multi-level unit/integration/system tests
- test changes/fixes. test 'around' the fix. re-run confidence tests.
- Development/test lifecycle, bug reports feed back into next build
- Full regression testing is normally not practical
- Selective Regression Testing
- testing of code deltas; a coverage tool maps between test cases and code (sketch below); requires good configuration management so you know exactly what has been modified
- ripple effect analysis, map test cases to capabilities, requires traceability of requirements to test cases
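- A minimal sketch of delta-based selection, assuming a coverage tool has already produced the test-case-to-code map and configuration management has identified the changed files; all names and data here are hypothetical.
```python
# Sketch of selective regression testing: pick only the test cases whose
# recorded coverage touches code that changed in this build.

def select_regression_tests(coverage_map, changed_files):
    """Return the subset of tests whose coverage intersects the code delta."""
    changed = set(changed_files)
    return sorted(
        test for test, covered in coverage_map.items()
        if covered & changed
    )

if __name__ == "__main__":
    coverage_map = {
        "test_login":    {"auth.c", "session.c"},
        "test_checkout": {"cart.c", "payment.c"},
        "test_search":   {"search.c", "index.c"},
    }
    changed_files = ["payment.c"]        # from configuration management / diff
    print(select_regression_tests(coverage_map, changed_files))
    # -> ['test_checkout']
```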
- Confidence Tests
- high frequency use cases, critical functionality, functional breadth, test cases that previously failed
- Revalidation issues (care and feeding of tests)
- lots of tools to automate recording & replaying tests
- failures could mean that something is broken or that the tests need to be changed
- smart tools identify context-sensitive UI elements (e.g., menu items) rather than replaying raw x/y click positions
- Tests require maintenance!
- Regression test selection
- Software Reliability/Availability?
- Operational Profile Testing
- 1. Find the customer profile
- 2. Establish the user profile
- 3. Define the system mode profile
- 4. Determine functional profile
- 5. Calculate the overall operational profile (worked sketch below)
- difficult for general purpose systems
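- A small worked sketch of step 5, combining hypothetical customer/user/function profiles from the earlier steps into overall occurrence probabilities; all names and numbers are invented.
```python
# Sketch: the overall operational profile is the product of the occurrence
# probabilities at each level. All profiles below are hypothetical.

customer_profile = {"retail": 0.7, "enterprise": 0.3}
user_profile = {                      # per customer type
    "retail":     {"shopper": 0.9, "admin": 0.1},
    "enterprise": {"shopper": 0.4, "admin": 0.6},
}
functional_profile = {                # per user type
    "shopper": {"browse": 0.8, "purchase": 0.2},
    "admin":   {"reports": 0.7, "config": 0.3},
}

operational_profile = {}
for cust, p_cust in customer_profile.items():
    for user, p_user in user_profile[cust].items():
        for func, p_func in functional_profile[user].items():
            operational_profile[(cust, user, func)] = p_cust * p_user * p_func

# Allocate test effort in proportion to these probabilities.
for key, p in sorted(operational_profile.items(), key=lambda kv: -kv[1]):
    print(f"{key}: {p:.3f}")
```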
- Error detection & recovery testing
- Serviceability testing
- MTBF (mean time between failures) of a software component?
- Predicting reliability
- Software reliability modeling
- models have a lot of assumptions; read the fine print
- direct -> models (sketch below)
- indirect -> use metrics like size, complexity, test coverage
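- A minimal sketch of the "direct" approach using an exponential (Goel-Okumoto style) reliability growth model; it assumes the parameters a (expected total failures) and b (failure detection rate) were already fitted to observed failure data, and the values below are invented.
```python
# Sketch of an exponential software reliability growth model.
import math

def expected_failures(t, a, b):
    """mu(t) = a * (1 - exp(-b*t)): cumulative expected failures by test time t."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t, a, b):
    """lambda(t) = a * b * exp(-b*t): failures per unit test time at time t."""
    return a * b * math.exp(-b * t)

if __name__ == "__main__":
    a, b = 120.0, 0.05          # hypothetical fitted parameters
    for t in (10, 40, 80):      # test time (e.g., staff-weeks of testing)
        print(f"t={t:3d}  mu={expected_failures(t, a, b):6.1f}  "
              f"lambda={failure_intensity(t, a, b):5.2f}")
```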
- Paper - Critique of Software Defect Prediction Models
- size/complexity metrics, e.g. the claim that 'there is a perfect size' for modules (doesn't work)
- testing metrics like fault data or coverage info (doesn't work)
- quality of dev process (CMM, CMMI) (maybe... still depends on many other factors)
- reliability models
- defects (problems in code) vs. failures (failure intensity, MTBF, what a customer experiences)
- reducing defects does not translate directly to MTBF, depends on how software is used (operational profiles)
- Paper - Towards a More Reliable Theory of Software Reliability
- models work in well defined domains (telecom, aerospace) because it's easier to do the operational profile
- models tend to ignore complexity, effectiveness of tests, defect repair (fixes)
- application complexity goes beyond structural metrics like McCabe's cyclomatic complexity?
- test effectiveness: mutation testing (intentionally seeding bugs to see if the tests find them; sketch below)
- need operational profile ++; current OPs focus on the user/customer and not enough on the environment
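- A tiny sketch of mutation testing as a test-effectiveness measure: seed an intentional bug (a "mutant"), run the suite against it, and count it as killed if any test fails; everything here is hypothetical.
```python
# Sketch of mutation testing: mutation score = killed mutants / total mutants.

def discount(price, pct):
    return price * (1 - pct / 100)

def mutant_discount(price, pct):
    return price * (1 + pct / 100)      # seeded bug: '-' mutated to '+'

def test_suite(fn):
    """Return True if every (input, expected) check passes for fn."""
    cases = [((100, 10), 90.0), ((50, 0), 50.0)]
    return all(abs(fn(*args) - expected) < 1e-9 for args, expected in cases)

assert test_suite(discount)             # the suite must pass on the original code

mutants = [mutant_discount]
killed = sum(1 for m in mutants if not test_suite(m))
print(f"mutation score = {killed}/{len(mutants)}")   # 1/1: the suite caught it
```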
- Test Planning, Tracking, Coordinating
- Test Plan Outline
- Objectives. "why?"
- dependencies and assumptions
- test strategy
- test environment specs
- entry and exit criteria
- schedule. estimate of testing time. may be blocked waiting for fixes. must be agile. new features will show up.
- risk management. risk mitigation. think about what can go wrong.
- Context Driven Testing
- different strategy for video game vs. flight mgmt system vs. bank ATM system
- objectives are based on what you're testing
- Seven principles
- Test Plan - Dependencies & Assumptions
- assume s/w is completed on time
- assume 80% of tests will pass on first attempt (POFA)
- assume 3% requirements changes
- human and equipment resources
- Test Plan - Testing Strategy
- Test Plan - Entry Criteria
- Test Plan - Exit Criteria
- Test Plan - Test Schedule
- Risk Based Testing
- risk exposure = probability of an adverse event occurring x severity (cost) of that event (see the sketch after this list)
- not all problems are equal
- CRUD - customer-reported unique defects (typically ~1 per 1000 LOC)
- defects tend to cluster: usage patterns (some code is exercised more than other code), the same person wrote the buggy code, code related to difficult requirements, complexity of the code
- test the potential problem areas more heavily
- time pressure, you may have to choose what to test
- Pareto principle, 80/20 rule
- 80% of the problems are in 20% of the modules
- inspections, unit testing, integration testing, system testing
- project history can indicate where to focus.
- priority field in UML
- R (required) in requirements docs vs. D (desired)
- Test high risk features *early*
- Risk catalog. what could go wrong.
- Paper - Troubleshooting Risk Based Testing
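- A minimal sketch of the risk-exposure ranking idea above (probability x severity) for deciding what to test first and most heavily; feature names and numbers are hypothetical.
```python
# Sketch of risk-based test prioritization: rank features by
# risk exposure = probability of failure x severity (cost) of the failure.

features = [
    # (name, probability of an adverse event, severity/cost if it happens)
    ("funds transfer",   0.10, 9),
    ("statement export", 0.30, 3),
    ("login",            0.05, 10),
    ("help screens",     0.40, 1),
]

ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, prob, severity in ranked:
    print(f"{name:18s} exposure = {prob * severity:.2f}")
# Test the highest-exposure features earliest and most heavily.
```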
- Estimation of Test Effort
- what happens if you over- or under-estimate?
- House cleaning example
- factors: size, scope, how good the code is (POFA), amount of regression testing, waiting for fixes, technology, experience/motivation of testers, complexity, 'will the customer be home', desired quality level, process (see the sketch below)
- customer may embed engineers in the test team to answer questions
- Paper - Estimate testing effort needed to ensure a desired level of field quality
- nice model, but it only works for this project
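- A back-of-envelope sketch (not the paper's model) showing how a few of the factors above could feed a first-cut estimate; the formula and all numbers are assumptions for illustration only.
```python
# Hypothetical first-cut estimate: planned test cases, average execution time,
# POFA (pass on first attempt) driving re-test cycles, plus a regression allowance.

def estimate_test_effort(num_tests, hours_per_test, pofa,
                         retest_rounds=2, regression_factor=0.5):
    first_pass = num_tests * hours_per_test
    # Tests that fail on the first attempt get re-run after fixes arrive.
    rework = num_tests * (1 - pofa) * hours_per_test * retest_rounds
    regression = first_pass * regression_factor
    return first_pass + rework + regression

if __name__ == "__main__":
    hours = estimate_test_effort(num_tests=400, hours_per_test=0.5, pofa=0.8)
    print(f"estimated effort ~ {hours:.0f} staff-hours")
    # 200 + 80 + 100 = 380 staff-hours under these assumptions
```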
- Tracking Test Activity
- Defect Density
- Defect Seeding
- Trend Analysis
- Reliability Model
- Testing Progress
- Earned Value
- BCWS (budgeted cost of work scheduled), BCWP (budgeted cost of work performed, i.e. earned value), ACWP (actual cost of work performed); schedule vs. budget
- worked example below
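- A hypothetical earned-value example for a test effort tracked in staff-hours; the numbers are invented.
```python
# Earned-value bookkeeping for a test effort (staff-hours).
BCWS = 500.0   # planned to have spent 500 staff-hours of testing by now
BCWP = 400.0   # the test cases actually completed were budgeted at 400 hours
ACWP = 450.0   # we actually spent 450 hours

schedule_variance = BCWP - BCWS          # -100: behind schedule
cost_variance     = BCWP - ACWP          #  -50: over budget
SPI = BCWP / BCWS                        #  0.80 schedule performance index
CPI = BCWP / ACWP                        #  0.89 cost performance index

print(schedule_variance, cost_variance, round(SPI, 2), round(CPI, 2))
```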
- Test Process Improvement
- process: CMMI -> test maturity model
- people: education, motivation, tends not to be a respected area
- Test Team Roles
- team leader, test architect, test env specialist, testers
- Motivation of Test Team Members
- Paper - High Performance s/w Test Teams
- defined role, diversity, organizational types
- Paper - Test Manager at Project Status Meeting
- report the right info, report the info right
- Paper - Software Cost Estimation for Large Projects
- Paper - In process metrics for s/w testing
- look for defect arrival rate to slow down
- defect backlog (must fix bugs remaining)
- Paper - Learning from Mistakes, Causal Analysis
- Test Process Improvement
- faster, cheaper, better
- Six Sigma. current, analyze, goal, process re-design, implement
- Testing Retrospective / Postmortem / Lessons Learned
- Causal Analysis
- select a set of defects, identify probable causes
- categorize common causes & find solutions
- communication failure, requirements problem, oversight, education, transcription (typo)
- Root cause, keep asking 'why?' (fishbone diagram)
- defect leakage, defect containment
- TCE (Test Case Effectiveness) metric: defects found by test cases / total defects found
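- hypothetical TCE example: if planned test cases found 90 defects and another 30 defects were found by ad-hoc testing or reported from the field, TCE = 90 / (90 + 30) = 75%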
- CMMI (Capability Maturity Model Integration)
- Test Process Maturity Models
- Test Documentation
- Test Plan
- Test Case
- Incident Report
- Test Summary Report
- Inspections
- Inspection Process
- Outsourcing
- Patterns
- Testing Career Paths