SN | Glossary | Details | ||||||
1 | Software Testing | is checking that an application meets the required specifications, adding any missing functionality, finding any bug or error, checking that the application's integrated units / modules work in order, checking performance aspects like response time, stability, reliability and scalability of the software, and many other things specific to the test requirements | These checks are the Quality Parameters for software & the person doing these checks is called a Software Quality Assurance Engineer or Software Quality Analyst. | |||||
2 | Importance / Need of Testing | To provide the best user experience & customer satisfaction | To provide reliable & authentic information | For cost effectiveness & to save the time lost to breakdowns | To avoid any mishap occurring due to a bug in the software | To avoid any loss of revenue generated by the software | ||
3 | As per ANSI/IEEE 1059 | Testing in Software Engineering is a process of evaluating a software product to find whether the current software product meets the required conditions or not. The testing process involves evaluating the features of the software product for requirements in terms of any missing requirements, bugs or errors, security, reliability and performance. | {Required, Missing, Not Working, No Theft, True Info, Working Fine} | |||||
4 | Main Approaches to Software Testing | Manual Testing & Automation Testing | ||||||
Manual Testing | is software testing where test cases are executed manually by a human. It mainly covers the following types > | Design Testing [UI Testing], Functional Testing (Black Box Testing [UX], White Box Testing [API]), Non Functional Testing [Performance Testing] & Maintenance Testing [Regression Testing] | ||||||
Testing Strategies covered under these Testing Types (applicable to Automation Testing as well) > | Design Testing – Check the layout & visible web elements of the software w.r.t. the mockup designs & also check the responsive design of the UI | Functional Testing: Black Box Testing & White Box Testing | Black Box Testing: System Testing – the complete software is compiled & tested as a whole / check end-to-end complete system specifications, Compatibility Testing – check the functionality of the software in different software & hardware environments, Acceptance Testing – beta testing of the product done by the actual end users to get it accepted, Smoke Testing – check critical functionality like the application launches & the GUI is responsive, Sanity Testing – assuring bugs are fixed and that the fixes do not create new bugs, Globalization Testing – testing for multiple languages, Types :- Internationalization Testing & Localization Testing, Adhoc / Monkey / Gorilla Testing – testing randomly beyond the requirements | Non Functional Testing: Performance Testing – checks the response time, load, stability, reliability, scalability & security of the software. [Security Testing – (VAPT) Vulnerability Assessment & Penetration Testing] | Maintenance Testing: Regression Testing – assuring new functionality introduced in the software does not break the existing functionality, Recovery Testing – check that the software can recover from possible crashes, Migration Testing – the software can be easily installed on a new/updated server | |||
White Box Testing: Unit Testing – check an individual part of the software, Integration Testing – focuses on the construction of the software, e.g. checking that integrated units / different modules work in unity | ||||||||
Automation Testing | is executing the actual software program with test case data to analyse the application's behaviour against the test data & find errors in the application. This is also called Program Testing. | The same testing types & strategies listed above for Manual Testing are applicable to Automation Testing | ||||||
5 | Principles of Software Testing | Exhaustive testing is not possible – meaning not everything can be tested. An optimal amount of testing is done based on the risk assessment of the application, i.e. which operation is most likely to cause your system to fail? | ||||||
Defect Clustering – states that a small (out of all) number of modules contain most of the defects. Identify & test those | ||||||||
Pesticide Paradox – If the same set of repetitive tests are conducted, the method will be useless for discovering new defects. To overcome this, the test cases need to be regularly reviewed & revised, adding new & different test cases to help find more defects. | ||||||||
Testing shows presence of defects – Testing talks about the presence of defects and not about their absence, i.e. software testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, it is not a proof of correctness. | ||||||||
Absence of Errors fallacy – It is possible that software which is 99% bug-free is still unusable. This can be the case if the system is tested thoroughly for the wrong requirement. Software testing is not merely finding defects, but also checking that the software addresses the business needs. | ||||||||
Early Testing – Testing should start as early as possible in the Software Development Life Cycle. So that any defects in the requirements or design phase are captured in early stages. It is much cheaper to fix a Defect in the early stages of testing. | ||||||||
Testing is context dependent – which basically means that the way you test an e-commerce site will be different from the way you test a commercial off-the-shelf application. Not all developed software is identical. You might use different approaches, methodologies, techniques, and types of testing depending upon the application type. | ||||||||
6 | Test Policy | It is a high-level document which describes principles, methods and all the important testing goals of the organization. | ||||||
Requirements Traceability Matrix | This is a document which connects the requirements to the test cases. | |||||||
Test Plan | A test plan holds complete information like the scope of software testing, resources, milestones, details of test suites and test cases. It contains a detailed understanding of the eventual workflow. | |||||||
Test Suite | A test suite is a collection of test cases that are intended to be used to test the software program to show that it has a set of specified behaviours. It contains a list of related test cases (e.g. for one module). This is also called a validation suite. | |||||||
Test Case | A Test Case is a set of test inputs, execution conditions and an expected result to compare with the actual result. | Test Case format :- | ||||||
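The test case definition above (inputs, execution conditions, expected result compared with the actual result) can be sketched as a small record type. This is an illustrative sketch in Python; the field names and the `login` function are hypothetical, not a standard format.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_id: str
    description: str
    test_data: dict          # inputs fed to the application under test
    expected: str            # expected result, taken from the specification
    actual: str = ""         # filled in during execution
    status: str = "Not Run"  # becomes Pass / Fail after comparison

    def execute(self, func):
        """Run the function under test and compare actual vs expected."""
        self.actual = func(**self.test_data)
        self.status = "Pass" if self.actual == self.expected else "Fail"
        return self.status

def login(username, password):
    # stand-in for the application under test
    return "Welcome" if (username, password) == ("admin", "secret") else "Invalid credentials"

tc = TestCase("TC-01", "Valid user can log in",
              {"username": "admin", "password": "secret"}, expected="Welcome")
print(tc.test_id, tc.execute(login))  # TC-01 Pass
```

In a real project the same fields usually live as columns in a test management tool or sheet; the point is only that a test case pairs concrete input data with an expected result.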
Test Group | is a way to tag individual test cases and assign them to groups. With grouping, you don't need to maintain a separate list of which test cases to run during testing. | Like a Regression group | ||||||
Test Analysis | is the process of checking and analysing the test artifacts (components / resources) in order to form the basis of the test cases. The goal of test analysis is to gather requirements and define test objectives to establish the basis of the test cases | The sources from which you derive test information could be :- | SRS (Software Requirement Specification) BRS (Business Requirement Specification) Functional Design Documents | |||||
Test Scenario | Test Scenario is an item or event of a software system which could be verified by one or more Test cases. | |||||||
7 | Test Management Tool | A tool like TestMonitor can be used for tracking all the test cases written by your team. | TestRail, TestMonitor, Google Sheet etc | |||||
Defect Tracking Tool | Error Reporting & Tracking Tool | Jira, Mantis, Trello, Basecamp etc | ||||||
Automation Tool | Software tools performing testing automatically as per set instructions | Selenium, Webdriver (use with Java), TestNG, Cucumber + JUnit for Web App Testing/ RestAssured + Hamcrest is for API Automation / Appium for Mobile App Testing & GIT + Jenkins | ||||||
Performance Testing Tool | used for testing the speed/response time, stability, reliability, scalability and resource usage of a software application under a particular workload. The main purpose of performance testing is to identify and eliminate the performance bottlenecks in the software application. It is a subset of performance engineering and is also known as “Perf Testing”. | Jmeter, Postman | VAPT (Vulnerability Analysis & Penetration Testing) > | Related to Cyber Security | ||||
8 | Software Testing Certifications | ISTQB and CSTE | (ISTQB) The International Software Testing Qualifications Board is a software testing certification board that operates internationally. Founded in Edinburgh in November 2002, the ISTQB is a non-profit association legally registered in Belgium | The Certified Software Tester (CSTE) certification is intended to establish standards for initial qualification and provide professional level of competence in the principles and practices of quality control in the IT profession. | ||||
9 | Types of Software Testing | Design Testing > | It is the testing of the User Interface. Check the layout & visible web elements of the software w.r.t. the mockup designs & also check the responsive design of the UI | |||||
Functional Testing > | Testing each & every component thoroughly to confirm it works in the intended way is called Functional Testing. | First check with valid data & if no bugs are found then go for invalid data | Types > Black Box Testing [UX Testing] & White Box Testing [API Testing] | Black Box > | testing the functional behaviour of the software application with a focus on input and output data. Functionalities of the software application are tested according to the specifications. | Boundary Value Testing can be done under black box testing. It focuses on the values at the boundaries. This technique determines whether a certain range of values is acceptable to the system or not. | ||
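Boundary value testing, mentioned above, picks test values at the edges of the accepted range: just below the minimum, on the minimum, just above it, and the same around the maximum. A minimal sketch in Python, assuming a hypothetical age field that accepts 18 to 60:

```python
# Hypothetical validation rule under test: age must be between 18 and 60 inclusive.
def is_valid_age(age: int) -> bool:
    return 18 <= age <= 60

# Boundary Value Analysis: min-1, min, min+1, max-1, max, max+1
boundary_values = [17, 18, 19, 59, 60, 61]
results = {v: is_valid_age(v) for v in boundary_values}
print(results)
# Values just outside the range (17, 61) must be rejected;
# values on and just inside the boundaries (18, 19, 59, 60) must be accepted.
```

The same six-value pattern applies to any bounded input: string lengths, file sizes, quantity fields, and so on.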
White Box > | White box testing is the testing of the internal structure and working of the code of a software application | |||||||
Gray Box > | Gray-box testing is a combination of white-box testing and black-box testing. The aim is to search for defects if any due to improper structure or improper usage of applications. | |||||||
Non Functional Testing > | Usually called Performance Testing | |||||||
Maintenance Testing > | Usually called Regression Testing & Recovery Testing | |||||||
10 | Testing Strategies | is an outline that describes the Testing approach of the Software Development Life Cycle > | ||||||
Design (UI) Testing > | Check the look & feel of the software program. Check the layout & web element placement in the UI w.r.t. the mockup designs & also check the responsive design of the UI | |||||||
Unit Testing > | To test that an individual unit of the code performs as expected. Unit Testing is done during the development (coding phase) of an application by the developers. | Unit tests isolate a section of code and verify its correctness. | Unit testing is commonly automated. A developer may write a section of code in the application just to test a function, then comment out and finally remove that test code when the application is deployed. | JUnit is a free-to-use testing framework for the Java programming language. It provides assertions to identify test methods. Test data is prepared first and then fed into the piece of code under test. | ||||
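JUnit is the Java tool named above; the same idea can be shown with Python's standard-library `unittest`, which mirrors JUnit's style (test classes, assertion methods). The `apply_discount` function here is a hypothetical unit under test, not from any real project:

```python
import unittest

def apply_discount(price: float, quantity: int) -> float:
    """Hypothetical unit under test: 10% off for orders of 10 or more items."""
    return round(price * 0.9, 2) if quantity >= 10 else price

class TestApplyDiscount(unittest.TestCase):
    # each test isolates one behaviour of the function and asserts on it
    def test_bulk_order_gets_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_small_order_pays_full_price(self):
        self.assertEqual(apply_discount(100.0, 9), 100.0)

if __name__ == "__main__":
    # argv is overridden so the runner ignores command-line arguments
    unittest.main(argv=["ignored"], exit=False, verbosity=0)
```

The equivalent JUnit test would use `@Test` methods with `assertEquals`; the structure (one unit, several small assertions) is the same.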
Integration Testing > | Focuses on the construction of the software. To verify that integrated units, e.g. the data exchange between different modules of the software, are in order. It is also called String Testing or Thread Testing. | Send data from one module and check in the other whether the data was received or not, and check the type of data received in response | ||||||
Approaches :- | 1) Big Bang Approach | is an approach in which all software components (modules) are combined at once and tested as a unit. This unit is considered as an entity while testing. | ||||||
2) Incremental Approach > | In Incremental integration testing, the developers integrate the modules one by one using stubs or drivers to uncover the defects. | Stubs & Drivers are dummy modules. | ||||||
Top Down Approach | ||||||||
Bottom Up Approach | ||||||||
Sandwich / Hybrid Approach | ||||||||
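The stubs and drivers mentioned under the incremental approach are dummy modules standing in for parts not yet built. A sketch in Python, with all module and method names hypothetical: an `OrderModule` is integrated against a `PaymentStub` because the real payment module is not ready, and the test checks both that data crosses the module boundary and what type comes back.

```python
class PaymentStub:
    """Dummy module returning a canned response, like a stub in top-down integration."""
    def charge(self, amount: float) -> dict:
        return {"status": "approved", "amount": amount}

class OrderModule:
    def __init__(self, payment):
        self.payment = payment  # the real module or a stub is injected here

    def place_order(self, amount: float) -> str:
        response = self.payment.charge(amount)
        # integration check: data was received and has the expected type/shape
        assert isinstance(response, dict) and "status" in response
        return "confirmed" if response["status"] == "approved" else "rejected"

orders = OrderModule(PaymentStub())
print(orders.place_order(49.99))  # confirmed
```

When the real payment module is ready, it replaces the stub without changing `OrderModule`; that substitution is what lets modules be integrated one by one.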
System Testing > | The software is compiled as a whole and then tested as a whole. This testing checks functional completeness, usability, and hardware & software compatibility. It falls under the black box testing category | This is complete software testing for the end-to-end user experience. The test environment is similar to the production environment. | ||||||
Types of System Testing :- | 1) Usability Testing > | mainly focuses on the user’s ease to use the application | ||||||
2) Load Testing > | is necessary to know that a software program will perform under real-life load conditions | |||||||
3) Regression Testing > | involves testing done to make sure none of the changes made over the course of the development process have caused new bugs. It also makes sure no old bugs appear from the addition of new software modules over time. | |||||||
4) Recovery Testing > | is done to demonstrate a software solution is trustworthy and can successfully recoup from possible crashes. | |||||||
5) Migration Testing > | is done to ensure that the software can be moved from older system infrastructures to current system infrastructures without any issues. | |||||||
6) Functional Testing > | Also known as functional completeness testing, Functional Testing involves trying to think of any possible missing functions. Testers might make a list of additional functionalities that a product could have to improve it during functional testing. | |||||||
Compatibility Testing > | to check whether the software is capable of running on different hardware, operating systems, applications (browsers), network environments or mobile devices. | Findings from Compatibility Testing > | 1. Object overlapping 2. Scattered content 3. Broken frames/tables 4. Scroll bar issues, like the horizontal or vertical scroll bar is not displayed, or if it is displayed we are not able to move it (because it is an image) 5. Images in certain formats may not be displayed in certain browsers 6. Certain objects may not function | |||||
Performance Testing > | is a software testing process used for testing the response time, stability, reliability, scalability and resource usage of a software application under a particular workload. The parameters of this non-functional testing are :- | |||||||
1) Availability / Stability Testing | It measures the ability of a software application to continuously function for a long period of time at full operational capacity (the software can handle the expected load over a long period of time). It is also called load and endurance testing. Stability Testing is also done to check the efficiency of a developed product beyond its normal operational capacity, often up to a breakpoint. Stability Testing check/report points > | Transaction Response Times > | The average time taken to perform transactions, i.e. the time taken by the web server to process the request and send the response to the application server/user. Types > | Load (test stability & response time for the designed no. of users), Stress (test stability & response time for more than the designed no. of users [the software's reaction to sudden large spikes in the load generated by users]), Volume (test stability & response time for large data transactions), Soak Testing (applying load continuously for a specific period of time), Spike Testing (used to check the behaviour of a system by increasing the load on the system instantly). Load > The total number of instances the server is creating and running simultaneously to give responses to users is called load. We can also say the total number of users using the application simultaneously. How > | 1) Write a program in JMeter 2) Run the program 3) Enter the number of users 4) Click on run. The request goes to the server, which runs the program and gives the response time to the tool. 5) The tool analyses the results and presents them in the form of graphs 6) We manually analyse whether the test passed or failed | |||
Hits Per Second | The number of hits made on the server by users (requests made to the server). These statistics help to determine the amount of load users generate, with respect to the number of hits. | ||||||
Throughput | Throughput – the rate at which a server or network handles requests per second (responses received by users) | Throughput means the amount of data that users receive from the server at any given time. This statistic helps to evaluate the amount of load that users generate. | ||||||
Transaction per second | These are the total number of completed transactions (both successful and failed) performed during a test. This statistic helps to check the actual transaction load on the system. | |||||||
CPU | CPU percentage utilization spent during a test. Usually it is 40% to 70% for a less to more demanding job | the amount of time the processor spends executing non-idle threads. | ||||||
Memory | Memory usage during a test. | amount of physical memory available to processes on a computer. | ||||||
Disk | Utilization of disk spaces spent during a test. | Disk time : amount of time disk is busy executing a read or write request. | ||||||
Bandwidth | shows the capacity to handle the max. bits per second used by a network interface | Bit Rate > | Number of bits sent per second | |||||
Maximum active sessions – the maximum number of sessions that can be active at once. | ||||||||
Thread counts – An application's health can be measured by the no. of threads that are running and currently active. | ||||||||
Garbage collection – It has to do with returning unused memory back to the system. Garbage collection needs to be monitored for efficiency. | ||||||||
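The report metrics above (transaction response time, hits per second, transactions per second, error rate) are what a tool like JMeter computes from the raw request log. A simplified sketch in Python, computing them from a made-up log of `(timestamp_seconds, response_time_ms, success)` samples; the numbers and the 2-second window are assumptions for illustration:

```python
# Hypothetical request log captured during a 2-second load-test window.
samples = [
    (0.1, 120, True), (0.4, 180, True), (0.9, 250, False),
    (1.2, 140, True), (1.8, 300, True), (1.9, 160, True),
]
duration_s = 2.0  # length of the test window (assumed)

avg_response_ms = sum(rt for _, rt, _ in samples) / len(samples)
hits_per_second = len(samples) / duration_s        # requests made to the server
transactions_per_second = len(samples) / duration_s  # completed, both pass and fail
error_rate = sum(1 for *_, ok in samples if not ok) / len(samples)

print(f"avg response: {avg_response_ms:.0f} ms, "
      f"hits/s: {hits_per_second:.1f}, error rate: {error_rate:.0%}")
```

In a real run these aggregates are compared against the pass/fail thresholds set in the test plan (e.g. average response under some agreed limit at the designed number of users).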
2) Reliability > checks whether the software can perform a failure-free operation for a specified time period in a particular environment and providing accurate data every time | Types > | Software reliability testing includes Feature Testing, Load Testing, and Regression Testing | ||||||
3) Security > that uncovers vulnerabilities, threats, risks in a software application and prevents malicious attacks from intruders. The purpose of Security Tests is to identify all possible loopholes and weaknesses of the software system which might result in a loss of information, revenue, repute at the hands of the employees or outsiders of the Organization. | Types > (There are seven main types of security testing as per Open Source Security Testing methodology manual) | Vulnerability Scanning > | This is done through automated software to scan a system against known vulnerability signatures. | |||||
Security Scanning > | It involves identifying network and system weaknesses, and later provides solutions for reducing these risks. This scanning can be performed as both Manual and Automated scanning. | |||||||
Penetration Testing > | This kind of testing simulates an attack from a malicious hacker. This testing involves analysis of a particular system to check for potential vulnerabilities to an external hacking attempt. | |||||||
Risk Assessment > | This testing involves analysis of security risks observed in the organization. Risks are classified as Low, Medium and High. This testing recommends controls and measures to reduce the risk. | Sample Test scenarios to give you a glimpse of security test cases :– | A password should be in encrypted format Application or System should not allow invalid users Check cookies and session time for application For financial sites, the Browser back button should not work. | |||||
Security Auditing > | This is an internal inspection of Applications and Operating systems for security flaws. An audit can also be done via line by line inspection of code | Methodologies/Techniques :– | Tiger Box: This hacking is usually done on a laptop which has a collection of OSs and hacking tools. This testing helps penetration testers and security testers to conduct vulnerabilities assessment and attacks. Black Box: Tester is authorized to do testing on everything about the network topology and the technology. Grey Box: Partial information is given to the tester about the system, and it is a hybrid of white and black box models. | |||||
Posture Assessment > | This combines Security scanning, Ethical Hacking and Risk Assessments to show an overall security posture of an organization. | Security Testing Tools :– | Teramind, Owasp (Open Web Application Security Project), WireShark, W3af | |||||
Ethical Hacking > | It's hacking an organization's software systems. Unlike malicious hackers, who steal for their own gains, the intent is to expose security flaws in the system. | VAPT :– Can be Manual or Automated. Tools > | Teramind, Nmap, Nessus, Pass The Hash | |||||
4) Survivability / Recovery Testing | It verifies the software's ability to recover from failures like software/hardware crashes, network failures etc. The purpose of Recovery Testing is to determine whether software operations can be continued after a disaster or integrity loss. Recovery testing involves reverting the software back to the point where integrity was known and reprocessing transactions up to the failure point. | |||||||
5) Usability | The ease with which the user can learn, operate, prepare inputs and outputs through interaction with a system. Parameters like Useful, Findable, Accessible, Usable, desirable | |||||||
6) Scalability | measures performance of a system or network when the number of user requests are scaled up or down. The purpose of Scalability testing is to ensure that the system can handle projected increase in user traffic, data volume, transaction counts frequency, etc. It tests system ability to meet the growing needs. | Same Parameters checks as in Stability Testing | ||||||
7) Interoperability | The purpose of Interoperability tests is to ensure that the software product is able to communicate with other components or devices without any compatibility issues. | |||||||
8) Efficiency | The extent to which any software system can handle capacity, quantity and response time. | |||||||
9) Flexibility | The term refers to the ease with which the application can work in different hardware and software configurations. Like minimum RAM, CPU requirements. | |||||||
10) Portability | The flexibility of software to transfer from its current hardware or software environment. | |||||||
11) Reusability | It refers to a portion of the software system that can be converted for use in another application. | |||||||
Smoke Testing > | is a software testing technique performed post software build to verify that the critical functionalities of software are working fine (Like software launches successfully & GUI is responsive). It is executed before any detailed functional or regression tests are executed. | |||||||
Sanity Testing > | Sanity testing is a kind of Software Testing performed after receiving a software build, with minor changes in code, or functionality, to ascertain that the bugs have been fixed and no new issues are introduced due to these fixes. | |||||||
Acceptance Testing > | This testing stage carried out to get customer sign-off of finished product. A ‘pass’ in this stage also ensures that the customer has accepted the software and is ready for their use. | |||||||
Regression Testing > | Assuring new functionality introduced in the software does not break the existing functionality | to confirm that a recent program or code change has not adversely affected existing features. | Regression Testing is nothing but a full or partial selection of already executed test cases which are re-executed to ensure existing functionalities work fine. | |||||
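Re-executing already executed test cases, as described above, can be sketched as keeping a suite of previously passing cases and re-running it after a code change. Everything here is hypothetical: `total_price` stands for the existing feature, and the `SAVE10` coupon branch stands for the newly added functionality.

```python
def total_price(items, coupon=None):
    """Existing feature, recently extended with a hypothetical coupon branch."""
    total = sum(items)
    if coupon == "SAVE10":   # newly added functionality
        total *= 0.9
    return round(total, 2)

# Regression suite: ((args, kwargs), expected) cases that all passed
# before the coupon change was made.
regression_suite = [
    ((([10, 20, 30],), {}), 60),
    ((([5],), {}), 5),
    ((([],), {}), 0),
]

failures = [expected for (args, kwargs), expected in regression_suite
            if total_price(*args, **kwargs) != expected]
print("regression", "PASS" if not failures else f"FAIL: {failures}")
```

If the new coupon branch had accidentally changed the default path (say, always applying the discount), one of the old cases would fail here, which is exactly what regression testing is meant to catch.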
Ad-hoc / Monkey / Gorilla Testing > | Testing randomly beyond the requirements to observe the behaviour of the software | |||||||
Recovery Testing > | Checks that the software can recover easily from crashes | |||||||
Migration Testing > | To check that the software can be easily installed on a new / updated server | |||||||
Globalization Testing > | Testing for multiple languages > Types :- Internationalization Testing (I18N) & Localization Testing (L10N) | I18N Testing involves 2 files > Program Files (contain the source code) & Property Files (contain the languages) | L10N Testing > Test the application according to country standards like date, time, price, symbols, pin code, national flag etc | |||||
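The L10N idea above (date, price etc. rendered per country standards) can be sketched in Python. The locale table here is a deliberately simplified assumption for illustration, not a real locale database (a real application would use the platform's locale facilities or a library):

```python
from datetime import date

# Simplified, assumed conventions per locale: date pattern, currency template,
# and number style ("us" = 1,234.50 / "eu" = 1.234,50).
LOCALES = {
    "en_US": {"date": "%m/%d/%Y", "cur": "${num}",  "style": "us"},
    "de_DE": {"date": "%d.%m.%Y", "cur": "{num} €", "style": "eu"},
    "en_IN": {"date": "%d/%m/%Y", "cur": "₹{num}",  "style": "us"},
}

def fmt_number(amount, style):
    s = f"{amount:,.2f}"                              # e.g. 1,234.50
    if style == "eu":                                 # swap separators -> 1.234,50
        s = s.translate(str.maketrans(",.", ".,"))
    return s

def localize(locale, d, amount):
    cfg = LOCALES[locale]
    return d.strftime(cfg["date"]), cfg["cur"].format(num=fmt_number(amount, cfg["style"]))

print(localize("de_DE", date(2024, 3, 31), 1234.5))  # ('31.03.2024', '1.234,50 €')
```

An L10N test case would assert exactly these kinds of expected strings for each target market, one locale at a time.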
11 | SDLC | Software Development Life Cycle | It is the sequence of activities carried out by developers to design and develop high-quality software. | SDLC Phases > | 1. Gather the requirements 2. Feasibility study or requirement analysis, planning 3. High level design 4. Low level design 5. Coding 6. Testing 7. Deployment 8. Maintenance | |||
12 | STLC | Software Testing Life Cycle | It consists of a series of activities carried out methodically by testers to test the software product. | STLC Phases > | Requirement Analysis, Test Planning, Test Suite & Test Case development, Environment setup, Test Execution & Test Report Analysis, Test Cycle Closure / Resolving Issues | Entry Criteria: Entry Criteria gives the prerequisite items that must be completed before testing can begin | Exit Criteria: Exit Criteria defines the items that must be completed before testing can be concluded |
Requirements: could be either functional or non-functional. Specifications of what is expected from the software by all stakeholders. Identify the types of tests to be performed. Gather details about testing priorities and focus. Prepare the Requirement Traceability Matrix (RTM). Identify details of the test environment where testing is supposed to be carried out. Automation feasibility for the testing project is also assessed in this stage. | Business Requirement Document (BRD) > |
Planning: Preparation of the test plan/strategy document for various types of testing, test tool selection, test effort & cost estimation, resource planning and determining roles and responsibilities. Testing schedules/milestones are also determined. | Technical Requirement Document (TRD) > |
Designing/Development: Test Cases} Create TCs, review and baseline test cases and scripts (if applicable), create test data, actions (test instructions – step by step execution of the test data), output (expected result). Create Test Suites: collections of similar test cases to keep TCs organized > | Combined Test Cases Table > |
Test Environment Setup: Understand the required architecture, environment set-up and prepare hardware and software requirement list for the Test Environment. Setup test Environment and test data. Perform smoke test on the build & Analyse Smoke test results | Requirement Traceability Matrix (RTM) > | |||||||
Execution Test Cases: Execute tests, Document test results, and log defects for failed cases, Map defects to test cases in RTM, Retest the Defect fixes, Track the defects to closure & complete RTM | ||||||||
Test Cycle closure: Evaluate test coverage, cost, critical business objectives, quality. Prepare the test closure report – qualitative and quantitative reporting of the working product to the customer. Test result analysis to find the defect distribution by type and severity. | Requirement Estimation > | Resources: Resources are required to carry out any project task. They can be people, equipment, facilities, funding, or anything else required for the completion of a project activity. Time: Time is the most valuable resource in a project. Every project has a delivery deadline. | Human Skills: Human skills mean the knowledge and experience of the team members. They affect your estimation. Cost: Cost is the project budget. Generally speaking, it means how much money it takes to finish the project. | |||||
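The Requirement Traceability Matrix prepared in the requirements phase maps each requirement to the test cases covering it, so gaps are visible at test cycle closure. A minimal sketch in Python; the requirement and test case IDs are hypothetical:

```python
# RTM as a mapping: requirement -> test cases that cover it (IDs made up).
rtm = {
    "REQ-001 login":          ["TC-01", "TC-02"],
    "REQ-002 password reset": ["TC-03"],
    "REQ-003 export report":  [],   # no test case written yet
}

# Coverage evaluation, as done at test cycle closure.
uncovered = [req for req, tcs in rtm.items() if not tcs]
coverage = 1 - len(uncovered) / len(rtm)
print(f"coverage: {coverage:.0%}, uncovered: {uncovered}")
```

In practice the RTM is usually a spreadsheet or a view in the test management tool, and defects are also mapped back onto it so that each failed requirement can be tracked to closure.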
13 | SDLC Model :- | Waterfall | is a sequential model divided into different phases of software development activity. Each stage is designed for performing the specific activity. Testing phase in waterfall model starts only after implementation of the system is done/development done. | Requirement Gathering stage, Design Stage, Build Stage, Test Stage, Deployment Stage, Maintenance Stage | ||||
V Model | In the V model, testing is done in every phase; for each phase of the Software Development Life Cycle there is a corresponding testing phase | In the figure ahead, the left side is the SDLC & the right side is the STLC | ||||||
Agile Development Model > | Work is divided into small iterations/phases called sprints, each lasting from 5 to 10 days. Each iteration adds functionality to the software. Each phase comprises its own independent set of development and testing activities | CI / CD Approach | Video for Models: https://youtu.be/An7HC1LolDM?si=B2g2nK9rV8fr0nHB | |||||
14 | Manual Testing Procedure | 1) Read and understand the software project documentation. Also, study the Application Under Test (AUT) if available. | ||||||
2) Draft Test Plan & Test cases that cover all the requirements mentioned in the documentation. | ||||||||
3) Review and baseline the test cases with Team Lead, Client (as applicable) | ||||||||
4) Execute the test cases on the AUT | ||||||||
5) Report bugs. | ||||||||
6) Once bugs are fixed, again execute the failing test cases to verify they pass. | ||||||||
15 | Manual Vs Automation Testing | Automation Testing | Manual Testing | |||||
1) | Automation Testing is use of certain tools to execute test cases | Manual testing requires human intervention for test execution. | ||||||
2) | Automation Testing saves time, cost and manpower. Once recorded, it’s easier to run an automated test suite | Manual testing will require skilled labour, long time & will imply high costs. | ||||||
3) | Automated testing is recommended only for stable systems and is mostly used for Regression Testing | Any type of application can be tested manually, certain testing types like ad-hoc and monkey testing are more suited for manual execution. | ||||||
4) | The boring part of executing same test cases time and again is handled by automation software in Automation Testing, hence always accurate | Manual testing can become repetitive and boring and thus error prone | ||||||
5) | Automation does not allow random testing | Exploratory testing is possible in Manual Testing | ||||||
6) | The initial investment in the automated testing is higher. Though the ROI (Return On Investment) is better in the long run. | The initial investment in the Manual testing is comparatively lower. ROI is lower compared to Automation testing in the long run. | ||||||
7) | Automated testing is a reliable method, as it is performed by tools and scripts. There is no testing Fatigue. | Manual testing is not as accurate because of the possibility of the human errors. | ||||||
8) | For even a trivial change in the UI of the AUT, automated test scripts need to be modified to work as expected | Small changes like a change in the id, class, etc. of a button wouldn't affect execution by a manual tester. | ||||||
9) | Investment is required for testing tools as well as automation engineers | Investment is needed for human resources. | ||||||
10) | Not cost effective for low volume regression | Not cost effective for high volume regression. | ||||||
11) | With automation testing, all stakeholders can login into the automation system and check test execution results | Manual Tests are usually recorded in an Excel or Word, and test results are not readily available. | ||||||
13) | This testing can be executed on different operating platforms in parallel and reduce test execution time. | Manual tests can be executed in parallel but would need to increase your human resource which is expensive | ||||||
14) | You can Batch multiple Test Scripts for nightly execution. | Manual tests cannot be batched. | ||||||
15) | Programming knowledge is a must in automation testing. | No need for programming in Manual Testing. | ||||||
16) | Automation test requires a less complex test execution setup. | Manual testing needs a more straightforward test execution setup | ||||||
17) | Automation testing is useful when frequently executing the same set of test cases | Manual testing proves useful when the test case only needs to run once or twice. | ||||||
18) | Automation testing is useful for Build Verification | Executing the Build Verification Testing (BVT) is very difficult and time-consuming in manual testing. | ||||||
19) | Automated tests have almost zero risk of missing a pre-decided test. | Manual testing carries a higher risk of missing the pre-decided test deadline. | ||||||
20) | Automation testing uses frameworks like Linear, Data Driven, Keyword Driven, Behaviour Driven, Hybrid to accelerate the automation process. | Manual Testing does not use frameworks but may use guidelines, checklists, stringent processes to draft certain test cases. | ||||||
21) | Automated Testing is suited for Regression Testing, Performance Testing, Load Testing or highly repeatable functional test cases. | Manual Testing is suitable for Exploratory, Usability and Adhoc Testing. It should also be used where the AUT changes frequently. | ||||||
16 | Automation Testing Feasibility | Which Test Cases to Automate ? | Test cases which are not suitable for automation :- | Test automation is the best way to increase the effectiveness, test coverage, and execution speed in software testing. | Automated software testing is important for the following reasons: |||
1) High-risk / business-critical test cases | 1) Test cases that are newly designed and have not been executed manually at least once | Manual testing of all workflows, all fields and all negative scenarios is time- and money-consuming, so automation is required | ||||||
2) Test cases that are repeatedly executed | 2) Test Cases for which the requirements are frequently changing | It is difficult to test for multilingual sites manually | ||||||
3) Test Cases that are very tedious or difficult to perform manually | 3) Test cases which are executed on an ad-hoc basis. | You can run automated test cases unattended (overnight) | ||||||
4) Test Cases which are time-consuming | Manual Testing can become boring and hence error-prone. | |||||||
17 | Automation Process | 1) Test Tool Selection > | Select a tool that supports the language in which the AUT is built (in the case of Unit Testing only) |||||
2) Define scope of Automation > | Area of AUT will be automated, The features that are important for the business, Scenarios which have a large amount of data, Common functionalities across applications, Ability to use the same test cases for cross-browser testing | |||||||
3) Planning, Design & Development > | 1) Automation tools selected 2) Framework design and its features 3) In-Scope and Out-of-scope items of automation 4) Automation testbed preparation 5) Schedule and Timeline of scripting and execution 6) Deliverables of Automation Testing | |||||||
4) Test Execution > | Automation Test Scripts are executed during this phase | |||||||
5) Maintenance > | Maintenance is carried out when new automation scripts are added; scripts need to be reviewed and maintained to improve their effectiveness with each successive release cycle. |||||||
18 | Automation Frameworks | 1) Linear Automation Framework | Test data is hardcoded in the test file |||||
2) Data Driven Automation Framework | Data set & test case code are kept separately | e.g. the data set is read from an Excel file ||||||
3) Cucumber / BDD Framework (behaviour driven development) | Behavior Driven Development is a software development approach that allows the tester/business analyst to create test cases in simple text language (English). | Steps written in the order in which a Feature in an application is intended to work | ||||||
4) Keyword Driven Automation Framework | Test steps are written as keywords (e.g. click, input, verify) in an external file; the framework maps each keyword to the code that performs the action ||||||||
5) Modular Automation Framework | The AUT is divided into modules and a separate test script is created for each; module scripts are then combined into larger test suites ||||||||
6) Hybrid Automation Framework (Keyword + Data Driven) | Combines two or more of the above approaches, most commonly keyword driven with data driven ||||||||
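The data-driven idea above can be sketched in a few lines of Python. This is a minimal illustration, not tied to any particular tool: `LOGIN_DATA`, `attempt_login` and `run_data_driven_suite` are hypothetical names, and in a real suite the rows would be read from an Excel/CSV file while the login step drives the AUT through its UI or API.

```python
# Minimal data-driven sketch: test data is kept apart from test logic.
# Hypothetical rows of (username, password, expected_result); in a real
# suite these would come from an Excel/CSV file (csv / openpyxl modules).
LOGIN_DATA = [
    ("valid_user", "valid_pass", True),   # happy path
    ("valid_user", "wrong_pass", False),  # bad password
    ("", "valid_pass", False),            # empty username
]

def attempt_login(username, password):
    # Stand-in for the real login step driven through the AUT's UI or API.
    return username == "valid_user" and password == "valid_pass"

def run_data_driven_suite(data):
    # One test script, many data rows: the core of the data-driven framework.
    return [attempt_login(u, p) == expected for (u, p, expected) in data]

print(run_data_driven_suite(LOGIN_DATA))  # → [True, True, True]
```

Adding a new test case means adding a data row, not writing new script code, which is exactly the maintenance advantage the data-driven framework aims for.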
Automation Framework Design Pattern | Page Object Model | All the web elements of one page are defined in one class | These web elements are accessed through an object of that class (or by a class that extends it) to perform actions on them |||||
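The Page Object Model can be shown with a short Python sketch. `FakeDriver` is a stand-in so the example is self-contained and runnable; with Selenium you would pass a real WebDriver instance and use `find_element` calls inside the page class. The locators and the `LoginPage` class are hypothetical.

```python
# Page Object Model sketch. FakeDriver stands in for a Selenium WebDriver
# so this runs as-is; with Selenium you would pass webdriver.Chrome().
class FakeDriver:
    def __init__(self):
        self.actions = {}
    def type(self, locator, text):
        self.actions[locator] = text
    def click(self, locator):
        self.actions["clicked"] = locator

class LoginPage:
    # All web elements of the login page are defined in this one class.
    USERNAME = "id=username"
    PASSWORD = "id=password"
    SUBMIT = "id=login-btn"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # Actions on the page's elements are exposed as methods,
        # so test scripts never touch locators directly.
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).login("alice", "secret")
print(driver.actions["clicked"])  # → id=login-btn
```

If the UI changes (say the button id changes), only the locator constant in `LoginPage` needs updating, not every test script that logs in.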
Test Environment > | It is a setup which is used to test the application or software. It consists of hardware, software, network and the server. | Production Environment > | It is a setup used to run the software for real business, where end users use the software. It also consists of hardware, software, network and the server. The configuration of the production environment is different from the development and test environments. |||||
19 | Build | A ‘build’ is an attempt to compile all of the source code for the project into a set of executable binaries and compress them into a final product. | ||||||
20 | One Test Cycle | The duration or effort or the time spent to start and finish complete product testing is called one test cycle. | ||||||
21 | Blocker Bug | It is a type of bug due to which we can’t use a major part of the application or can’t access its features at all, and therefore cannot continue testing. It needs to be fixed within a turnaround time of 2-4 hours. ||||||
22 | Respin | Getting one more build within a single test cycle is called a respin. ||||||
23 | Patch | Patch is a modified program which may contain: 1. Modified programs 2. New programs 3. Record of deleted programs | ||||||
24 | Build Engineer | A person who: 1. Manages the source code (has the access rights) 2. Compiles and compresses the source code 3. Installs/uninstalls the build | Release Engineer | Since this person releases the final build to production, he is called a release engineer. ||||
25 | Continuous Development / Continuous Integration | The development environment and test environment are continuously integrated using a CI (continuous integration) tool; this process is known as continuous integration. It is also used to install patches via the CI tool, for example overnight. ||||||
26 | Responsiveness :- | View Port Width ( W ) * Height ( H ) in Pixels | Device | Screen Size in Inches | Device Resolution ( W * H ) in Pixels | |||
Desktop > | 1920 * 1080 | Big Screen | 24 | |||||
1536 * 960 | MacBook Pro 16″ | 16 | 3072 * 1920 | |||||
1680 * 1050 | Laptop 15″ | 15 | 2600 * 1700 | |||||
1440 * 900 | MacBook Air 15″ | 15.4 | 2880 * 1800 | |||||
1366 * 768 | MacBook Air 11″ | 11.6 | 2736 * 1824 | |||||
1280 * 800 | MacBook Air 13.3″ & Pro 13.3″ | 13.3 | 2560 * 1600 | |||||
iPad > | 1024 * 1366 | iPad Pro 12″ | 12.9 | 2048 * 2732 | ||||
834 * 1194 | iPad Pro 11″ | 11 | 1668 * 2388 | |||||
768 * 1024 | iPad | 9.7 | 1536 * 2048 | |||||
Phone > | 428 * 926 | iPhone 12 Pro Max | 6.7 | 1284 * 2778 | ||||
414 * 896 | iPhone 11 | 6.1 | 828 * 1792 | |||||
390 * 844 | iPhone 12 & 12 Pro | 6.1 | 1170 * 2532 | |||||
375 * 667 | iPhone SE (2020) | 4.7 | 750 * 1334 | |||||
360 * 780 | iPhone 12 Mini | 5.4 | 1080 * 2340 | |||||
320 * 568 | iPhone SE | 4 | 640 * 1136 | |||||
Watch > | 224 * 184 | Series 6 | 44 mm | 448 * 368 | ||||
197 * 162 | Series 6 | 40 mm | 394 * 324 |||||
Pixel > | A pixel is one tiny dot on the screen out of which light comes. The screen is made up of a great many such dots, each emitting white or Red/Green/Blue light; together they form the UI on the screen. | Each pixel emits light as white or Red/Green/Blue, or stays black when dead | Each pixel displays light at different intensities, and the RGB color components make up the gamut of colors that appear on a display or computer monitor. | Device Resolutions :- ||||
Standard Resolution | 1280 * 720 | iPad Pro 11 Resolution – 2388 * 1668 ||||||
Full HD | 1920 * 1080 | MacBook Pro 14 Resolution – 3024 * 1964 ||||||
QHD Resolution or 2K | 2560 x 1440 | iPhone mini Resolution – 2340 * 1080 | ||||||
UHD Resolution or 4K | 3840 x 2160 | iPhone Pro Resolution – 2556*1179 | ||||||
HD | 1366 * 768 | |||||||
HD+ | 1600 * 900 | |||||||
Device Pixel Ratio (DPR)/CSS Pixel Ratio > | is the ratio between the physical pixels (screen size or resolution) and CSS pixels (viewport). Depending on device specification, one CSS pixel can equal one or more physical pixels. Modern devices have screens with high pixel density resulting in the difference between screen size (resolution) and viewport. | Screen Size (Resolution) = Viewport size × Device Pixel Ratio. | Viewport size = Screen Size (Resolution) / Device Pixel Ratio. | CSS Pixel Ratio = Screen Size (Resolution) / Viewport size. | ||||
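The three DPR relations above can be checked with a couple of small Python helpers (the function names are illustrative), using the iPhone 12 Pro Max figures from the table: 1284 * 2778 physical pixels, 428 * 926 viewport, DPR 3.

```python
# The three DPR relations, as small helper functions.
def viewport_size(resolution, dpr):
    # Viewport size = Screen Size (Resolution) / Device Pixel Ratio
    w, h = resolution
    return (w / dpr, h / dpr)

def device_pixel_ratio(resolution, viewport):
    # CSS Pixel Ratio = Screen Size (Resolution) / Viewport size
    return resolution[0] / viewport[0]

# iPhone 12 Pro Max: 1284 x 2778 physical pixels, DPR of 3
print(viewport_size((1284, 2778), 3))                 # → (428.0, 926.0)
print(device_pixel_ratio((1284, 2778), (428, 926)))   # → 3.0
```

In a browser, the same ratio is exposed as `window.devicePixelRatio`, which is how responsive-design test tools decide which physical resolution a given viewport maps to.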
27 | Manual Testing Checklist :- | |||||||
1. Web page content should be correct without any spelling or grammatical errors with correct formatting | ||||||||
2. Correct name should be there for application, database, directories, files, pages and URLs. | ||||||||
3. All the fields should be properly aligned. | ||||||||
4. Website should be responsive in all standard screen resolutions, including mobile. ||||||||
5. Home link should be there on every single page. | ||||||||
6. Disabled fields should be grayed out. | ||||||||
7. Check for broken links and images. > things that do not display on screen ||||||||
8. Confirmation message should be displayed for any kind of update and delete operation. | ||||||||
9. Tab order should work properly. | ||||||||
10. Scroll bar should appear only if required, or as the client desires. ||||||||
11. If there is an error message on submit, the information filled in by the user should be retained > filled-in data should remain on the page and focus should shift to the first error ||||||||
12. Title should be displayed on each web page. ||||||||
13. All fields (Textbox, dropdown, buttons, radio button etc) should be accessible by keyboard shortcuts and the user should be able to perform all operations by using keyboard. | ||||||||
14. No variable/setting values will be hardcoded. | ||||||||
15. All the mandatory fields should be validated. | ||||||||
16. Asterisk sign should display for all the mandatory fields. | ||||||||
17. Test that numeric fields do not accept alphabets and that a proper error message is displayed. ||||||||
18. Test the max length of every field to ensure the data is not truncated. > do boundary value analysis ||||||||
19. Test the pop up message (“This field is limited to 500 characters”) should display if the data reaches the maximum size of the field. | ||||||||
20. Amount values should display in currency format. | ||||||||
21. Test all bindings for null/empty values. ||||||||
22. Test that JavaScript works properly in different browsers (IE, Firefox, Chrome, Safari and Opera). ||||||||
23. Test to see what happens if a user deletes cookies while in the site. | ||||||||
24. Test to see what happens if a user deletes cookies after visiting a site. | ||||||||
25. Test that all the data inside combo/list boxes is arranged in chronological order. ||||||||
28. Verify the database names of QA, DEV and Production. The names should be unique. ||||||||
29. Verify the important information like password, credit card numbers etc should display in encrypted format. | ||||||||
30. Verify access to secured and non-secured web pages directly without login; secured pages should not be accessible without authentication. ||||||||
31. Verify the cookies should not store passwords. | ||||||||
32. Verify if, any functionality is not working, the system should not display any application, server, or database information. Instead, it should display the custom error page. | ||||||||
33. Verify the user roles and their rights. For Example The requestor should not be able to access the admin page. | ||||||||
34. Verify the session values in the address bar are in an encrypted format. > refers to the query string ||||||||
35. Verify the cookie information is stored in encrypted format. | ||||||||
36. Verify the application for Brute Force Attacks. | ||||||||
37. To determine the performance, stability and scalability of an application under different load conditions. | ||||||||
38. Delete all un-used setting keys, code, data objects, files and images. | ||||||||
39. Check whether the favicon appears. > small icon shown in the browser tab ||||||||
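The boundary value analysis mentioned in checklist item 18 can be sketched in Python. `boundary_values` and `is_valid_length` are hypothetical helper names; the validation function stands in for the field's real rule.

```python
# Boundary value analysis: for a field with a valid range, the classic
# candidates are min-1, min, min+1, max-1, max, max+1.
def boundary_values(min_len, max_len):
    # Standard boundary test inputs for a length-limited field.
    return [min_len - 1, min_len, min_len + 1,
            max_len - 1, max_len, max_len + 1]

def is_valid_length(text, min_len, max_len):
    # Stand-in for the field's real validation rule.
    return min_len <= len(text) <= max_len

# A field limited to 1-5 characters: lengths 0 and 6 should be rejected,
# lengths 1, 2, 4 and 5 accepted.
for n in boundary_values(1, 5):
    candidate = "a" * max(n, 0)
    print(n, is_valid_length(candidate, 1, 5))
```

The just-outside values (0 and 6 here) are where off-by-one validation bugs most often hide, which is why BVA targets them specifically.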
28 | State Management | The technique of managing the state of stateless pages is called State Management | Every time a request goes to the server, new HTML comes back from the server and all variables are redeclared |||||
Types :- | Client Side State Management | Server Side State Management | ||||||
It uses client resources; it is fast but not secure | It uses server resources; it is secure but slower |||||||
Retains data across page reloads | Retains data across page reloads |||||||
Sub Types :- | Cookies :- | Session State :- | is used to store information & identity. Session data can be stored in three modes :- 1) InProc mode (default) – session values are stored in web server (IIS) memory; they exist while the server runs and are lost when the server restarts. 2) State Server mode – session data is stored on a separate server. 3) SQL Server mode – session data is stored in a database; this is the most secure mode |||||
Query String :- (passing a variable’s value from one page to another) | Caching :- | Caching is the process of storing copies of files in a cache, a temporary storage location, so that they can be accessed more quickly. The cache is stored on the server side and has an expiration time |||||
Hidden Field | Application State | |||||||
View State | Profile Properties | |||||||
Control State | ||||||||
29 | Cookies | are text files with small pieces of data used to identify your computer on a network. Cookie data is stored in the browser as “name-value” pairs and is created by the server when you connect. This data is labeled with an ID unique to you and your computer. | When you return to the website, your web browser sends the cookie data back to the server; the server reads the unique ID and recalls data specific to you from your previous sessions | A cookie stores an expiry date & time, which is set on the server side | Max. size per cookie is 4096 bytes (4 KB) & the max. number is about 20 per website |||
Cookies used for :- | Session management: cookies let a website recognize users and recall their individual login information and preferences. | Personalization: cookies use browsing data to deliver targeted ads that you might enjoy; they are also used for language preferences. | Tracking: shopping sites use cookies to track items users previously viewed, allowing the sites to suggest other goods the user might like and to keep items in the shopping cart while the user continues shopping elsewhere on the website. They also track and monitor performance analytics, like how many times you visited a page or how much time you spent on it. |||||
Types of Cookies :- | Session Cookies > | are used only while navigating a website. They are stored in random access memory and are never written to the hard drive. When the session ends (once the user logs out or exits the website) session cookies are automatically deleted. Session cookies have no expiration date, which signals to the browser that they should be deleted once the session is over. They also help the “back” button work in your browser ||||||
Persistent cookies > | These remain on a computer indefinitely, although many include an expiration date and are automatically removed when that date is reached. Persistent cookies serve two primary purposes. Authentication: they help manage user sessions; they are generated when a user logs into an account via the browser, and they ensure information is delivered to the correct user session by associating account information with a cookie identifier string. Tracking: these cookies track multiple visits to the same site over time. Some online merchants, for example, use cookies to track visits from particular users, including the pages and products viewed; this lets them suggest other items that might interest visitors, and gradually a profile is built from the user’s browsing history on that site. | First-party cookies are created directly by the website you are using. These are generally safer, as long as you are browsing reputable websites ||||||
Third-party cookies are more troubling. They are generated by websites that are different from the pages that the users are currently surfing, usually because they’re linked to ads on that page. Third-party cookies let advertisers or analytics companies track an individual’s browsing history across the web on any sites that contain their ads | ||||||||
Zombie cookies are a form of third-party cookie that is permanently installed on users’ computers. Zombie cookies create backup versions of themselves outside of a browser’s typical cookie storage location and use these backups to reappear in the browser after they are deleted. They are also sometimes called “flash cookies” or “supercookies” and are extremely difficult to remove. Zombie cookies can be used by web analytics companies to track unique individuals’ browsing histories. These types of cookies can also be fabricated by hackers and used to infect your system with viruses and malware. ||||||||
Essential Cookies are now synonymous with the pop-up asking you for your cookie preferences when you first visit a website. Essential cookies are first-party session cookies that are necessary to run the website (such as remembering your login credentials) |
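Cookie name-value pairs and the server-set attributes described above can be demonstrated with Python's standard http.cookies module. The cookie name and attribute values here are made up for illustration; Max-Age and HttpOnly are standard Set-Cookie attributes a server would send.

```python
# Cookies as name-value pairs, built the way a server would before
# sending a Set-Cookie header. Uses only the Python standard library.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # the name-value pair
cookie["session_id"]["max-age"] = 3600   # expiry, set on the server side
cookie["session_id"]["httponly"] = True  # not readable by page scripts

# OutputString() renders the value exactly as it would appear in a
# Set-Cookie response header.
header = cookie["session_id"].OutputString()
print(header)
```

On the next request the browser sends the pair back in a `Cookie: session_id=abc123` header, which is how the server recalls the session described in row 29.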