Set-1 FAQs
  • Requirement gathering and analysis:

    A contract is signed between the client and the marketing team. A Business Analyst (BA) then gathers the requirements from the client and prepares a document called the Business Requirement Document (BRD) or Software Requirements Specification (SRS).

  • Design:

    Next, a document called the Design Specification Document is prepared, which details how the front end and back end will interact.

  • Development:

    The developers then build the software in the chosen programming language and test their own code first, a step called Unit Testing (a minimal sketch follows).
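A minimal sketch of what such a unit test can look like, assuming Python with pytest; discount_price() is a hypothetical stand-in for real application code, not from any specific project:

```python
import pytest

def discount_price(price: float, percent: float) -> float:
    """Application code under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_applies_correctly():
    # A unit test checks one small piece of code in isolation.
    assert discount_price(200.0, 10) == 180.0

def test_invalid_percent_is_rejected():
    # Invalid input should raise an error rather than return a wrong value.
    with pytest.raises(ValueError):
        discount_price(200.0, 150)
```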

  • Testing:

    Once the developers have coded the functionality, they move the code (build) into the testing environment. From there, testers test the application following the STLC (Software Testing Life Cycle) flow.

  • Deployment:

    Once the application has passed the testing phase, it is ready for deployment and is released to the live environment. Typically, this happens at non-peak hours.

  • Operation and Maintenance:

    Maintenance of the software includes upgrades, repairs, and fixes if the software breaks.

  • Software Testing Life Cycle is a sequence of activities conducted to perform Software Testing. There are five phases of the STLC:

    • Requirement Analysis
    • Test Planning
    • Test Design
    • Test execution
    • Sign Off

    Each of these five phases has specific Entry and Exit criteria. Entry criteria are the prerequisites needed to start a phase, and Exit criteria are the items that must be completed before a phase can be closed.

  • Requirement Analysis

    Requirement Analysis starts with the BR Document and the Design Specification Document. We go through the requirements in the BR Document and understand them. Any queries are recorded in a query log, which the Business Analyst (BA) clarifies in a walkthrough session, and application access is obtained from the client. At the end of this phase, the scope of testing is finalized.

  • Test Planning

    With the scope finalized in requirement analysis, test planning begins. The test lead prepares the Test Plan document, listing in-scope and out-of-scope items, i.e., which areas will and will not be covered in testing. The document also explains the types of testing to be performed and the tools to be used, estimates dates for test case preparation, test case review, execution, and so on, and covers resource planning, such as how many testers are needed. The output of this phase is the Test Plan document.

  • Test Design

    Using the Test Plan document, test design begins. We prepare the test case document for our requirements, capturing the Test Scenario, Test Cases, Test Data, Test Step Description, Expected Result, Actual Result, and Status of each test case. Once a test case is prepared, it is reviewed by a peer or lead and then approved by the Business Analyst. For automation testing, we do skeleton scripting at this stage (a sketch follows). The output of this phase is the Test Case Document.
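A hedged sketch of what a skeleton script drafted during Test Design might look like, assuming Python with Selenium WebDriver; the URL, locators, and test data are hypothetical placeholders to be filled in once the build is available:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_with_valid_credentials():
    """Test Scenario: verify a registered user can log in."""
    driver = webdriver.Chrome()  # assumes a local ChromeDriver setup
    try:
        # Step 1: open the login page (placeholder URL).
        driver.get("https://example.com/login")
        # Step 2: enter username and password (placeholder locators and test data).
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("P@ssw0rd!")
        driver.find_element(By.ID, "login").click()
        # Expected Result: home page is displayed (placeholder assertion).
        assert "Home" in driver.title
    finally:
        driver.quit()
```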

  • Test Execution

    Once the developers provide the build, we start execution using the test case document. Any defect found during execution is raised and tracked to closure. Once all test cases pass, the test results are attached to the requirements. The output of this phase is the Test Results.

  • Sign Off

    Once test execution is completed, the testing team manager analyzes the test results in the Sign Off phase and drafts a sign-off mail approving the application for release.

In the Defect Management Tool, we click the Create button and select the issue type as Bug. We enter the defect name in the Summary field and detailed information in the Description, mentioning the environment, platform, and browser in which the bug was found. We then provide the steps to reproduce along with the expected result, actual result, and test data, set the priority of the bug, attach screenshots, and link the bug to the story so it can be tracked easily. Finally, we assign the bug to the developer and click Create, after which the bug is tracked through the bug life cycle.
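As an illustration, the same bug could also be raised programmatically. The endpoint and payload below follow Jira's REST API v2 conventions as an assumed example of a defect management tool; the instance URL, project key, credentials, and all field values are hypothetical placeholders:

```python
import requests

payload = {
    "fields": {
        "project": {"key": "PROJ"},        # hypothetical project key
        "issuetype": {"name": "Bug"},
        "summary": "Login button unresponsive on checkout page",
        "description": (
            "Environment: QA | Platform: Windows 11 | Browser: Chrome\n"
            "Steps to Reproduce:\n1. Open /checkout\n2. Click 'Login'\n"
            "Expected: login dialog opens\nActual: nothing happens\n"
            "Test Data: user=testuser"
        ),
        "priority": {"name": "High"},
    }
}

response = requests.post(
    "https://your-jira.example.com/rest/api/2/issue",  # placeholder instance URL
    json=payload,
    auth=("username", "api_token"),                    # placeholder credentials
)
print(response.status_code, response.json().get("key"))  # e.g. "PROJ-123"
```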
When we find a defect for the first time, we raise it as a bug and assign it to the developer. The developer opens the bug and verifies whether it is valid. If it is valid, the developer fixes it and moves it to the testing team for retesting. After receiving the new build, we retest the bug to check whether it is fixed. If it is fixed, we verify and close the bug; if not, we reopen it and assign it back to the developer. Developers can also reject a bug as Duplicate (the same bug was already raised by someone else), Not a Bug (they feel it is not a genuine bug), or Deferred (it can be fixed in an upcoming sprint or release, mostly for low-priority bugs).
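A small sketch modelling this bug life cycle as a table of allowed state transitions; the state names mirror the description above, and real defect tools let administrators customise such workflows:

```python
# Allowed moves through the bug life cycle described in the text.
ALLOWED_TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Open"},
    "Open":     {"Fixed", "Duplicate", "Not a Bug", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Closed", "Reopened"},  # fixed: verify & close; else reopen
    "Reopened": {"Assigned"},
}

def move(current: str, target: str) -> str:
    """Advance the bug to a new state, rejecting invalid transitions."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Invalid transition: {current} -> {target}")
    return target

state = "New"
for step in ("Assigned", "Open", "Fixed", "Retest", "Closed"):
    state = move(state, step)
print(state)  # Closed
```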
  • Smoke Testing: This testing is done to make sure the build we received from the development team is testable. It is also called the Day 0 check, and it helps avoid wasting testing time on an unstable build (a pytest marker sketch follows this group of testing types).
  • Regression Testing: This testing is done to make sure the existing functionalities of the application are not impacted whenever new functionality is added.
  • Sanity Testing: This testing is done during the release phase to check the main functionalities of the application without going deeper. If release time constraints prevent full regression testing of the build, sanity testing covers that gap by checking the main functionalities.
  • Retesting: Once the developer fixes the bug, we need to test again whether the bug we raised is fixed or not in the latest build.
  • Positive Testing: It determines what the system is supposed to do. It helps check whether the application satisfies the requirements.
  • Negative Testing: It determines what the system is not supposed to do. It helps find defects in the software.
  • Formal Testing: A process where testers test the application following pre-planned procedures and proper documentation.
  • Informal Testing: A process where testers test the application without any pre-planned procedures or proper documentation.
  • User Acceptance Testing: UAT Testing is the final stage of software testing where real users test the application to make sure it meets their needs and works as expected before going live.
  • Exploratory Testing: When testers explore the application without predefined test cases, using their creativity to find issues. It’s like testing the app by just interacting with it freely, looking for bugs as they go.
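As referenced in the Smoke Testing item above, here is a minimal sketch of how such test types are often tagged in an automated suite, assuming Python with pytest custom markers (our own naming, which would be registered in pytest.ini); a subset such as the Day 0 smoke check can then be run with `pytest -m smoke`:

```python
import pytest

@pytest.mark.smoke
def test_application_launches():
    # Placeholder: verify the build is testable at all (Day 0 check).
    assert True

@pytest.mark.sanity
def test_main_purchase_flow():
    # Placeholder: quick check of a main functionality before release.
    assert True

@pytest.mark.regression
def test_existing_features_unaffected():
    # Placeholder: existing behaviour still works after new changes.
    assert True
```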
  • Equivalence Class Partitioning: In the equivalence class partitioning technique, the input domain is divided into partitions (classes) of valid and invalid inputs, and one representative test case is picked from each partition (a parametrized sketch follows the example).
    Example: Consider that we have to accept an age between 18 and 56.
    Here the valid inputs are 18-56 and the invalid inputs are <=17 and >=57.
    This gives one valid and two invalid partitions, so three test cases.
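The age partitions from this example, written as a parametrized pytest check with one representative value per partition; is_valid_age() is a hypothetical validator standing in for the application logic:

```python
import pytest

def is_valid_age(age: int) -> bool:
    return 18 <= age <= 56

@pytest.mark.parametrize("age,expected", [
    (30, True),    # valid partition: 18-56
    (10, False),   # invalid partition: <= 17
    (70, False),   # invalid partition: >= 57
])
def test_age_partitions(age, expected):
    assert is_valid_age(age) == expected
```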
  • Decision Table Technique: In the decision table technique, we deal with combinations of inputs. To identify test cases with a decision table, we consider conditions and actions: conditions are the inputs and actions are the outputs (a sketch follows the example).
    Example: Login validation, where the user is allowed to log in only when both the username and password are correct.
    Condition 1: Enter a valid username and a valid password, and click Login.
    Action 1: Display the home page.
    Condition 2: Enter an invalid username and a valid password, and click Login.
    Action 2: Display an error message: invalid username.
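The login decision table from this example expressed in code, with each row pairing a combination of input conditions with its expected action; authenticate() is a hypothetical stand-in for the application:

```python
import pytest

def authenticate(user_ok: bool, pass_ok: bool) -> str:
    if user_ok and pass_ok:
        return "home page"
    if not user_ok:
        return "invalid username"
    return "invalid password"

@pytest.mark.parametrize("user_ok,pass_ok,action", [
    (True,  True,  "home page"),         # Condition 1 -> Action 1
    (False, True,  "invalid username"),  # Condition 2 -> Action 2
    (True,  False, "invalid password"),  # remaining combinations
    (False, False, "invalid username"),
])
def test_login_decision_table(user_ok, pass_ok, action):
    assert authenticate(user_ok, pass_ok) == action
```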
  • State Transition Technique: Using state transition testing, we pick test cases from an application where we need to test different system transitions. We can apply this when an application gives a different output for the same input, depending on what has happened in the earlier state.
    Example: Logging in with an invalid username and password three times blocks the account until the password is changed.
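A sketch of the three-failed-logins example as a state machine, showing how the same input (a wrong password) produces a different output depending on the earlier state; the class is purely illustrative:

```python
class LoginAccount:
    def __init__(self):
        self.failures = 0
        self.blocked = False

    def login(self, password_correct: bool) -> str:
        if self.blocked:
            return "blocked: change password to continue"
        if password_correct:
            self.failures = 0
            return "logged in"
        self.failures += 1
        if self.failures >= 3:
            self.blocked = True
            return "account blocked"
        return f"invalid credentials (attempt {self.failures})"

account = LoginAccount()
print(account.login(False))  # invalid credentials (attempt 1)
print(account.login(False))  # invalid credentials (attempt 2)
print(account.login(False))  # account blocked
print(account.login(True))   # blocked: change password to continue
```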
  • Boundary Value Analysis: Boundary value analysis is based on testing the boundary values of valid and invalid partitions. Every partition has its maximum and minimum values and these maximum and minimum values are the boundary values of a partition.
    Example: Suppose we have to enter an amount between 100 and 1000.
    Here we test around the boundaries: for 100 we take 99 (invalid), 100 (valid), and 101 (valid); for 1000 we take 999 (valid), 1000 (valid), and 1001 (invalid).
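The boundary values from this example as a parametrized test; is_valid_amount() is a hypothetical validator for illustration:

```python
import pytest

def is_valid_amount(amount: int) -> bool:
    return 100 <= amount <= 1000

@pytest.mark.parametrize("amount,expected", [
    (99,   False),  # just below the lower boundary
    (100,  True),   # lower boundary
    (101,  True),   # just above the lower boundary
    (999,  True),   # just below the upper boundary
    (1000, True),   # upper boundary
    (1001, False),  # just above the upper boundary
])
def test_amount_boundaries(amount, expected):
    assert is_valid_amount(amount) == expected
```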
  • Error Guessing Technique: It is used to find bugs in a software application based on the tester's prior experience. No specific rules are followed here.
    Example: In login page validation, test a wrong password format by typing 12345 instead of a strong password with letters, numbers, etc.; we expect an error like "Password too weak."
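The weak-password guess from this example, plus a few similar values an experienced tester might try; validate_password() is a hypothetical stand-in for a rule like "letters, numbers, and a minimum length":

```python
import pytest

def validate_password(pw: str) -> str:
    # Hypothetical rule: at least 8 non-blank characters, mixing letters and digits.
    if len(pw.strip()) < 8 or pw.isdigit() or pw.isalpha():
        return "Password too weak"
    return "OK"

@pytest.mark.parametrize("pw", ["12345", "password", "        ", ""])
def test_error_guessing_weak_passwords(pw):
    # Values chosen by experience-based guessing, not by a formal technique.
    assert validate_password(pw) == "Password too weak"
```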
  • High Priority & Low Severity - The "Submit" button on a form is misaligned but still works perfectly when clicked. This is high priority because it is a visible user-interface flaw, but low severity because the functionality is not affected.
  • Low Priority & High Severity – An application crashes when trying to submit a form, but this feature is rarely used. This is low priority because very few users interact with the feature, but high severity because a crash is a critical failure whenever it occurs.
  • High Priority & High Severity - The payment gateway of an application is down, and users can't complete their purchases. This is high priority because the issue directly prevents users from making purchases, and high severity because the system's core functionality is completely broken.
  • Low Priority & Low Severity - A minor typo in the "About Us" section of a website that doesn't affect the overall meaning. This issue is low priority because it’s just a typo in non-critical content. It’s also low severity because it doesn't cause any functional issues, so it’s not a major concern.
In my project, the Product Owner (PO) initially creates and maintains the user stories in the product backlog and prioritizes them. In the Sprint Grooming meeting, the Product Owner takes a set of user stories and explains them to us; we need to understand the stories and clarify any queries with the Product Owner. The Scrum Master then conducts the Sprint Planning meeting, where developers assign points to each user story based on complexity, using the Planning Poker technique. In my project, we usually take 40 points for a sprint. Once the story points are assigned, the Scrum Master moves the selected stories from the Product Backlog to the Sprint Backlog and starts the sprint. During the sprint, developers work on the user stories while we testers prepare the test case documents; once the developers provide the builds, we start testing. Every sprint has a Daily Standup call where we discuss what we worked on yesterday, what we will work on today, and any blockers. We try to close all stories within the sprint; any story not completed spills over to the next sprint. We then have a Sprint Review meeting with the Product Owner and Scrum Master, where we testers demo our completed stories to the Product Owner. Finally, the Scrum Master conducts the Sprint Retrospective meeting, where we discuss what went right, what went wrong, and what we can do better in the next sprint.
  • Product Owner defines the requirements of the product, prioritizes them according to market value and profitability, and decides the release date.
  • Scrum Master manages the scrum process and is responsible for setting up the team and sending the scrum meeting invites.
  • Scrum Team consists of Developers and Testers.
  • User Story: It is a short explanation of a requirement.
  • Acceptance Criteria: It is the detailed explanation of a requirement. Each user story has its own acceptance criteria.
  • Product Backlog: It is the collection of user stories captured for a scrum project, managed by the Product Owner.
  • Sprint Backlog: It is the set of user stories taken up for a particular sprint.
  • Burn-up and Burn-down Chart: The burn-up chart illustrates the amount of work completed in a project, while the burn-down chart illustrates the amount of work remaining (a small computation sketch follows this list).
  • Epic: An Epic is a collection of user stories grouped by functionality.
  • Sprint: It is a set period of time to complete the user stories, usually 2 to 3 weeks.
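A tiny sketch of the numbers behind a burn-down chart, as referenced in the chart definition above, reusing the 40-point sprint figure from the project description; the per-day completed points are made-up sample data:

```python
sprint_points = 40
daily_completed = [0, 5, 8, 3, 6, 7, 4, 5, 2, 0]  # hypothetical 10 working days

remaining = sprint_points
for day, done in enumerate(daily_completed, start=1):
    remaining -= done
    print(f"Day {day:2d}: remaining = {remaining}")  # burn-down: work left
# A burn-up chart would plot the cumulative completed total instead.
```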