Notes about Automation & Testing
Last Updated: August 12, 2020 by Pepe Sandoval
If you find the information in this page useful and want to show your support, you can make a donation
Use PayPal
This will help me create more stuff and fix the existing content...
Before we start writing any test, the first thing to ask is whether the thing we want to test depends only on itself or has external dependencies.
Any methods, functions, or classes you want to test that don't depend on anything else go in the unit tests.
Each test suite is a class and inherits from TestCase
A Unit Test tests a single part of a thing, a single unit that doesn't depend on any other parts of our system (see the sketch after this list). If we want to know whether 2 or more things work together, that is no longer unit testing.
If we want to test whether two or more parts of our app work together we need Integration Tests, which test the link between two different parts.
If we want to test whether our application, which can be made of multiple parts or building blocks, works as a whole, then we need System Tests; these test the entire system from top to bottom as if we were a client/user of the system.
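A minimal sketch of the unit level (slugify is a made-up pure function, not part of the blog app): the test exercises one unit in isolation, with no database, network, or other parts of the system involved.

from unittest import TestCase

# made-up pure function under test; it depends on nothing outside itself
def slugify(title):
    return title.strip().lower().replace(" ", "-")

class TestSlugify(TestCase):
    def test_slugify_replaces_spaces_with_dashes(self):
        self.assertEqual(slugify("  My First Post "), "my-first-post")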
Patching allows us to override methods used by our code. For example, you could patch the print function to change its behavior, so it is easier to test a function that prints to the screen than to find a mechanism that reads what was printed to stdout.
Patching is usually done with the with-as construct (with patched_function as context_alias), and then we can assert that print was called with a certain value.
When doing any level of testing it is common that you will need to decide how to check the outcome of a certain flow of operation(s): you can either check that certain functions were called, or check whether some data has the expected values (see the sketch below).
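A minimal sketch of patching print with unittest.mock (greet is a made-up function under test):

from unittest import TestCase
from unittest.mock import patch

# made-up function under test: it prints to the screen
def greet(name):
    print("Hello, " + name + "!")

class TestGreet(TestCase):
    def test_greet_prints_greeting(self):
        # override the built-in print only inside this with block
        with patch("builtins.print") as mocked_print:
            greet("Pepe")
            # check the outcome by asserting print was called with the expected value
            mocked_print.assert_called_with("Hello, Pepe!")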
A setup in testing is usually a function or process that runs before each test.
In Python's unittest a setUp method can be implemented which runs before every test method; the setUpClass method is also a setup, with the difference that it runs once for each TestCase, in other words once for each test class (see the sketch below).
TDD (Test Driven Development) is a dev paradigm that means thinking about how you are going to use/test your modules/functions/code before implementing it: first write the tests of how it will be used, then write the code that makes those tests pass.
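A minimal sketch of the difference between setUp and setUpClass (class and attribute names are made up):

from unittest import TestCase

class TestWithSetups(TestCase):
    @classmethod
    def setUpClass(cls):
        # runs ONCE for the whole test class, e.g. to create something expensive
        cls.shared_resource = "expensive resource created once"

    def setUp(self):
        # runs before EVERY test method, e.g. to get a fresh value per test
        self.counter = 0

    def test_counter_starts_at_zero(self):
        self.assertEqual(self.counter, 0)

    def test_shared_resource_exists(self):
        self.assertEqual(self.shared_resource, "expensive resource created once")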
Execute the following command:
python -m unittest .../testing/blog/tests/unit/post_test.py
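You can also let unittest discover and run every test under a folder instead of pointing at a single file; a sketch, assuming the unit tests live under a tests/unit folder (the path is just an example):

python -m unittest discover -s tests/unit -p "test_*.py"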
Tests can also be run from PyCharm.
The default test discovery pattern is test_*.py, so if you name your files in the unit_test folder like that you should be able to run them all; you can also change the default pattern.

from unittest import TestCase
from app import app

class TestHome(TestCase):
    def test_home(self):
        with app.test_client() as c:
            resp = c.get("/")
It is common to define a base_test class that has all the common functionality your tests will share but contains NO actual tests (a sketch follows below).
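A minimal sketch of that pattern, assuming the Flask-style app from the snippet above (BaseTest and TestHome are illustrative names):

from unittest import TestCase
from app import app

class BaseTest(TestCase):
    """Common setup shared by all test classes; contains NO actual tests."""
    def setUp(self):
        # every test gets a fresh test client
        app.testing = True
        self.client = app.test_client()

class TestHome(BaseTest):
    def test_home(self):
        resp = self.client.get("/")
        self.assertEqual(resp.status_code, 200)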
A REST API is a web service that clients can use to request a service, or in other words to interact with things.
Postman is an API testing tool that can make requests to a web server or web API.
It allows us to simulate high-level system tests.
The pre-request (runs before the request is sent) and test (runs after the response is received) sections can be used to write scripts in JavaScript syntax.
Postman makes variables available to you which you can use to automate tests.
postman.clearEnvironmentVariable("access_token");
tests["Response time is less than 200ms"] = responseTime < 200;
var jsonData = JSON.parse(responseBody);
tests["User created success"] = jsonData.message == "User created successfully.";
tests["Content-Type is present in response"] = postman.getResponseHeader("content-Type");
tests["Content-Type is 'application/json'"] = postman.getResponseHeader("content-Type") === 'application/json';
var jsonData = JSON.parse(responseBody);
postman.setEnvironmentVariable("access_token", jsonData.access_token);
tests["Status code is 200"] = responseCode.code === 200;
tests["Body has access_token"] = responseBody.has("access_token");
tests["Response time is less than 200ms"] = responseTime < 200;
Acceptance testing is a layer that sits on top of system testing; its intention is the same, to test the system as if we were a user, but the key difference is that acceptance tests should be writable by users/customers.
An acceptance test is a very high level test of the entire system.
BDD (Behavior Driven Development) is a way of expressing tests very akin to what a customer would do.
Usually when users/customers give requirements they give them in the form: "When something happens I want something else to happen". This can be translated to a requirement (Scenario) which must have clear assumptions, an event or action, and the result(s) caused by that event or action (Steps).
Users/Customers can also give us a broader requirement that involves multiple Scenarios; a group of scenarios is referred to as a Feature in BDD.
In BDD we can express a requirement in the form of Given-When-Then; an And that follows a Given means the same as another Given.
BDD has the goal of making the tests readable, easy to follow and reusable
BDD can be expressed in Gherkin, a language that uses .feature files to document a feature we want to test; a feature can have one or more scenarios.
Gherkin scenarios are written with the keywords Given, When, Then, And, etc. Values inside quotes ("") in a step can be extracted with regex (re) in our Python step implementations.

Feature: Name of the feature here
  Longer description can be added here

  Scenario: Name of scenario here
    Given assumptions here
    When actions or events here
    Then results
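A minimal sketch of matching Python step implementations with behave, using the regex matcher to extract quoted values (the step texts and the page_content handling are made up for illustration, not the blog app's real steps):

from behave import given, when, then, use_step_matcher

# switch from the default "parse" matcher to regular expressions,
# so quoted values can be captured with named groups
use_step_matcher("re")

@given('there is a post titled "(?P<title>.*)"')
def step_post_exists(context, title):
    # store the extracted value on the context for later steps
    context.title = title

@when('I open the blog home page')
def step_open_home(context):
    # a made-up stand-in for driving the real app/browser
    context.page_content = "<h1>%s</h1>" % context.title

@then('I should see "(?P<expected>.*)" on the page')
def step_should_see(context, expected):
    assert expected in context.page_content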
To pass data between the steps of a scenario we normally use the context variable (see the sketch below).
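A minimal sketch, assuming behave + Selenium with chromedriver reachable on PATH: the browser is created once in environment.py and attached to the context, so every step can reach it.

# environment.py (behave hooks)
from selenium import webdriver

def before_all(context):
    # anything attached to context here is visible to all steps
    context.driver = webdriver.Chrome()

def after_all(context):
    context.driver.quit()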
Make sure to have tests for specific features and full user scenarios; don't feel the need to overly generalize everything, because that's not really necessary at the acceptance test level.
Locators and page models are just a way of structuring your acceptance tests.
This model is used to avoid repeating code, for example if you do the same action in different step functions.
Locators describe how to find an element in a page; this allows us to easily search for an element.
Models describe or represent a page itself.
It is common to define a BasePage class that has the things shared between all our pages, and then specific classes for each page with their extra particular stuff (a sketch follows below).
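A minimal sketch of locators plus page models with Selenium, reusing the "posts" element id from the wait example further down (HomeLocators, BasePage, and HomePage are illustrative names):

from selenium.webdriver.common.by import By

class HomeLocators:
    # locators: how to find each element on the page
    POSTS = (By.ID, "posts")
    NEW_POST_LINK = (By.LINK_TEXT, "New post")

class BasePage:
    # things shared between all our pages
    def __init__(self, driver):
        self.driver = driver

    def find(self, locator):
        return self.driver.find_element(*locator)

class HomePage(BasePage):
    # page-specific stuff built on top of BasePage
    @property
    def posts_section(self):
        return self.find(HomeLocators.POSTS)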
Specify the location of the ChromeDriver executable, either by adding it to the PATH env variable or explicitly in code:

CHROMEDRIVER_PATH = r"C:/repos/chromedriver/v86.0.4240.22/chromedriver.exe"
mybrowser = webdriver.Chrome(executable_path=CHROMEDRIVER_PATH)
The downloaded ChromeDriver is an executable, and when we do webdriver.Chrome() we are basically just executing that executable, so our code needs to be able to find the executable file; this is why we need to specify its location.
If you get the selenium.common.exceptions.WebDriverException: Message: invalid argument: unrecognized capability: chromeOptions error, make sure to check the compatibility of your ChromeDriver and Selenium versions.
To run a .feature file in PyCharm, add a run configuration ("+" and select Python behave); behave (this is the reason for the import behave) is the script that will parse the .feature file and call the appropriate steps, and it comes with the behave python package (if you are using a venv it must be installed inside there).
To wait for an element to become visible you can use an explicit wait:

from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions

try:
    WebDriverWait(driver=context.driver, timeout=5).until(
        expected_conditions.visibility_of_element_located((By.ID, "posts"))
    )
except TimeoutException:
    # log something
    raise Exception()