Commit 6903c703 authored by Gerson Sunyé

New project organization to meet new docker image conventions and scripts.

parent ab7d2169
Pipeline #15508 passed with stages in 36 seconds
image: docker-registry.univ-nantes.fr/sunye-g/docker-asciidoctor-revealjs
cache:
paths:
- target
before_script:
- apt-get update -qy
- curl -sL https://deb.nodesource.com/setup_10.x | bash -
stages:
- build
- deploy
@@ -15,17 +7,18 @@ stages:
build:
stage: build
script:
- bash ./compile.sh
- bash /home/compile.sh
artifacts:
paths:
- target
pages:
stage: deploy
script:
- mkdir -p public
- rsync -r target/site/ public
- du -sh public
- ls -la public
- mkdir -p public
- rsync -r target/site/ public
artifacts:
paths:
- public
only:
- master
- master
\ No newline at end of file
@@ -2,7 +2,7 @@
SOURCE="src/slides/"
TARGET="target/site"
mkdir -p $TARGET
bundle exec asciidoctor-revealjs $SOURCE/*.adoc -D $TARGET
bundle exec asciidoctor-revealjs $SOURCE/*.adoc -R $SOURCE -D $TARGET
rsync -r src/images/ $TARGET/images
if test ! -d $TARGET/reveal.js; then
@@ -10,6 +10,8 @@ if test ! -d $TARGET/reveal.js; then
cd $TARGET
git clone -b 3.8.0 --depth 1 https://github.com/hakimel/reveal.js.git
rm -rf reveal.js/.git
wget https://github.com/highlightjs/highlight.js/archive/9.18.0.tar.gz
tar xvfz 9.18.0.tar.gz
cd reveal.js
wget https://gitlab.univ-nantes.fr/bousse-e/stereopticon/raw/master/install-or-update-stereopticon.sh
bash install-or-update-stereopticon.sh
{
"name": "software-construction-and-evolution",
"version": "0.0.1",
"private": true,
"devDependencies": {
"grunt": "^1.0.0",
"grunt-contrib-connect": "^1.0.2",
"grunt-contrib-watch": "^1.0.0",
"grunt-contrib-copy": "^1.0.0",
"grunt-contrib-jshint": "^1.1.0",
"load-grunt-tasks": "^3.5.2",
"grunt-build-control": "^0.7.1",
"grunt-coffeelint": "0.0.16",
"coffeelint": "^1.16.0"
},
"engines": {
"node": ">=4"
},
"repository": {
"type": "git",
"url": "git@github.com:sunye/software-construction.git"
},
"scripts": {
"test": "grunt test"
}
}
@@ -12,7 +12,21 @@
:includedir: includes
:sectids!:
= Test-Driven Development
= Agile Software Development
* *Introduction*
* TDD and unit tests
* Strategies
* Conclusion
== Agile Manifesto
[.impact]
== Test-Driven Development
== Plan
@@ -12,8 +12,13 @@
:includedir: includes
:sectids!:
= Mapping UML Designs to Code
:revealjs_plugins: src/js/revealjs-plugins.js
//:revealjs_plugins_configuration: revealjs-plugins-conf.js
Behavioral Aspects
[.impact]
Continuous Integration
Improving Software Quality And Reducing Risk
Build Software at Every Change
A build may consist of compilation, testing, inspection, and deployment.
A CI scenario typically goes like this:
Developer commits code to version control repository
CI server detects that changes have occurred in the version control repository, and then executes a build script
CI server generates feedback by e-mailing build results to specified project members
CI server continues to poll for changes in the version control repository
Features of CI
Source Code Compilation
Database Integration
Testing
Inspection
Deployment
Documentation and Feedback
How do you know you are doing CI correctly?
Are you using a version control repository (or SCM tool)?
Is your project’s build process automated and repeatable? Does it occur entirely without intervention?
Are you writing and running automated tests?
Is the execution of your tests a part of your build process?
How do you enforce coding and design standards?
Which of your feedback mechanisms are automated?
Are you using a separate integration machine to build software?
What prevents teams from using CI?
Increased overhead in maintaining the CI system
Too much change
Too many failed builds
Additional hardware/software cost
Developers should be performing these activities
How Do I Get to “Continuous” Integration?
Identify – Identify a process that requires automation
Build – Creating a build script makes the automation repeatable and consistent
Share – By using a version control system
Continuous – Ensure that the automated process is run with every change applied
Is it Continuous Compilation or Continuous Integration?
How much code coverage do you have with your tests?
How long does it take to run your builds?
What is your average code complexity?
How much code duplication do you have?
Are you labeling your builds in your version control repository?
Where do you store your deployed software?
How does CI complement other development practices?
Developer testing
Coding standard adherence
Refactoring
Small release
Collective ownership
CI and you
Commit code frequently
Don’t commit broken code
Fix broken builds immediately
Write automated developer tests
All tests and inspections must pass
Run private builds
Avoid getting broken code
Risk: Lack of Deployable Software
Scenario: “It works on My Machine”
Solution: Use a CI server along with an automated build using tools such as Ant, NAnt, or Rake
Scenario: “Synching with the Database”
Solution: Place all database artifacts in your version control repository
Scenario: “The Missing Click”
Solution: Use a script to automate the deployment process
Risk: Late Discovery of Defects
Scenario: Regression Testing
Solution: Use unit tests at the business, data, and common layers, and run them continuously as part of your CI system
Scenario: Test Coverage
Solution: Run a test coverage tool to assess the amount of source code that is actually executed by the tests
Risk: Lack of Project Visibility
Scenario: Did You Get the Memo?
Solution: An automated mechanism that sends e-mails to affected parties when a build fails
Scenario: Inability to Visualize Software
Solution: An automated code documentation tool
Risk: Low-Quality Software
Scenario: Coding Standard Adherence
Solution: Use Checkstyle and PMD to report lines of code that do not meet the established standards
Scenario: Architectural Adherence
Solution: Use analysis tools such as JDepend or NDepend
Scenario: Duplicate Code
Solution: Use automated inspection tools such as PMD's CPD or Simian
Integration build Scalability and Performance
Gather build metrics
Analyze build metrics
Choose and perform improvements
Reevaluate; repeat if necessary
Continuous Database Integration
Automate Database Integration
Use a local database sandbox
Use a version control repository to share database assets
Give developers the capacity to modify the database
Make the DBA part of the development team
Reduce Code Complexity
The Cyclomatic Complexity Number (CCN) is a plain integer that measures complexity by counting the number of distinct paths through a method
Various studies with this metric over the years have determined that methods with a CCN greater than 10 have a higher risk of defects
The most effective way to reduce cyclomatic complexity is to apply the extract method technique, as sketched below
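A minimal sketch of the extract method technique (a hypothetical `Order` pricing example, not from the course material): each extracted method carries at most one decision, so the CCN of every unit stays low.

[source,java]
----
// Hypothetical example: a pricing method before and after extract method.
class Order {
    final double base;
    final int quantity;
    final boolean loyalCustomer;

    Order(double base, int quantity, boolean loyalCustomer) {
        this.base = base;
        this.quantity = quantity;
        this.loyalCustomer = loyalCustomer;
    }

    // Before: three decisions concentrated in one method (CCN = 4).
    double priceBefore() {
        double total = base;
        if (quantity > 100) total *= 0.90;  // volume discount
        if (loyalCustomer) total *= 0.95;   // loyalty discount
        if (total < 0) total = 0;           // defensive clamp
        return total;
    }

    // After: the decisions are extracted; each method has CCN <= 2.
    double price() {
        return clampToZero(loyaltyDiscount(volumeDiscount(base)));
    }

    private double volumeDiscount(double total) {
        return quantity > 100 ? total * 0.90 : total;
    }

    private double loyaltyDiscount(double total) {
        return loyalCustomer ? total * 0.95 : total;
    }

    private double clampToZero(double total) {
        return Math.max(total, 0);
    }
}
----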
Perform Design Reviews Continuously
Afferent Coupling (Fan In): a high afferent value means the object has responsibility to too many other objects
Efferent Coupling (Fan Out): a high efferent value means the object is not sufficiently independent of other objects
Instability = Efferent Coupling / (Efferent Coupling + Afferent Coupling)
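For illustration (the numbers are made up): a class that depends on three other classes (efferent coupling = 3) and is used by one other class (afferent coupling = 1) has Instability = 3 / (3 + 1) = 0.75, close to the unstable end of the 0-to-1 scale.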
Maintain Organization Standards with Code Audits
Coding standards facilitate a common understanding of a code base among a diverse group of developers
While human code reviews and pair programming can be effective in monitoring coding standards, they do not scale as well as automated tools
A popular code analysis tool for the Java platform is PMD
Reduce Duplicate Code using PMD-CPD or Simian
CI Resources
Automated Inspection Resources
CI Resources
Deployment Resources
Capistrano
FeedBack Resources
Ambient Devices
Google Talk
Jabber
X10
Documentation Resources
Doxygen
JavaDoc
NDoc
Evaluating tools
Compatibility with your environment
Does the tool support your current build configuration?
Does the tool require installation of additional software in order to run?
Is the tool written in the same language as your project?
Reliability
Longevity
Usability
......@@ -203,12 +203,14 @@ image::selenium.jpg[align=center]
* Code reviews can be impressively effective; however, they are run by humans, who tend to be emotional
* Pair Programming has also been shown to be effective when applied correctly
* Automated static code analysis scales more efficiently than humans for large code bases
* What is the difference between inspection and testing
== Inspection and Testing
.What is the difference between inspection and testing?
* Testing is dynamic and executes the software in order to test the functionality
* Inspection analyzes the code based on a set of predefined rules
* Examples of inspection targets include coding “grammar” standards, architectural layering adherence, code duplication, and so on
== Plan
* Introduction
......
@@ -32,15 +32,14 @@ Icons made by http://www.freepik.com[Freepik] from http://www.flaticon.com[Flati
* link:construction.html[Software Construction]
* link:mapping.html[Mapping Designs to Code - Part I]
* link:behavior.html[Mapping Designs to Code - Part II]
* link:build.html[Automated Build]
* link:build.html[Build Automation]
* link:patterns.html[Design Patterns]
* link:refactorings.html[Refactorings]
* link:evolution.html[Software Evolution]
* link:unit-test.html[Unit Testing]
* link:tdd.html[Test Driven Development]
* link:patterns.html[Design Patterns]
* link:test.html[Test Automation]
* link:agile.html[Agile Software Development]
* link:ci.html[Continuous Integration]
== References
* P. Bourque and R.E. Fairley, eds., https://www.swebok.org[Guide to the Software Engineering Body of Knowledge (SWEBOK)], Version 3.0, IEEE Computer Society, 2014.
menu: {
side: 'right'
},
keyboard: {
67: function() {RevealChalkboard.toggleNotesCanvas()}
},
\ No newline at end of file
@@ -12,33 +12,75 @@
:includedir: includes
:sectids!:
////
TODO
https://www.petrikainulainen.net/programming/testing/writing-clean-tests-naming-matters/
https://enterprisecraftsmanship.com/posts/you-naming-tests-wrong/
////
= Test Automation
= Software Testing
== Plan
* *Introduction*
* Unit Test
* Integration Test
* Test Automation
* JUnit
* Conclusion
== Levels of Testing
[.columns]
--
[.col-6]
.Levels of Software Testing
. Unit
. Component
. Integration
. System
. Acceptance
[.col-6]
.During Software Construction
. Unit
. Integration
. Static Analysis
--
////
In software testing, test automation is the use of software separate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes.[1] Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical for continuous delivery and continuous testing.
////
== Golden rule
____
If a method does not have automated tests, it does not work
____
////
Automated testing
Today’s software is very complex, often comprising hundreds of thousands of lines of code spread across many files. Such complex projects use numerous libraries and other dependencies. Changes in code and libraries tend to affect multiple functionalities in the system.
When implementing changes, software developers usually check whether the code works as intended. More often than not, however, they lack the knowledge of the system, the ability, or the time to check whether their changes affect other functionalities of the system they are working on.
The same applies to testers: they are simply unable to check the entire system for errors which could have snuck in when testing new functionalities. If a project actually does perform all testing scenarios manually with every deploy, deploys become tedious and expensive. Every error that is found and subsequently patched requires changes in the codebase, which means testing the entire system from scratch or risking errors. Such practices often cause a long spiral of tests, errors, more tests, more new errors... Or, to make matters worse, sloppy testing that results in errors being pushed to production.
This issue can be solved by implementing automated testing. There are many methods of creating tests and software testing, which is why I’m not going to list them here, just describe the required characteristics:
Tests should be automated, and they should be easy to run. Every developer should be able to run all the tests in their development environment.
Tests should be quick, because otherwise they might be skipped due to taking a long time to complete. Tests should be run every time the software is changed. In an ideal situation, each commit would be tested, but you should definitely test at least every piece of code to be pushed to the master branch.
Tests should cover the largest possible scope of the software. The number of tests and the way they are planned should enable testing all the crucial and significant functions of the software, and the team should be confident that when all tests complete successfully, the software works properly. There are many techniques for estimating test coverage, which can be used to control the number of tests.
Tests should be a part of the build. In an ideal situation, every build should automatically launch all the tests and display their results.
Every change should be checked by testing the entire system. Again, in an ideal situation you would test every single commit, but often it's enough to test the code before merging it with the master branch. When a developer updates the master branch with code that breaks the software, they hinder the work of other people who want to start working on a new task using the latest build – it simply won't work due to errors in code. Teams that do not use automated testing often face an issue where some new commit breaks a part of the software, and everybody who wants to work on it is stalled until the error is fixed. Automated testing prevents such situations.
////
== Plan
* *Introduction*
* Introduction
* *Unit Test*
* Integration Test
* JUnit
* Guidelines
* Conclusion
== Golden rule
____
If a method does not have automated tests, it does not work
____
== Definition
[quote]
@@ -122,8 +164,19 @@ A trustworthy system is made of trustworthy units.
== Plan
* Introduction
* Unit Test
* *Integration Test*
* JUnit
* Conclusion
== TODO
== Plan
* Introduction
* Unit Test
* Integration Test
* *JUnit*
* Guidelines
* Conclusion
== JUnit
@@ -192,6 +245,7 @@ class IntervalTest {
== Improving the tests
* Check if `-1` and `11` do not belong to the interval
[source,java]
----
import org.junit.jupiter.api.Test;
@@ -382,9 +436,12 @@ class IntervalTest {
== Plan
== Plan
* Introduction
* Unit Test
* Integration Test
* JUnit
* Guidelines
* *Conclusion*
[%notitle]
== Major Structural Techniques
* Statement Testing: A test strategy in which each statement of a program is executed at least once.
* Branch Testing: Testing in which all branches in the program source code are tested at least once.
* Path Testing: Testing in which all paths in the program source code are tested at least once.
* Condition Testing: Testing in which each condition that determines a branch is exercised with different outcomes of its comparisons
* Expression Testing: Testing in which expressions in the application are exercised with different values
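A minimal sketch contrasting the first three criteria above (a hypothetical `Classifier` example, not from the course): one test can reach every statement, branch coverage also needs the false branches, and path coverage needs every combination.

[source,java]
----
// Hypothetical example contrasting statement, branch, and path coverage.
class Classifier {
    static int classify(int a, int b) {
        int result = 0;
        if (a > 0) {      // branch 1
            result += 1;
        }
        if (b > 0) {      // branch 2
            result += 2;
        }
        return result;
    }
    // classify(1, 1) alone executes every statement (statement coverage);
    // branch coverage additionally needs the false outcomes, e.g. classify(0, 0);
    // path coverage needs all four combinations:
    // (1, 1), (1, 0), (0, 1), (0, 0).
}
----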
== Functional testing techniques:
These are black-box testing techniques which test the functionality of the application.
Some functional testing techniques:
* Input domain testing: This testing technique concentrates on the size and type of every input object, in terms of boundary value analysis and equivalence classes.
* Boundary Value: Boundary value analysis is a software testing design technique in which tests are designed to include representatives of boundary values.
* Syntax checking: This is a technique which is used to check the Syntax of the application.
* Equivalence Partitioning: This is a software testing technique that divides the input data of a software unit into partitions of data from which test cases can be derived (see the JUnit sketch below)
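A minimal JUnit 5 sketch of boundary value analysis and equivalence partitioning. The `Interval` class here is a hypothetical stand-in for the one used in the JUnit slides; its constructor and `contains` method are assumptions.

[source,java]
----
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Hypothetical stand-in for the course's Interval class.
class Interval {
    private final int low, high;
    Interval(int low, int high) { this.low = low; this.high = high; }
    boolean contains(int value) { return low <= value && value <= high; }
}

class IntervalBoundaryTest {
    private final Interval interval = new Interval(0, 10);

    @Test
    void boundsBelongToTheInterval() {             // boundary values
        assertTrue(interval.contains(0));
        assertTrue(interval.contains(10));
    }

    @Test
    void valuesJustOutsideTheBoundsAreRejected() { // boundary +/- 1
        assertFalse(interval.contains(-1));
        assertFalse(interval.contains(11));
    }

    @Test
    void oneRepresentativePerEquivalenceClass() {  // equivalence partitioning
        assertFalse(interval.contains(-50));       // "below" partition
        assertTrue(interval.contains(5));          // "inside" partition
        assertFalse(interval.contains(50));        // "above" partition
    }
}
----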
== Error-based Techniques
The best person to know the defects in a piece of code is the person who designed it.
* A few of the error-based techniques:
* Fault seeding: known defects are put into the code, and testing continues until they are all found.
* Mutation Testing: This is done by mutating certain statements in your source code and checking whether your test code is able to find the errors (a minimal illustration follows this list). Mutation testing is very expensive to run, especially on very large applications.
* Historical Test data: This technique calculates the priority of each test case using historical information from the previous executions of the test case.
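A minimal illustration of the idea behind mutation testing (hypothetical example; real tools such as PIT generate and run mutants automatically):

[source,java]
----
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class MutationExampleTest {
    // Code under test.
    static boolean isAdult(int age) {
        return age >= 18;
    }

    // A typical mutant a tool might generate: ">=" mutated to ">".
    static boolean isAdultMutant(int age) {
        return age > 18;
    }

    // This boundary test passes on the original but would fail on the
    // mutant, so the mutant is "killed": the suite can detect the change.
    @Test
    void eighteenIsAdult() {
        assertTrue(isAdult(18));
        // assertTrue(isAdultMutant(18)); // would fail -> mutant killed
    }
}
----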
== Guidelines
* Keep unit tests small and fast
* Ideally the entire test suite should be executed before every code check-in. Keeping the tests fast reduces the development turnaround time.
* Unit tests should be fully automated and non-interactive
* The test suite is normally executed on a regular basis and must be fully automated to be useful. If the results require manual inspection the tests are not proper unit tests.
* Make unit tests simple to run
* Configure the development environment so that single tests and test suites can be run by a single command or a one button click.
* Measure the tests
* Apply coverage analysis to the test runs so that it is possible to read the exact execution coverage and investigate which parts of the code are executed and which are not.
* Fix failing tests immediately
* Each developer should be responsible for making sure a new test runs successfully upon check-in, and that all existing tests run successfully upon code check-in. If a test fails as part of a regular test execution, the entire team should drop what they are currently doing and make sure the problem gets fixed.
* Keep testing at unit level
* Unit testing is about testing classes. There should be one test class per ordinary class and the class behavior should be tested in isolation. Avoid the temptation to test an entire work-flow using a unit testing framework, as such tests are slow and hard to maintain. Work-flow testing may have its place, but it is not unit testing and it must be set up and executed independently.
* Start off simple
* One simple test is infinitely better than no tests at all. A simple test class will establish the target class test framework; it will verify the presence and correctness of the build environment, the unit testing environment, the execution environment, and the coverage analysis tool; and it will prove that the target class is part of the assembly and that it can be accessed.
* Name tests properly
* Make sure each test method tests one distinct feature of the class being tested, and name the test methods accordingly. The typical naming convention is test[What], such as testSaveAs(), testAddListener(), testDeleteProperty(), etc.
* Keep tests close to the class being tested
* If the class to test is Foo, the test class should be called FooTest (not TestFoo) and kept in the same package (directory) as Foo. Keeping test classes in separate directory trees makes them harder to access and maintain. Make sure the build environment is configured so that the test classes don't make their way into production libraries or executables.
* Test public API
* Unit testing can be defined as testing classes through their public API. Some testing tools make it possible to test the private content of a class, but this should be avoided as it makes the tests more verbose and much harder to maintain. If there is private content that seems to need explicit testing, consider refactoring it into public methods in utility classes instead. But do this to improve the general design, not to aid testing.
* Think black-box
* Act as a 3rd party class consumer, and test if the class fulfills its requirements. And try to tear it apart.
* Think white-box
* After all, the test programmer also wrote the class being tested, and extra effort should be put into testing the most complex logic.
* Test the trivial cases too
* It is sometimes recommended that all non-trivial cases should be tested and that trivial methods like simple setters and getters can be omitted. However, there are several reasons why trivial cases should be tested too:
* Trivial is hard to define. It may mean different things to different people.
* From a black-box perspective there is no way to know which part of the code is trivial.
* The trivial cases can contain errors too, often as a result of copy-paste operations.
* Focus on execution coverage first
* Differentiate between execution coverage and actual test coverage. The initial goal of a test should be to ensure high execution coverage. This will ensure that the code is actually executed on some input parameters. When this is in place, the test coverage should be improved. Note that actual test coverage cannot be easily measured (and is always close to 0% anyway).
* Cover boundary cases
* Make sure the parameter boundary cases are covered. For numbers, test negatives, 0, positive, smallest, largest, NaN, infinity, etc. For strings test empty string, single character string, non-ASCII string, multi-MB strings etc. For collections test empty, one, first, last, etc. For dates, test January 1, February 29, December 31 etc. The class being tested will suggest the boundary cases in each specific case. The point is to make sure as many as possible of these are tested properly as these cases are the prime candidates for errors.
* Provide a random generator
* When the boundary cases are covered, a simple way to improve test coverage further is to generate random parameters so that the tests can be executed with different input every time. To achieve this, provide a simple utility class that generates random values of the base types like doubles, integers, strings, dates etc. The generator should produce values from the entire domain of each type.
* Test each feature once
* When being in testing mode it is sometimes tempting to assert on "everything" in every test. This should be avoided as it makes maintenance harder. Test exactly the feature indicated by the name of the test method. As for ordinary code, it is a goal to keep the amount of test code as low as possible.
* Use explicit asserts
* Always prefer assertEquals(a, b) to assertTrue(a == b) (and likewise), as the former gives more useful information about what exactly is wrong if the test fails; see the JUnit sketch after this list. This is particularly important in combination with the random value parameters described above, when the input values are not known in advance.
* Provide negative tests
* Negative tests intentionally misuse the code and verify robustness and appropriate error handling.
* Design code with testing in mind
* Writing and maintaining unit tests are costly, and minimizing public API and reducing cyclomatic complexity in the code are ways to reduce this cost and make high-coverage test code faster to write and easier to maintain.
* Don't connect to predefined external resources
* Unit tests should be written without explicit knowledge of the environment context in which they are executed, so that they can be run anywhere at any time. Any resources a test requires should instead be made available by the test itself.
* Know the cost of testing
* Not writing unit tests is costly, but writing unit tests is costly too. There is a trade-off between the two, and in terms of execution coverage the typical industry standard is at about 80%.
* Prioritize testing
* Unit testing is a typical bottom-up process, and if there is not enough resources to test all parts of a system priority should be put on the lower levels first.
* Prepare test code for failures
* If the first assertion is false, the code crashes in the subsequent statement and none of the remaining tests will be executed. Always prepare for test failure so that the failure of a single test doesn't bring down the entire test suite execution.
* Write tests to reproduce bugs
* When a bug is reported, write a test to reproduce the bug (i.e. a failing test) and use this test as a success criteria when fixing the code.
* Know the limitations
* Unit tests can never prove the correctness of code.
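A minimal JUnit 5 sketch of the explicit-asserts guideline referenced above (hypothetical temperature-conversion example, not from the course):

[source,java]
----
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ExplicitAssertTest {
    @Test
    void celsiusToFahrenheit() {
        double fahrenheit = 9.0 / 5.0 * 100 + 32;
        // Preferred: on failure JUnit reports "expected <212.0> but was <...>".
        assertEquals(212.0, fahrenheit, 1e-9);
        // Discouraged: assertTrue(fahrenheit == 212.0) would only report
        // "expected <true> but was <false>", hiding the actual value.
    }
}
----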
* Structural, Functional & Error-based Techniques
* Structural Techniques:
** These are white-box testing techniques that use an internal perspective of the system to design test cases based on its internal structure. They require programming skills to identify all paths through the software. The tester chooses test case inputs to exercise paths through the code and determines the appropriate outputs.
'''
{src: 'mde-languages.min.js', async:true, callback: function() { hljs.initHighlightingOnLoad(); }}
\ No newline at end of file
[
"index.md",
"construction.md",
"mapping.md",
"behavior.md",
"evolution.md",
"refactorings.md",
"unit-test.md",
"tdd-en.md",
"generation.md",
"patterns.md",
"ci.md",
"maven.md"
]
\begin{frame} {Utility methods}
\vspace{1cm}
\begin{itemize}
\item These are methods with no reference to this (or self, current, \ldots).
\item They can often be placed elsewhere: check the parameters.
\item They should at least be marked as such (e.g. «utility»).
\end{itemize}
\end{frame}
\begin{frame} {Rarely used instance attributes}
\vspace{1cm}
\begin{itemize}
\item If some instances use them and others do not: create
subclasses.
\item If they are used only during one particular operation, consider creating
an operator object.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Same name, different meanings.}
%\framesubtitle{}
Leads to misinterpretation of the code:
\begin{itemize}
\item Identical identifiers: reusing a local variable for another purpose is often the sign of an overly long method.
\item Overloaded vocabulary (in English): Order, Serialize, Thread, etc.
\end{itemize}
\end{frame}
\begin{frame} {Coupled parameters}
\vspace{1cm}
\begin{itemize}
\item They often hide a missing abstraction.
\item (e.g. Point).
\item Once the class has been created, it is often easy to add
specific behavior to it.
\end{itemize}
\end{frame}