This year EclipseCon Europe will again feature a Project Quality Day track on Wednesday, 4 November. This is a unique opportunity to talk about quality assurance, testing, and good development practices in Eclipse projects. I was asked to be a member of the Program Committee in charge of selecting talks for Project Quality Day.
The call for papers ends on the 31st of July, with an early bird selection on the 20th. The early bird selection is a great chance to stand out from the crowd and let people know about your project in advance. Five papers had been submitted by the 20th, all of them relevant and interesting, and it was a hard task for the program committee to select only one.
Some of the criteria used to select early bird talks are scope, neutrality, relevance, and novelty: a talk should bring new insights, raise interest in quality topics, and provide practical help to projects and teams. Of course, there is more to it than that: gut feeling and fun are also considered. Note that Alex Schladebeck, our PC leader, also published a review of the early bird selection process. You may want to have a look for a complementary perspective on the selection.
This year's early bird pick is How I learned to test without sleep, by Tobias Geyer. The remaining talks are nevertheless of high quality, and I will give a brief overview of them here to show which topics are popular these days.
Testing is an activity which requires the full attention of the person performing it and which engages both sides of the brain. While this statement certainly is true, sometimes reality prevents us from living up to it. Regardless of the reason, on some days "full attention" is nowhere to be found and at least one half of the brain is in deep slumber. In this talk I'll share the techniques which helped me to provide valuable testing to my team even on those occasions.
This talk is our early bird selection: it is fun, useful, and deals with the day-to-day activities and tricks of quality management. Sharing experiences and testing practices is a great way to improve the overall maturity of our community and keep our testing effective, even when we are not at our best...
How much should we test? And when should we stop testing? Since the dawn of software testing, these questions have tormented developers. But before we are able to answer how much we should test, we must first know how much we are testing. In this talk, I am going to report the surprising findings of a large-scale case study on the state of developer testing, facilitated by the purpose-built Eclipse plugin WatchDog. The open-ended case study launched after last year’s EclipseCon and has involved more than 1,500 software developers to date, resulting in over 15 years of recorded and analyzed work time in the IDE.
The question of how much testing is enough is an age-old one. Many research papers have tried to address it, and the cost of testing and of errors in the final product is so high that the issue is more prominent than ever. Both experienced testers and newcomers stand to benefit from the results of this study and its conclusions. This talk also has the advantage of using Eclipse technology to bring new insights, based on feedback from real-world situations.
RedDeer provides a way to test your Eclipse applications automatically. RedDeer comes with a huge set of features such as requirements for tests, screenshots and screencasting, wait conditions, code generation, the RedDeer launcher, customized test flow, several levels of logging, and many more. Its level of abstraction is friendly to any user: it goes from a low level of abstraction (widgets) to a high level of abstraction (wizards, projects...).
This talk is an introduction to RedDeer, a framework for automatically testing Eclipse applications. The talk covers the architecture and features of the system and demonstrates its use on a sample project. It is a great way to discover and learn a new tool for your project's quality!
Software systems are complex, intangible structures which are hard to understand. Therefore, the visualization of software properties, structure, and dependencies in different views plays an important role within the lifecycle of a software system. I started to work with software visualization technologies, and especially with 3D visualization, during my master's thesis at NICTA in Sydney in 2012. The result of the thesis is a tool for 3D visualization of software structures and dependencies that can be integrated into the software development process. I then analyzed current platforms for software quality analysis and found SonarQube. The platform is widespread in industry, and I started to rewrite my tool from a visualization platform into a SonarQube plugin. Now, all analysis results (metrics) can be used for the visualization.
Software metrics visualization helps people better understand a system's architecture and behaviour. This talk is an interesting dive into software structure and dependency metrics, providing insights on what they are and how they can be put to practical use through this SonarQube plugin.
Bonitasoft is a long-time user of the SWTBot framework. The framework has been used to test the UI since the beginning of Bonitasoft, six years ago. It was at first used exclusively by "Eclipse-knowledgeable" developers because of difficulties with maintainability and stability, and there were also limitations on which components could be tested. Building on our increased experience and the evolution of SWTBot, it is now used by developers and the QA team in a smoother way.
Setting up a functional testing process is not an easy task. It involves several actors and usually requires specific tooling for the execution of scenarios. But functional tests are also the best representatives of end-user behaviour and concerns, and thus bring a confidence that no other kind of test can provide. With this real-world use case of SWTBot, one can gain insight into how to efficiently implement functional tests and make a big difference in quality.