This blog post is a republication of how I described the book last autumn in the Indiegogo crowdfunding campaign; it gives some depth to the book. The book tells the growth story of two 10-year-old children, Laura and Tom, as they develop from kids into knight apprentices. They learn about dragons and the knights who fight them, and get into real action themselves, helped along by male and female knights and a wise old sage. The stories take place in a setting resembling medieval Europe. As these fantasy stories unfold, I explain the worlds of software development and testing to readers through analogies and parallels. Each chapter introduces another type of dragon, which represents a software defect or bug in the real world. Knights represent

Our value. As testers, is it our fault that we are underrated in organisations? Have we been underselling ourselves for years? From speaking to numerous testers at conferences, and to testers I have worked with directly over the years, I have noted that we are not naturally inclined to sell ourselves within our own organisations. Is this why the jobs industry tells us that our role can be fully automated, or that we must work in jobs where our primary function is to automate? I have absolutely no doubt that, in my role as a tester, I have saved large sums of money for the companies I have worked for. I have highlighted assumptions, ambiguities and grey areas in

People think AI makes unbiased decisions. People make biased decisions because of their experiences and their beliefs, and there is a common conviction that AI, by contrast, makes unbiased decisions. In reality, the kind of narrow artificial intelligence that exists today is far from unbiased. There are many evident examples of bias in systems with machine learning capabilities: racist Twitter bots, recruitment systems that select only male applicants, and systems that predict defendants of colour have a higher risk of recidivism than they actually do. We wonder how many biased systems there are that we haven't discovered yet… Good reads on this topic are Richard Fall's articles 'When AI goes bad' and 'Algorithms and bias in the criminal justice system', and the great

The Scaled Agile Framework for Lean Enterprises (SAFe) is becoming the most popular framework for helping large programs and entire companies achieve business agility. It builds on well-known agile and lean principles and methodologies, combining them to address challenges not only at the team level, but also at the program, large-solution and portfolio levels. Although the framework is described in more detail than other comparable frameworks, it says little about how testing and quality practices fit in. This raises new challenges for testers, QA and test managers, test architects, test specialists and people in similar roles, together with the entire organisation. In this eBook, Derk-Jan de Grood and Mette Bruhn-Pedersen describe what guidance SAFe actually provides and suggest additional ways testers can