Physical versus virtual
An important part of flow is the cycle time for design-build-test. To become more rapid and responsive, we can improve physically (e.g. design for agility, rapid prototyping, agile manufacturing, robot testing), virtually (e.g. model based, digital twin), or with a combination of both (e.g. augmented). Virtual building, integration and testing essentially turns hardware into software, enabling us to use many of the practices and tools from the software world, such as automated testing and BDD/TDD. Both physical and virtual testing have their limitations. In the physical world it is impossible, or economically unsound, to build and test every configuration for every imaginable circumstance and scenario. Virtual models, in turn, can only approximate the real world, and we need to invest in building and validating them first. However fast and cost effective we become at physical building and testing, in many cases virtual testing will be faster and more cost effective still. Recent developments like cloud based computing and out-of-the-box modelling tools are making virtual testing rapidly replace or enhance much of the physical testing. Virtual testing also opens up many new opportunities, such as: detailed time-lapsed analysis of virtual car crash test results; stakeholder review or collaboration even when not at the same physical location or time (e.g. sending an augmented review file with change notes); quick A/B tests for marketing purposes; manufacturability, serviceability or durability testing via digital twins; etc.
The decision to go physical or virtual (model based) depends on several aspects:
- How correct is the model? Is the structure clear and detailed enough? Is it supported by clear hypotheses and mathematical formulas?
- How valid is the model? Has it been validated by physical tests? Is the technology well known or are we in a very new area like nanometer transistors or unknown space?
- How accurate is the model? Depending on how close to the limit of a requirement we are testing, the model needs to be more or less accurate. Even if the model has flaws or inaccuracies, is it at least accurate enough for the area of interest? (A sketch of such a check follows this list.)
- How accurate is the data we are feeding the model with? Is it rich and accurate enough and does it cover all aspects or multiple scenarios?
- Is model based testing allowed? For compliance reasons it may not be allowed, or you may have to validate the model and get approval before it can be used; virtual crash testing is an example.
- Are there any trade-offs to consider? For instance, time criticality or budget may influence the decision. Sometimes it is better to have slightly inaccurate model based test results than very poor physical results. Both are risky, as we base design and other decisions on them, and everyone should be very aware of that.
- Is there another reason for making a physical test system? For instance, the system is part of a bigger system and is needed for higher level system testing anyway. Or the system will also be used as a demo to attract or convince potential buyers. Even then it is still valid to have a model, as model based testing is much faster, can be automated, allows testing multiple scenarios, etc.
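As a minimal illustration of the accuracy question above, the Python sketch below compares a placeholder model against a handful of physical reference tests and accepts the model only if its relative error stays within tolerance in the area of interest. All loads, measurements, names and the tolerance are invented for illustration.

```python
# Sketch: is the virtual model "accurate enough for the area of interest"?
# Compare model predictions against physical reference tests.

physical_tests = [  # (load in kN, measured deflection in mm) - invented data
    (10.0, 1.9), (20.0, 4.1), (30.0, 6.2),
]

def model_deflection(load_kn: float) -> float:
    """Placeholder for the virtual model (e.g. an FEM surrogate)."""
    return 0.205 * load_kn  # assumed linear response

TOLERANCE = 0.10  # accept up to 10% relative error in the area of interest

def model_is_accurate_enough() -> bool:
    for load, measured in physical_tests:
        predicted = model_deflection(load)
        rel_error = abs(predicted - measured) / measured
        if rel_error > TOLERANCE:
            print(f"load={load} kN: error {rel_error:.1%} exceeds tolerance")
            return False
    return True

if __name__ == "__main__":
    print("Model usable for virtual testing:", model_is_accurate_enough())
```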
For examples and details, please check out the articles on Virtual Testing and Physical Testing. For details on the economics of using models, check out the article on Model Based Engineering.
Integration and test strategy
The overall product flow strategy is the main determining factor for what to test and when. This is mainly based on value / effort, or better, cost of delay / duration. Often, however, the strategy is also influenced by contract obligations, internal politics, budget, etc. Splitting into objectives and goals (and features/stories) then determines the product flow and, with that, the testing flow. For instance, we can first build a basic version covering the 'happy flow', later extend it to more complex handling, and finally add exception handling such as machine failures. You can find more on this in Guide Flow and Manage Flow.
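As a minimal illustration of prioritizing by cost of delay / duration (Reinertsen's weighted shortest job first), the sketch below orders the example increments from the paragraph above; all figures are invented.

```python
# Sketch: order increments by cost of delay divided by duration (WSJF).

features = [
    {"name": "happy flow",       "cost_of_delay": 8, "duration_weeks": 2},
    {"name": "complex handling", "cost_of_delay": 5, "duration_weeks": 4},
    {"name": "machine failures", "cost_of_delay": 3, "duration_weeks": 3},
]

# Highest cost of delay per week of duration goes first.
for f in sorted(features,
                key=lambda f: f["cost_of_delay"] / f["duration_weeks"],
                reverse=True):
    print(f["name"], round(f["cost_of_delay"] / f["duration_weeks"], 2))
```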
A second influencing factor is the local economics of testing (including any building and integration). Ultimately, we would like to test each increment/iteration completely end-to-end with maximum reliability. But under the given circumstances this might not be the economically sensible thing to do. There should be a balance between holding cost (risk, learning, revenue, etc.) and transaction cost (materials, hours, etc.). The image, based on the work of Don Reinertsen, shows a U-curve with an optimum (green) area. For the given circumstances this is the economically sensible batch size for integration and testing. To become more rapid and responsive, we need to reduce the batch size by reducing the transaction costs. This can be done by, for example: virtual testing (model based); test automation (virtual or physical); using proxies like stubs, mock-ups and scaled models; integrating and testing risk-based (only the affected part, only to the level of detail needed); producing via an in-house workshop, rapid proto suppliers or a fast lane in the factory; etc.
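The U-curve logic can be sketched in a few lines. The cost figures below are invented, but they show how the sum of amortized transaction cost and growing holding cost produces an optimum batch size, and how lowering the transaction cost shifts that optimum toward smaller batches.

```python
# Sketch of the U-curve: total cost per item = amortized transaction cost
# (falls with batch size) + holding cost (grows with batch size).

TRANSACTION_COST = 400.0      # fixed cost per integration/test run - invented
HOLDING_COST_PER_ITEM = 10.0  # risk/delay cost per waiting item - invented

def cost_per_item(batch_size: int) -> float:
    transaction = TRANSACTION_COST / batch_size       # setup cost spread out
    holding = HOLDING_COST_PER_ITEM * batch_size / 2  # average wait grows
    return transaction + holding

best = min(range(1, 51), key=cost_per_item)
print("economic batch size:", best, "cost per item:", round(cost_per_item(best), 2))
# Reducing TRANSACTION_COST (automation, virtual testing, proxies)
# moves the optimum toward smaller, faster batches.
```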
For testing the modules that are being formed by the increments delivered during development, we can distinguish three strategies:
- Bottom Up – Each component at a lower level of the hierarchy is tested individually, and then the components that rely upon them are tested.
Advantages: easier fault finding (less complexity); no stubs needed.
Disadvantages: higher-level modules are tested late and drive the schedule; drivers are needed (to mimic the environment of a module/interface).
- Top Down – Testing takes place from top to bottom, using stubs for the lower components.
Advantages: no detailed design needed (stubs replace it); early testing of high-level, critical system parts.
Disadvantages: stubs are needed; the quality of stubs may be insufficient as details are unknown; detailed design is deferred (the devil may be in the detail).
- Hybrid (or Sandwich) – A combination of Top Down and Bottom Up.
Advantages: priorities are set risk-based; enables early delivery of vertical increments (end-to-end skeleton).
Disadvantages: test management complexity.
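As a minimal illustration of Top Down testing with a stub, the Python sketch below tests a hypothetical high-level Controller against a stubbed-out Motor module that does not exist yet; all class and method names are illustrative. A driver for Bottom Up testing would mirror this: a small test harness calling a finished low-level module instead.

```python
# Sketch: top-down test of a high-level module, with a stub standing in
# for a lower-level module that is not built yet.

import unittest
from unittest.mock import Mock

class Controller:
    """High-level module under test: decides when to drive the motor."""
    def __init__(self, motor):
        self.motor = motor

    def handle_temperature(self, celsius: float):
        if celsius > 80:
            self.motor.set_speed(0)       # overheating: stop the motor
        else:
            self.motor.set_speed(1000)

class TopDownTest(unittest.TestCase):
    def test_overheating_stops_motor(self):
        motor_stub = Mock()               # stub for the not-yet-built module
        Controller(motor_stub).handle_temperature(95)
        motor_stub.set_speed.assert_called_once_with(0)

if __name__ == "__main__":
    unittest.main()
```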
Release strategy and feature toggles
To respond and deliver value to our customers fast, we would like to push each small increment through into production. For example, when we find a small flaw in our car design, we would like to fix it fast and make sure the next car that rolls off the production line has the improvement. However, since any change introduces risk, we need a mechanism to decide whether or not to release something. This is also known as release on demand: you may develop continuously, but releasing is only done after approval. When quality is built in at a high level, including a fast recovery process, the act of releasing can be very fast and decentralized. Besides risk, there may be other reasons to defer a release. For example, you want to release at the start of a new season, make a market impact by releasing a combination of functions, wait for a higher level of performance, or respond to a competitor's move. For more on this, see Goal – Trade-offs and keeping the balance.
Some improvements you want to release fast (e.g. a safety issue), and some you want to release later. Some you want to release to all users, some just to a small selection (e.g. a pilot). There are different release strategies depending on the objective:
- Canary release – The feature, as part of an existing product, is built incrementally, and each increment (or just the risky ones) is released only to a subset of users (knowingly or not). This allows early detection of technical issues (the canary in the coal mine).
- Dark release – The feature, as part of an existing product, is complete and production ready, but before releasing it to all users, it is released only to a subset (knowingly or not). This decouples deployment from release and allows getting real user feedback, testing for bugs, and assessing performance.
- Soft release – The new product or major new feature is released but kept quiet, except for a small set of users. This is especially useful for innovative new products. It allows validating with real end users, getting feedback for improvements, learning how they interact with and use the product, discovering bugs you might have missed, and testing the readiness of your infrastructure.
- Beta release – Like a soft release, except the product is still in the late stages of development and not quite finished yet.
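As a minimal illustration of how a canary (or dark) release can route a small, stable subset of users to a new increment, consider the sketch below; the percentage, identifiers and variant names are illustrative assumptions.

```python
# Sketch: deterministic canary cohort selection. Hashing the user id
# keeps each user in the same cohort across sessions.

import hashlib

CANARY_PERCENT = 5  # release the new increment to 5% of users first

def in_canary(user_id: str) -> bool:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100 < CANARY_PERCENT

def get_variant(user_id: str) -> str:
    return "new-increment" if in_canary(user_id) else "stable"

print(get_variant("user-1234"))
```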
To control what is or isn't released, and for what or whom, you can use a combination of configuration files and feature toggles. Feature toggles are commonly used in software releasing. They are conditional statements in the software where configuration settings determine who may get/see/use a feature; in other words, a section of code is only activated if the conditions are met. For hardware these feature toggles are also very valuable. If we use model based engineering or have a digital twin prototype, we can of course build our feature toggles in there and run automated test cases that make use of the toggles. When things become physical, some rethinking is needed to make them applicable, as the following examples show (a code sketch follows the list):
- Suppose we want to test production of a small risky update to a component ourselves? We can put feature toggles in the CAD model and PDM system 'for customer X only'. If we then make ourselves 'Customer X', we can send out an order and let the automated process do the rest. If a digital twin of production is available, we would of course run it through that first.
- Suppose we want to test a new module or improved component with a certain customer (like a pilot). We can turn the feature on for this customer, let it run through production, install it and test it. It's like running a final pilot with a real customer; in this case a reliable and fast recovery process is highly recommended.
- Suppose we want to add sensors or holes to a component when it is applied in a test environment, but leave them out when it is sent to a customer. We could toggle the sensor or hole in the CAD model and use it in the configuration file.
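A minimal sketch of such a configuration-driven toggle, in the spirit of the 'for customer X only' and sensor-hole examples above; the file format, toggle names and customer identifiers are illustrative assumptions.

```python
# Sketch: a feature toggle backed by a configuration file. The toggle
# decides per customer whether a feature (e.g. extra sensor holes in the
# CAD export) is activated.

import json

CONFIG = json.loads("""
{
  "toggles": {
    "test_sensor_holes":  {"enabled_for": ["internal-test-rig"]},
    "improved_component": {"enabled_for": ["customer-x"]}
  }
}
""")

def is_enabled(toggle: str, customer: str) -> bool:
    entry = CONFIG["toggles"].get(toggle, {})
    return customer in entry.get("enabled_for", [])

# Only the internal test configuration gets the extra sensor holes;
# a normal customer order leaves them out.
print(is_enabled("test_sensor_holes", "internal-test-rig"))  # True
print(is_enabled("test_sensor_holes", "customer-y"))         # False
```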
Martin Fowler wrote a highly recommendable, extensive article on feature toggles and their dynamism and longevity: Feature Toggles [Fowler].