About

N.Léveillé (b. 1977, France) programs real-time audiovisual and business software, manages projects, coaches programmers, plays and composes electronic music, and occasionally writes about his experiences.


nicolas @ uucidl

Use Frame Time Not Frame Rate When Working On Performance

May 21, 2016 by nicolas, tagged projects

Hi! I hear you're working on an interactive graphical application, and that you're doing your best to give it a smooth frame rate.

Try to ignore changes in frame rate when measuring or improving the performance of a component within the complete system. Instead, keep in mind your frame time budget (e.g. 16 ms for a 60 Hz target) and track the time your component takes within that budget.

To understand why, look at the relationship between your efforts and the measurements.

Since frame rates are inversely proportional to frame times, the same optimization (a delta in time, Δt) will produce different frame rate deltas depending on the frame time you start from (e.g. in different testing conditions, or as the system changes).

This relationship is not linear: rate_Hz = 1/time_s, and after a change of Δt, rate_Hz = 1/(time_s + Δt), which makes it difficult to reason with frame rates.

[Figure: frame rate delta produced by a fixed Δt = 4 ms optimization, plotted for starting frame times up to 80 ms]

In order to deliver a smooth experience, what matters is whether your complete system meets your per-frame budget, such as 16 ms for a 60 Hz target.

That budget is best monitored with deltas in time because, for given workloads, the deltas can be added to predict the overall performance of the system.
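
As a minimal sketch of what this looks like in practice (browser JavaScript; updateParticles is a hypothetical component standing in for one of yours), measure the component in milliseconds and compare against the budget:

var FRAME_BUDGET_MS = 1000 / 60; // about 16.7 ms for a 60 Hz target

function frame(now) {
    var start = performance.now();
    updateParticles(now); // hypothetical component under measurement
    var elapsedMs = performance.now() - start;
    // report time spent, not a frame rate
    console.log("updateParticles: " + elapsedMs.toFixed(2) + " ms (" +
        (100 * elapsedMs / FRAME_BUDGET_MS).toFixed(0) + "% of budget)");
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);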

Some examples

For the same gain of 1 ms in the frame time budget:

If the system was spending 15 ms of rendering time per frame, an improvement of 1 ms would increase the frame rate by about 5 frames per second, or 7%.

If however the system was spending about 6 ms in total, the same optimization effort would come out as an improvement of about 33 frames per second, or 20%.
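
The arithmetic behind these numbers, as a quick sketch:

function rateHz(timeMs) { return 1000 / timeMs; }

rateHz(14) - rateHz(15); // ≈ 4.8 fps gained by going from 15 ms to 14 ms
rateHz(5) - rateHz(6);   // ≈ 33.3 fps gained by going from 6 ms to 5 ms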

Thanks

Thanks to Gregor Klinke for reviewing a draft of this text.

Glossary

  • Frame: a coherent drawing (or render) on the screen, representing an instant in an animation or simulation.
  • Frame rate: frames per second (FPS).
  • Frame time: time per frame (TPF).

Anti-Pattern: Blobs of test data

April 29, 2015 by nicolas, tagged testing and programming, filed under projects

During the development of automated tests, test data is sometimes represented as blobs, stored in central repositories. They are often shared across automated tests and help set them up. The repositories can take the form of code (constructing a complete tree of objects), files or even relational databases.

A shared repository of test data is often introduced because creating and setting up test data is difficult or costly, both at development time and at execution time. Some reasons:

  1. the domain objects and their collaborators are hard to construct, fake or wire in a test,
  2. the domain is itself very complex and test writers have to master many aspects of the domain to create the correct test data at runtime,
  3. the creation of these objects takes time to execute.

Check whether those reasons really apply to your software project. Is 2) inherent to your domain? Can 1) and 3) be remedied? Are they perhaps themselves a result of applying this anti-pattern?

Personal experience

I have worked on a project that had shared data in the form of a centralized database against which every unit and acceptance test suite would be run.

The database had been created at a certain date, then updated over time, sometimes by hand (handwritten SQL), sometimes by code, as well as by standard database schema migrations.

When a test failed, it was either because the behavior of the code had changed (intentionally or not) or because the test data had not been migrated. Finding out the nature of the test data was also difficult. Had the author written a test against object O because that object had a precise, intended set up, or because it happened to have some property the author needed? Those aspects were almost never documented. In effect, a test would often not document what it was constructed against.

The test data also only ever grew, because modifying existing items meant risking breaking tests you had no idea how to fix.

Why is a blob of test data a unit-test anti-pattern?

A good unit test is fast, precise, readable and isolated. It brings confidence in the working state of the system under test.

Tests become hard to read, imprecise and poorly isolated

Unit tests written against a blob of test data tend to be hard to read, poorly isolated and imprecise.

When a unit test refers to the entire blob, or even part of it, it potentially depends on the entire tree rather than isolating only part of the system.

When a test cherry-picks one particular item of the test data blob, the precise setup the test relies on is barely described. One must read the data to find out what the test is actually doing.

When creating a new test, it is very tempting to just look around and reuse a piece of data that someone else has written. This becomes a liability if that item is later modified, and it couples the two tests implicitly (i.e. their failures become correlated).

It also means the test in question can never really state what its starting state is.

And if one does cherry-pick the correct data within the blob, then in practice each test gets its own test data inside the blob, which means the blob grows with the number of tests and never shrinks.

Tests become hard to trust

Unit tests written against a blob of test data also tend to be harder to trust.

In the long run, as the application changes, so must the test data. When the test data is not correctly versioned or updated, it becomes difficult to trust. Code-generated data is superior in this respect, because it can at least be made to use the basic operations of the data model, leading to well-formed test data; in practice, however, the blob always ends up a mixture of static and generated data.

Tests are still slow

Finally, regarding performance: although these blobs are often introduced to solve performance issues with test setup, if the test data is mutable then every modification to the blob must be rolled back to keep each test isolated. This can undermine the expected performance benefits of the shared data.

It goes further: when the test data repository is a shared resource such as a database, it becomes a bottleneck under heavy parallel testing, making the unit test suite run slowly.

Why is a blob of test data an acceptance-test anti-pattern as well?

While a unit test tests a system, an acceptance test tests a product.

A good acceptance test embodies the specification of the product in user terms.

When written against a blob of test data, an acceptance test becomes poorly specified. It starts depending on implicit properties of the test data.

Suggestions & Example

Write tests which directly construct their own starting state.

Unit-Test Example: specification-based setup

A concrete alternative is to structure your unit test in this way:

  • a setup phase that constructs the objects from a concise specification (a compressed version of your test data),
  • a test phase which operates on the resulting domain objects and verifies the test’s expectations,
  • an unwind phase where the domain objects are destroyed.

An example in JavaScript:

function test_thatNotesCanBeDeletedWithADoubleClick() {
    withMidiEditorOnNotes(
        // specification for this test's data:
        [
            { midiPitch: 64, startTime: 7.0 },
        ],
        function (midiEditor, midiNotes) {
            doubleClick(midiEditor, timeToX(7.0), midiToY(64));
            verify(midiNotes.isEmpty());
        }
    );
}
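
The withMidiEditorOnNotes helper is left out above; as a hypothetical sketch (makeNoteCollection, makeMidiEditor and dispose are assumed names, not an actual API), its setup/test/unwind shape could look like this:

function withMidiEditorOnNotes(noteSpecs, testBody) {
    // setup phase: construct the domain objects from the concise specification
    var midiNotes = makeNoteCollection(noteSpecs);
    var midiEditor = makeMidiEditor(midiNotes);
    try {
        // test phase: operate on the freshly constructed objects
        testBody(midiEditor, midiNotes);
    } finally {
        // unwind phase: destroy the domain objects
        midiEditor.dispose();
        midiNotes.dispose();
    }
}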

Commentary on suggestion

For unit tests this means constructing the smallest set of domain objects necessary for the system under test.

For acceptance tests this means dedicated setup code that moves the product into a desired state via domain object manipulation. It is acceptable here to use dedicated shortcuts (using model operations) to bring the product into this state efficiently.

All in all, creating well-formed domain objects should not be an afterthought. Types with good specifications and defaults that produce well-formed values allow the creation of domain object values which tests can use directly.

This translates into domain objects that can be created anywhere (in C++: on the stack or on the heap), objects that can live standalone without being part of a complex network of other objects; in other words, properties of a modular code base.

A proposal for tracking the health of a code base

September 13, 2014 by nicolas, tagged management and programming, filed under projects

Code as liability, features as asset

For a peer-reviewed software development project (ideally a module or sub-module), we introduce a dashboard to track its health.

The dashboard is regularly compiled and updated and includes:

A balance listing:

  • “mass of code” as liability [EWD.1]
  • “user features” as asset

An indicator:

  • “feature density”: the ratio of “user features” per unit of “mass of code”

It must be applied to peer-reviewed projects, where the review process exists to guarantee that code is, and will remain, easy to understand by all peers.

Of course, only features which have been validated and tested in the software can be included in the dashboard.

Motivation

This metric encourages reducing the “mass of code” and/or producing a fine-grained list of “user features”, as both raise the feature density. It acts as both a trigger and a reward for the removal of cruft.

For a given module with a defined business scope, reducing the mass of code encourages finding simpler, more factored expressions of the user features in code, writing more compact documentation, and factoring out into other modules/products what is not directly linked to the domain.

For the same module, producing fine-grained lists of user features encourages understanding of its scope, and can help break down development into smaller deliverable units.

Application

The metric is not intended for comparisons between software projects.

It is meant to be used by the developers themselves (software engineers, designers, documenters) to detect when and where they should direct their efforts. [EWD.2]

Tracking the derivative of the metric (its variation over time) makes it easier to act upon, as with many other metrics.
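
As a small illustration (JavaScript, with made-up numbers; the snapshot format is an assumption, not a prescription), the indicator and its variation between two snapshots:

// hypothetical dashboard snapshots for one module
var snapshots = [
    { date: "2014-07", linesOfCode: 12000, userFeatures: 30 },
    { date: "2014-08", linesOfCode: 11000, userFeatures: 32 },
];

// "feature density": user features per unit of mass of code
// (here: features per thousand lines of code)
function density(s) { return s.userFeatures / (s.linesOfCode / 1000); }

density(snapshots[0]); // 2.50
density(snapshots[1]); // ≈ 2.91: cruft was removed, density went up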

Mass of code unit

“Mass of code” is deliberately vague; define it as you see fit. I would for instance include the code, its tests, as well as its documentation: all of these need to be maintained in the name of the delivered features.

If code is by default “peer reviewed”, then using lines of code is reasonable: the review process already provides a control on the readability of the code, and thus lines of code are themselves somewhat normalized.

User features unit

Inside a focused module, user features can be considered equivalent and simply counted.

References

[EWD.1]: Inspired by an E.W. Dijkstra quote:

From there it is only a small step to measuring “programmer productivity” in terms of “number of lines of code produced per month”. This is a very costly measuring unit because it encourages the writing of insipid code, but today I am less interested in how foolish a unit it is from even a pure business point of view. My point today is that, if we wish to count lines of code, we should not regard them as “lines produced” but as “lines spent”: the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger. — E.W. Dijkstra [EWD1036]

[EWD.2]: Simplicity is difficult:

Firstly, simplicity and elegance are unpopular because they require hard work and discipline to achieve and education to be appreciated. — E.W. Dijkstra [EWD1243a]

Acknowledgement

Thanks to Julien Kirch for his feedback.

Wireless Internet killed by bandwidth caps

September 12, 2014 by nicolas, tagged log and computers

After using it in Japan, I had a high opinion of LTE as a technology for delivering internet access: small modems you can forget about, a service you can potentially carry with you, and quite good speeds.

So when I saw that, here in Germany, Telekom was now advertising its LTE services, I was willing to try it, with the promise of increased bandwidth and a wire-free setup. The epilogue is that today we’re back to using DSL.

Testing Telekom LTE Call & Surf via Funk

We tested Telekom’s lowest LTE offering (35€) and moved away from it, because its bandwidth cap is impossible to live within, and because of the horribly slow speed you end up with once that cap has been reached.

Their lowest offering comes with a bandwidth cap of 10 GB.

Overall this option was more expensive than DSL and a great deal slower.

As it stands, this product is completely broken, unless (and I suspect this is the reason it exists at all) you have no other choice than wireless internet.

A 10 GB bandwidth cap is nowhere near sufficient for a couple with three computers in today’s internet/software ecosystem:

  • Auto-updates (downloads behind your back)
  • Advertisements (especially video ads)
  • Videos
  • Websites such as Facebook or Google maps

Even when deactivating or controlling all of this, it is difficult not to go over the cap. And when you do, which for us was after two weeks of use (roughly 700 MB per day for the household), you end up with slower-than-2G mobile data speeds.

Random things in May

June 1, 2014 by nicolas, tagged music, project and design, filed under commentary

0. I got married at the end of April

feat. on twitter

1. Tim Rogers & Bennett Foddy discussing sports

In a panel (YouTube video) at IndieCade East 2014, Tim Rogers and Bennett Foddy, two game designers, discuss and rank sports in terms of the quality of their game design.

2. The common swift made its way to Berlin on 05/07

A pretty amazing bird that eats, drinks, sleeps and mates in the air, and can go for years without ever touching the ground. We have a few nests next to our apartment. They’re really fast.

3. Patterns, the de facto Scrum standard

James Coplien on Scrum:

  • The history of scrum, patterns
  • Scrum is about responsibility, trust, self-management to improve efficiency
  • Why are you doing daily stand-ups? Retrospectives? The daily scrum?
  • How do you treat bugs? When should decisions be made?
  • Using patterns to wake up common-sense…
  • Gathering and analysing patterns in one’s organisation. Creating a dialog around it.
  • Going back to the roots
  • It’s all about becoming, not about being or about doing.

4. A new ElMobo (ex-Moby) release

In which he dusts off his Amiga and releases some old .mods.

Dusting off the Amiga on Bandcamp.

5. 4mat releases Nadir

an album which synthesizes a whole lot of influences:

Nadir on Bandcamp