Projections for PHPUnit Coverage Report

Recently at my company we’ve been pushing for more automated testing, and one of the metrics we’re looking at is, of course, code coverage. Although I’m not a big fan of code coverage as a metric, it at least gives you a general idea of how well you’re doing. If you’ve worked with PHPUnit before, you have probably generated a code coverage report for a project. These reports are great, because you can easily spot the parts of your code missing test coverage and tackle them.

At my company, our project is large – like 10k+ PHP files large – and we have multiple teams working on different areas of that codebase. This makes things a bit tricky when it comes to code coverage: although it’s great to know how well we’re doing overall, as a team lead I’d also like to know how well my team is doing. Besides that, we have different architectural layers, and the code coverage requirements are different for each of them.

Wouldn’t it be great to have a dedicated report for each team or each layer?

The most obvious solution to this problem would be to have different phpunit.xml configs with code coverage whitelist/exclude rules for the files you’re interested in. Yes, that would work, but it is not very efficient to run the tests X times to generate each of those reports, and you’d have to maintain all these config files.

I thought there must be a better way to do this …

At our company we have automated code attribution, which means that each file in the project can be attributed to a team. In addition, we follow some conventions that require code from specific architectural layers to be located under specific paths. This gives us a handy set of file path patterns, which can be mapped to a team or architectural layer. Great, so we have a list of file path patterns that we’re interested in.
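As an illustration, such a mapping can be reduced to a list of path prefixes per team. The team names and paths below are made up, but the idea is the same:

```php
<?php

// Hypothetical mapping of teams to path prefixes (all names made up)
$teamPatterns = [
    'checkout' => ['src/Checkout/', 'src/Payment/'],
    'catalog'  => ['src/Catalog/'],
];

// Build a filter function that tells if a file path belongs to a team
function createTeamFilter(array $pathPrefixes): callable
{
    return function (string $filePath) use ($pathPrefixes): bool {
        foreach ($pathPrefixes as $prefix) {
            if (strpos($filePath, $prefix) === 0) {
                return true;
            }
        }

        return false;
    };
}

$isCheckoutFile = createTeamFilter($teamPatterns['checkout']);
var_dump($isCheckoutFile('src/Checkout/Cart.php'));   // bool(true)
var_dump($isCheckoutFile('src/Catalog/Product.php')); // bool(false)
```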

The next thing we need is a code coverage report. Not a normal one, but something machine-readable. For this use case PHPUnit supports a PHP report, which is a dump of all the coverage data collected. You can get it by adding --coverage-php=coverage.php when executing PHPUnit.

Now we need to put this all together, and all we need is a little script that takes this data, filters it with the file path patterns and generates a coverage report from it. Thankfully, PHPUnit is built in a very modular way, so we can do this. Here’s an example:

use SebastianBergmann\CodeCoverage\CodeCoverage;
use SebastianBergmann\CodeCoverage\Report\Html\Facade;

/** @var CodeCoverage $codeCoverage */
$codeCoverage = require 'coverage.php';

$filterFunction = function (string $filePath): bool {
    return true; // Here you need to make the decision if the file should be in the report or not
};

$whitelistedFilesFiltered = array_filter($codeCoverage->filter()->getWhitelistedFiles(), $filterFunction, ARRAY_FILTER_USE_KEY);
$dataFiltered = array_filter($codeCoverage->getData(), $filterFunction, ARRAY_FILTER_USE_KEY);

// Create a new CodeCoverage object containing only the filtered data
$coverageFiltered = new CodeCoverage;
$coverageFiltered->filter()->setWhitelistedFiles($whitelistedFilesFiltered);
$coverageFiltered->setData($dataFiltered);

// Generate the HTML report for the projected code coverage
$targetDir = 'coverage-report';
$writer = new Facade;
$writer->process($coverageFiltered, $targetDir);

Voilà, there we have our projected HTML code coverage report.

Since all the code coverage data is still in coverage.php, you can have as many projections as you want.
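Since each projection is just another array filter over the same data, you can generate them all in one run. Here is a sketch of the idea with made-up file names and patterns, leaving out the actual report writing:

```php
<?php

// Coverage data as dumped by PHPUnit: file path => line coverage info
// (file names are made up for illustration, coverage details omitted)
$coverageData = [
    'src/Checkout/Cart.php'   => [],
    'src/Catalog/Product.php' => [],
    'src/Catalog/Price.php'   => [],
];

// One filter function per projection
$projections = [
    'team-checkout' => function (string $filePath): bool {
        return strpos($filePath, 'src/Checkout/') === 0;
    },
    'team-catalog' => function (string $filePath): bool {
        return strpos($filePath, 'src/Catalog/') === 0;
    },
];

$fileCounts = [];
foreach ($projections as $name => $filterFunction) {
    $dataFiltered = array_filter($coverageData, $filterFunction, ARRAY_FILTER_USE_KEY);
    // ... here you would build a filtered CodeCoverage object and write
    // an HTML report per projection, as shown in the script above
    $fileCounts[$name] = count($dataFiltered);
}

print_r($fileCounts); // counts: team-checkout => 1, team-catalog => 2
```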

Introducing Tombstones for PHP

Earlier this year I took over a project at my new company – a project that had existed for many years and had been continuously growing. My first impression: it had been missing some love recently. The repository was cluttered with many files that could be assumed to be dead code. Unfortunately, you never know for sure. Although I felt the urgent need to remove stuff, I kept myself from blindly deleting files and breaking everything ;). The mission was clear: clean up the project without breaking things.

Read more

Git Action Icons for PHPStorm

Today I wanted to have buttons for some Git features in my PHPStorm toolbar. Unfortunately it doesn’t have icons for all the essential Git features, which results in empty buttons. That doesn’t look very nice. So I created some icons of my own. Use them if you like.

Update July 2016: More, cleaner icons and 2x variants available.

Update March 2019: Updated the fetch icon to match the current style.

Lessons learned at my former company – Part I

It was the 13th of November when I made the decision to leave my company. I can remember that date because, according to my browser history, it was the day when I chose the song for my farewell mail (we had the tradition that everyone who’s leaving links to a song in their last email). Some days passed until I finally realized that this was the moment when my mindset switched from “keep going” to “I’m leaving”. After three tremendous years, I would quit my job as a lead developer and reach out for something new. Probably one of the hardest decisions I ever had to make.

Since then, I have often been thinking about what happened in those years and what I’ve personally learned from it. So I’m writing this post mostly for myself, to recap, but maybe a former colleague or someone else will find it useful. Maybe you’ve made similar experiences. So, this is about my lessons learned – as a member of a startup, as a developer, as a team lead. Let’s start with number one.

1. Know Your Limits

Let me tell you a story, a “war story”, that took place in the first months of the company, when everyone had that pioneer spirit. For me it was the most intense time at that company – the time when some individuals, who hadn’t worked together before, became a team.

It was early 2012, and we had just licensed our first game and announced it to the press. The date for the closed beta release had already been chosen (for the non-gamers: closed beta is when a game is released to the public, but only for a limited number of users). The plan was tough, but it was possible. Unfortunately we got more and more into trouble when some of our partners struggled with delays. The game developer needed more time to deliver the server software, and we had similar problems with other service providers. We had to wait until we received something to work with. Only four weeks remained to prepare everything for launch – and nothing was ready. No game servers with the game running, no website, no account management. Everything needed to be built within those four weeks.

At that time we were only two people in the tech department: me, responsible for development, and my colleague, responsible for IT. So we did what everyone working in tech does in such a situation: crunch time. It wasn’t a problem for us. We were full of energy, everyone was euphoric about our first release. Usually we started working at 10 a.m. like everyone else. All the other colleagues left at around 7 p.m., and then the best part of the day began. No one randomly popping in and asking for stuff – we were able to concentrate on our tasks, at last. It was the two of us and the CTO sitting together and pushing forward. The atmosphere was nicely startup-ish: we ordered food, had a few beers, and when everyone needed a break, we played a session of Minecraft and continued after an hour or so.

Our valley on the Minecraft Server

We were usually exhausted late at night when everyone went home. For the next weeks we continued like this, almost seven days a week. Going to work, working, food, working, food, working, going home, sleeping, going to work… Looking back, I have to admit to myself that it was totally insane. I literally had no life. Laundry piled up at home, the fridge was empty, but at least I learned a lot about Berlin’s night bus lines.

Berlin at night, shot on the way home, 16th of February 2012 at 2:30 a.m.

So why am I telling this? Because at that time I permanently exceeded my personal limit. I was so fueled by the challenge of making it happen that I didn’t care about myself. And as usual, if you drive above the limit for too long, something will go wrong. In my case, I was so worn out by the previous weeks that when all the pressure suddenly fell off on launch day, I made a decision I should have thought twice about.

Although it was kind of harmless, many months later it made me realize that exceeding your personal limit is a real problem. It may be OK from time to time, but you should make sure not to exceed it for too long. Otherwise you will most certainly harm yourself. A former colleague of mine did not get off that lightly – he got a burnout and needed to start therapy. That’s certainly an experience nobody wants to live through.

Today’s working environments make it easy to reach your limit and go beyond it. Therefore it is more important than ever to be aware of your personal limit and – equally important – to be able to realize when you’re exceeding it. I’m not saying you have to avoid it under any circumstances. Sometimes it is necessary to give 120%. But if you see yourself permanently running at 120%, something is wrong and you must not hesitate to change it. I can’t tell you what to do, because it strongly depends on your individual situation. For me the solution simply was to keep an eye on myself and to force myself into some spare time away from the workplace, instead of doing extra hours just for fun when I had no plans for the evening.

Altogether: find a healthy balance between work and leisure time. Oh, that sounds so much like Generation Y 😉

To be continued…

Books: Clean Code

I want to start a new series of posts in which I present some books – books that I can recommend to developers and people working in the field of software development.

Let’s start with a book that every software developer should have read: “Clean Code” by Robert C. “Uncle Bob” Martin.

“Clean Code” by Robert C. Martin

As the title might tell you, it is about techniques and rules that help you write high-quality code in such a manner that it becomes easy to understand and easy to maintain. The book has a chapter for each aspect of software development – naming, formatting, structuring, error handling – to name some of them. After defining a set of dos and don’ts, the following two chapters demonstrate how to apply them by refactoring a piece of software step by step.

I consider myself having a pretty good “code sense”, which means I naturally know how to structure things and write good code. Therefore most of the book wasn’t such a surprise for me, but I discovered some aspects that I hadn’t consciously thought about before – I had just done it. The book helped me understand why it is a good idea to do certain things, instead of just doing them because they feel right.

Why should you read it: You’re creating software, especially together with other people in a team. You care about the quality of your work and you want to improve your sense for good code.

Composer/PHPUnit on Windows Shell

Are you tired of typing php [PATH]/composer.phar on the console all the time? Wouldn’t it be easier to just type composer and be done? Fortunately this isn’t very hard to configure.

Save the composer.phar file to some folder, e.g. somewhere in your “Program Files” folder. I assume it’s C:\Program Files\Composer in this little tutorial.

Add a BAT file named composer.bat to that folder, which contains:

SET "COMPOSER_INSTALL_DIR=C:\Program Files\Composer"
php "%COMPOSER_INSTALL_DIR%\composer.phar" %*

Add C:\Program Files\Composer to the PATH setting of your operating system. If you don’t know how that works, just Google it.

Note: If PHP isn’t runnable on your console via a simple php, you should also add your PHP folder to the PATH setting. Otherwise you have to write the full path to php.exe in the BAT file.

Now you can run the Composer commands from any folder:

composer self-update  # (may only work if the console runs with admin privileges)
composer install
composer update

The same works for PHPUnit’s phar distribution.

Some custom Capifony tasks

Here are some custom-made tasks for Capistrano which might be helpful in a Symfony2 context. First, some cache clearing tasks that don’t clear the whole cache but only the translations or the Twig template cache.

namespace :foo do

  # Clear only translation cache
  task :clear_translation_cache do
    now = Time.now
    timestamp = now.strftime("%Y%m%d%H%M%S")
    run "[ -d #{latest_release}/app/cache/prod/translations ] && mv #{latest_release}/app/cache/prod/translations #{latest_release}/app/cache/prod/translations_#{timestamp} || echo 'no translation dir'"
  end

  # Clear only twig cache
  task :clear_twig_cache do
    now = Time.now
    timestamp = now.strftime("%Y%m%d%H%M%S")
    run "[ -d #{latest_release}/app/cache/prod/twig ] && mv #{latest_release}/app/cache/prod/twig #{latest_release}/app/cache/prod/twig_#{timestamp} || echo 'no twig dir'"
  end

end

The folder is actually not removed but renamed, so it doesn’t interfere with currently running processes. The folder is then automatically rebuilt by the framework.

Symfony2 has that great “assets version” feature, which adds a parameter to the URL of all assets. By changing the value, you can make sure that everyone has to load the latest version from the server. If you want to update the assets version automatically on every deploy, you might use the following setup:

Add a file app/config/assets_version.yml to your project containing:

parameters:
    assets_version: AsSeTvErSiOn

Add that file to the imports section of your config.yml and use the parameter %assets_version% in the configuration.

Call the following Capistrano task in your deployment process. It replaces the assets version with a value generated from the release name: the hexadecimal representation of the YYMMDDHHMM part of the timestamp.
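Just to illustrate the conversion itself, here is the same thing in PHP, using a made-up release timestamp:

```php
<?php

// Capistrano release names are timestamps like YYYYMMDDHHMMSS
// (the value below is made up for illustration)
$releaseName = '20120216143000';

$assetsVersion = substr($releaseName, 2, 10);  // extract YYMMDDHHMM: "1202161430"
$assetsVersion = dechex((int) $assetsVersion); // hexadecimal representation

echo $assetsVersion . "\n";
```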

namespace :foo do

  # Update asset version
  task :update_version, :roles => :web, :except => { :no_release => true } do
    capifony_pretty_print " --> Update assets version"
    file_path = "#{release_path}/app/config/assets_version.yml"
    assets_version = release_name[2, 10] # Extract YYMMDDHHMM only
    assets_version = assets_version.to_i.to_s(16) # Convert to int, then to hex
    capifony_pretty_print "     Assets version is #{assets_version}"
    run "echo 'parameters:\n    assets_version: #{assets_version}' > #{file_path}"
  end

end

PHPUnit: Remove non-deterministic dependencies

Unit testing is great! But sometimes there are situations where it can become really tough to write proper tests for your code. One of these situations is when your code doesn’t work totally predictably – when it has some kind of intended randomness in it. Then you usually have one of these functions in your code:

  • tempnam(), uniqid()
  • rand(), mt_rand(), shuffle() or any other randomized function
  • time(), date(), new \DateTime() or anything that has to do with the current time

The list isn’t complete, I just wanted to mention the most common ones. Those functions are bad for testing, because tests have to be repeatable, and these will most likely produce a different result on every call.

So how do we get rid of the randomness and create predictable and therefore repeatable unit tests?
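One common approach – sketched here with made-up class names, not necessarily the exact solution from the full article – is to hide the non-deterministic call behind a small abstraction that can be replaced by a fixed implementation in tests:

```php
<?php

// Wrap the non-deterministic call (here: the current time) in a small
// abstraction. All class names are made up for illustration.
interface Clock
{
    public function now(): \DateTimeImmutable;
}

// Production implementation: returns the real current time
class SystemClock implements Clock
{
    public function now(): \DateTimeImmutable
    {
        return new \DateTimeImmutable();
    }
}

// Test implementation: always returns the same, predictable time
class FixedClock implements Clock
{
    private $now;

    public function __construct(\DateTimeImmutable $now)
    {
        $this->now = $now;
    }

    public function now(): \DateTimeImmutable
    {
        return $this->now;
    }
}

// Code under test receives the clock instead of calling time() directly
function isWeekend(Clock $clock): bool
{
    return in_array($clock->now()->format('N'), ['6', '7'], true);
}

// In a unit test, inject a fixed date to get a repeatable result
$clock = new FixedClock(new \DateTimeImmutable('2014-06-01')); // a Sunday
var_dump(isWeekend($clock)); // bool(true)
```

The same pattern works for random numbers or unique IDs: put the call behind an interface and inject a deterministic implementation in your tests.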
Read more

PHPUnit: contains vs. stringContains

Note to myself: If I ever see that error again when running PHPUnit tests

Invalid argument supplied for foreach()

phar://C:/Program Files (x86)/PHPUnit/phpunit.phar/PHPUnit/Framework/Constraint/TraversableContains.php:110
phar://C:/Program Files (x86)/PHPUnit/phpunit.phar/PHPUnit/Framework/Constraint.php:82
phar://C:/Program Files (x86)/PHPUnit/phpunit.phar/PHPUnit/Framework/Constraint/And.php:113

please remember that $this->contains() is not the same as $this->stringContains(). The first one is a constraint for arrays, the second one is for strings.

Welcome to the Blog

Hello everybody!

My name is Christian, I’m a developer from Berlin, Germany. Primarily I’m programming in PHP, but I’m open to any kind of programming language or web technology as long as it can do the job. When creating PHP applications I prefer using the Symfony2 framework, which is why most of this blog will be related to it and the so-called “Symfony ecosystem”.

This blog will be about my experiences with new technologies and the lessons learned, so people like you can benefit from them. I guess it will be updated on a non-regular basis – it just happens when I’ve found a new technology or had an experience worth sharing. I already have some topics scheduled, but I have to write them first 😉