Elon Musk’s Engineering Principles – The 5 Step Process

There’s this interview with Elon Musk, in which he shows around the SpaceX rocket production facility in Texas. There’s a lot of talk about rockets and stuff, but in between he gives some fascinating insights into his design process and the principles he follows – the “5 Step Process”, as he calls it. I wanted to write it up for myself, so I thought I could share it here as well.

The following is a collection of what’s said during the interview, composed from the subtitles. I did slight adjustments for readability.



<quotes by="Elon Musk">

I have a rule that I try to implement rigorously, which is this sort of 5 Step Process.

1) Make your requirements less dumb

  • Your requirements are definitely dumb. It does not matter who gave them to you.
  • It’s particularly dangerous if a smart person gave you the requirements, because you might not question them enough.
  • Everyone’s wrong, no matter who you are, everyone’s wrong some of the time.

2) Try very hard to delete the part or process

  • The bias tends to be very strongly towards “let’s add this part or process step, in case we need it”.
  • If you’re not adding things back in 10% of the time, you’re clearly not deleting enough.
  • Whatever requirement or constraint you have, it must come with a name, not a department. Because you can’t ask the departments, you have to ask a person, and that person who’s putting forward the requirement or constraint must agree to it. They must take responsibility for that requirement. Otherwise you could have a requirement that basically an intern randomly came up with two years ago, and they’re not even at the company anymore, and actually no one in the department currently agrees with it.

3) Simplify or optimize

  • The reason it’s the third step is because it’s very common – possibly the most common error of a smart engineer – to optimize a thing that should not exist. Why would you do that? Everyone has been trained in high school and college that you gotta answer the question – convergent logic. You can’t tell a professor “your question is dumb”; you will get a bad grade. So everyone, without knowing it, has a mental straitjacket on: they’ll work on optimizing a thing that should simply not exist.
  • There’s another important principle: you really want everyone to be chief engineer. If everyone is chief engineer, it means people need to understand the system at a high level to know when they are making a bad optimization.

4) Accelerate cycle time

You’re moving too slow, go faster. But don’t go faster until you’ve worked on the other three things first. If you’re digging your grave, don’t dig it faster, stop digging your grave.

5) Automate

I have personally made the mistake of going backwards on all five steps multiple times. Literally, I automated, accelerated, simplified and then deleted. Automating was a mistake. Accelerating was a mistake. Optimizing was a mistake. We just deleted it and bypassed this $2 million robot cell, a complete pile of nonsense.

Bonus

I think, currently, the factory is underrated and design is overrated. People generally think that in this eureka moment you come up with the idea, and that’s it, now it’s good. But it’s literally a thousand percent, maybe 10,000% more work that goes into the manufacturing system than the thing itself. Basically, the amount of effort that goes into the design rounds down to zero relative to the amount of effort that goes into the manufacturing system.

And if this was not true, I’d be like “1000 Raptors [rocket engines] please. – Oh, you can’t make them? Oh, okay :(“

So this is just very fundamentally underappreciated. If people have not been in manufacturing, especially manufacturing of something that’s relatively new, then they don’t understand. They think the design is the hard part, and they think production is like a copier or something like that. This is completely false. I can’t emphasize enough: I’m trying to correct the misperception that design is the hard part. It is not the hard part.

</quotes>



PHPStorm Inspections for your Continuous Integration Process

Did you know that PHPStorm (or any other JetBrains IDE) can run inspections from the command line and generate XML files with the results? This is a great “hidden” feature of those IDEs, and machine-readable output means it can be integrated with a continuous integration (CI) process. So let’s do this!

Inspection Profile

The first thing you need is an inspection profile. I recommend creating one in your IDE by clicking together the inspections and error levels as you like, so you can see the results instantly annotated in the code editor. When you’re happy with it, save the inspection profile to the project and take the inspection configuration file from .idea/inspectionProfiles. Then commit it to your repository. It doesn’t matter where you locate it; you can also keep it under its original path if you want to share it with all the other developers – which is not a bad idea at all.

Setting Up PHPStorm on a Server

1) Download the Linux package of PHPStorm from the official website.

2) Unpack it to some folder on your server.

3) Edit bin/idea.properties as follows:

idea.config.path=${idea.home.path}/profile/config
idea.system.path=${idea.home.path}/profile/system
idea.plugins.path=${idea.home.path}/profile/plugins
idea.log.path=${idea.home.path}/profile/log

This step is optional. It hard-wires the profile directories relative to the PHPStorm folder, effectively making it a “portable” installation. Otherwise the profile is located in the current user’s home directory, which is a bit problematic if you want to run PHPStorm as different users. The portable setup also allows you to easily copy the folder between servers.

4) Edit bin/phpstorm64.vmoptions to increase the -Xms and -Xmx memory settings (optional).

5) Run bin/inspect.sh once. This initializes the profile folder and will fail because of a missing license.

6) Copy phpstorm.key into the profile/config folder. The key file must be created from an “Activation Code”, which can be retrieved from the JetBrains website by logging into your account and downloading the “Activation code for offline usage”. After entering the code in your desktop IDE, the file can be copied from the local profile folder to the server.

7) Run bin/inspect.sh again; the license error should be gone and the command line options will be listed.

Congratulations, now you have a headless PHPStorm on your server.

Plugins

You might want to install some plugins to make some false-positive inspection errors vanish. For example, the PHP Annotations plugin is useful to make the IDE understand use statements for annotations, instead of flagging them as “unnecessary”. Plugins can be downloaded from the JetBrains website and must be unpacked into the profile/plugins folder. Plugins located in that folder are automatically enabled, no additional configuration necessary.

If you want to disable bundled plugins, add a disabled_plugins.txt to the config folder. Ideally, disable the plugins in the desktop IDE and copy the content to the server.
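
The file format is simply one plugin ID per line. A minimal sketch with made-up IDs – copy the real IDs from the disabled_plugins.txt of your desktop installation:

com.example.bundled-plugin-one
com.example.bundled-plugin-two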

Running Inspections

Running inspections works as described on the JetBrains website. But there are a few things you should know.
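
For reference, the basic invocation takes the project path, the inspection profile and an output directory, plus options (all paths below are placeholders):

bin/inspect.sh /path/to/project /path/to/project/.idea/inspectionProfiles/Project_Default.xml /path/to/output -v2 -d /path/to/project/src

The -v2 flag raises the verbosity, -d limits the inspection to a subdirectory of the project.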

The .idea Folder Issue

You’re literally running the IDE, and therefore it does the same thing as the desktop IDE when opening a directory: it looks for an .idea folder.

If you have no .idea folder there, no problem, it will just take all of the code that’s present in the directory.

If you have an .idea folder there – even when it’s empty – it will think “oh, that’s an IDEA-type project, I know how to read that”. It will look for the modules and directory configuration (modules.xml and *.iml file). If they’re present, no problem. If they’re missing, well, it will ignore all code because your project technically doesn’t have any modules.

I’d recommend providing an .idea folder in any case (even when it’s not part of the repository – add it from somewhere before starting the inspections), because it helps PHPStorm understand the project better, and you can tell it to ignore unimportant stuff, which makes start-up and indexing faster. My recommendation:

  • Provide at least modules.xml and the *.iml file
  • Provide webResources.xml if you require any “Resource Root” paths
  • Provide php.xml to set the PHP language level
  • Provide misc.xml to set the JS language level

Some code examples to give you an idea of what these files look like:

modules.xml

<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
    <component name="ProjectModuleManager">
        <modules>
            <module fileurl="file://$PROJECT_DIR$/.idea/my-project.iml" filepath="$PROJECT_DIR$/.idea/my-project.iml" />
        </modules>
    </component>
</project>

my-project.iml

<?xml version="1.0" encoding="UTF-8"?>
<module type="WEB_MODULE" version="4">
  <component name="NewModuleRootManager">
    <content url="file://$MODULE_DIR$">
      <sourceFolder url="file://$MODULE_DIR$/src" isTestSource="false" packagePrefix="Foo\" />
      <sourceFolder url="file://$MODULE_DIR$/test" isTestSource="true" />
      <excludeFolder url="file://$MODULE_DIR$/build" />
      <excludePattern pattern="*.csv" />
    </content>
    <orderEntry type="inheritedJdk" />
    <orderEntry type="sourceFolder" forTests="false" />
  </component>
</module>

webResources.xml

<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="WebResourcesPaths">
    <contentEntries>
      <entry url="file://$PROJECT_DIR$">
        <entryData>
          <resourceRoots>
            <path value="file://$PROJECT_DIR$/src/js-modules" />
          </resourceRoots>
        </entryData>
      </entry>
    </contentEntries>
  </component>
</project>

php.xml

<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
    <component name="PhpProjectSharedConfiguration" php_language_level="7.2" />
</project>

misc.xml

<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
    <component name="JavaScriptSettings">
        <option name="languageLevel" value="JSX" />
    </component>
</project>

-d Option Limitations

The -d option can only be passed once (Upvote!). If you pass it multiple times, only the last one will be inspected. And you cannot target files with the -d option (Upvote!); it only takes directories.

To work around this limitation, JetBrains suggests using scopes. You need to have a scope defined, which is done via a file .idea/scopes/ScopeName.xml that looks as follows. The file patterns can be clicked together in the desktop IDE when managing scopes.

<component name="DependencyValidationManager">
  <scope name="ScopeName" pattern="here some file patterns" />
</component>

Unfortunately, there is no easy way to make it use a scope, so you have to do it the hard way – via JVM options. The best approach is to use the PHPSTORM_VM_OPTIONS environment variable to let PHPStorm read additional JVM options from a file. In this example, we create a file called build/inspect.vmoptions that contains:

-Didea.analyze.scope=ScopeName

Then, we can start inspect.sh like this:

PHPSTORM_VM_OPTIONS=build/inspect.vmoptions bin/inspect.sh

All of this could be generated before starting the inspections, so you have everything you need to run inspections on arbitrary lists of files.
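
Here’s a rough sketch of such a generation step, building a scope from the PHP files changed against master. The scope pattern syntax is an assumption on my side, so best verify it against a scope file saved by your desktop IDE:

#!/usr/bin/env bash
# Build a "file:a.php||file:b.php" union pattern from the list of changed files
PATTERN=$(git diff --name-only origin/master -- '*.php' | sed 's/^/file:/' | paste -sd '|' - | sed 's/|/||/g')

# Write the scope definition and the JVM options file
mkdir -p .idea/scopes build
cat > .idea/scopes/ChangedFiles.xml <<EOF
<component name="DependencyValidationManager">
  <scope name="ChangedFiles" pattern="${PATTERN}" />
</component>
EOF
echo "-Didea.analyze.scope=ChangedFiles" > build/inspect.vmoptions

# Run the inspections on that scope
PHPSTORM_VM_OPTIONS=build/inspect.vmoptions bin/inspect.sh "$PWD" .idea/inspectionProfiles/Project_Default.xml build/phpstorm -v2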

Stale Caches

When you’re switching branches a lot or have a lot of change in the project in general, PHPStorm might run into the issue of stale caches and no longer be able to run inspections properly. The fix is to delete the profile/system/caches and profile/system/index folders to force a re-index. Of course, running inspections will take more time then.
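
With the portable setup from above, that boils down to:

rm -rf profile/system/caches profile/system/index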

Single Process Limitation

The IDE allows a single process only, so you cannot run multiple inspections in parallel. Having multiple PHPStorm installations with the portable configuration (as shown above) can serve as a workaround: you can then have one process running from each installation.
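
For example, with two portable installations unpacked to phpstorm-a and phpstorm-b (hypothetical folder names), you could inspect two directories in parallel:

phpstorm-a/bin/inspect.sh "$PWD" .idea/inspectionProfiles/Project_Default.xml build/out-a -v2 -d "$PWD/src" &
phpstorm-b/bin/inspect.sh "$PWD" .idea/inspectionProfiles/Project_Default.xml build/out-b -v2 -d "$PWD/test" &
wait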

The Endless Loop

Sometimes the inspection process gets stuck in an endless loop, trying to inspect the same file over and over again, never finishing. I’ve seen this happen especially with large data files like CSV or XML. Fortunately, it is not much of a problem, since inspections are executed multi-threaded and only single threads get stuck while the remaining ones finish the job. I worked around it by killing the process after a certain time. XML output is written on the fly, so you only need to fix the XML files, which are missing the closing element. Excluding these problematic file types (as seen in the my-project.iml example) also helps.
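
A sketch of that workaround – assuming the result files use <problems> as their root element, which you should confirm against a completed run:

# Kill the inspection run after 30 minutes at the latest
timeout 30m bin/inspect.sh "$PWD" .idea/inspectionProfiles/Project_Default.xml build/phpstorm -v2 || true

# Append the closing element to result files that are missing it
for f in build/phpstorm/*.xml; do
  tail -c 32 "$f" | grep -q '</problems>' || echo '</problems>' >> "$f"
done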

Publishing Results

If you’re running Jenkins CI, the Warnings Next Generation Plugin is the way to go. It supports parsing IDEA inspection XML out of the box. One thing that I want to point out here: you can publish multiple reports from the same inspection results and apply certain filters. Here’s what I have in the Jenkinsfile to publish a report for PHP files and another one for JavaScript files:

recordIssues enabledForFailure: true, tool: ideaInspection(pattern: 'build/phpstorm/*.xml', id: 'idea_php', name: 'PHP Inspections'), filters: [includeFile('.*\\.php')]
recordIssues enabledForFailure: true, tool: ideaInspection(pattern: 'build/phpstorm/*.xml', id: 'idea_js', name: 'JS Inspections'), filters: [includeFile('.*\\.js'), excludeFile('.*src/js-legacy/.*')]

If you’ve used the -d option, the file paths in the resulting XML files will be prefixed with file://. Because of this, the plugin cannot resolve the file paths and therefore does not link into the source code. I guess this is going to be fixed at some point, but until then, fix it yourself with sed before publishing:

sed -i -- "s/file:\/\///g" build/phpstorm/*.xml

If you cannot use that plugin for some reason, or you want to do some more filtering, you might want to consider scheb/idea-inspections-checkstyle-converter, which mainly converts the IDEA XML format to the more common Checkstyle format, but also comes with some additional filtering options.

Summary

That’s all you need to integrate PHPStorm inspections into the CI process. Obviously, some things are not as easy as they should be, but if you know about the pitfalls (which you do now, after reading this post), it’s relatively straightforward. I hope this post helps you integrate inspections into your development process, and that you gain some increased code quality from it over time.

Projections for PHPUnit Coverage Report

Recently at my company we’re pushing for more automated testing, and one of the metrics we’re looking at is, of course, code coverage. Although I’m not a big fan of code coverage as a metric, it at least gives you a general idea of how well you’re doing. If you’ve worked with PHPUnit before, you have definitely generated a code coverage report for a project. These reports are great, because you can easily spot the parts of your code missing test coverage and tackle them.

At my company, our project is large – like 10k+ PHP files large – and we have multiple teams working on different areas of that codebase. This makes it a bit tricky when it comes to code coverage: although it’s great to know how well we’re doing overall, as a team lead I would also like to know how well my team is doing. Besides that, we have different architectural layers, and the code coverage requirements are different for each.

Wouldn’t it be great to have a dedicated report for each team or each layer?

The most obvious solution to this problem would be to have different phpunit.xml configs with code coverage whitelist/exclude rules for the files you’re interested in. Yes, that would work, but it is not very efficient to run the tests X times to generate each of those reports and to maintain all these config files.

I thought there must be a better way to do this …

At our company we have automated code attribution, which means that each file in the project can be attributed to a team. In addition, we follow some conventions that require code from specific architectural layers to be located under specific paths. This gives us a handy set of file path patterns which can be mapped to a team or architectural layer. Great, so we have a list of file path patterns that we’re interested in.

The next thing we need is a code coverage report. Not a normal one, but something that is machine-readable. For this use case, PHPUnit supports a PHP report, which is a dump of all the coverage data collected. You can get it by adding --coverage-php=coverage.php when executing PHPUnit.
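
For example, assuming a Composer-installed PHPUnit:

vendor/bin/phpunit --coverage-php=coverage.php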

Now we need to put this all together, and all it takes is a little script that reads this data, filters it with the file path patterns and generates a coverage report from it. Thankfully, PHPUnit is built in a very modular way, so we can do this. Here’s an example:

<?php
use SebastianBergmann\CodeCoverage\CodeCoverage;
use SebastianBergmann\CodeCoverage\Report\Html\Facade;

/** @var CodeCoverage $codeCoverage */
$codeCoverage = require 'coverage.php';

// Decide for each file whether it belongs in the projected report,
// e.g. by matching $filePath against your team's path patterns
$filterFunction = function (string $filePath): bool {
    return true;
};

// Apply the filter to the whitelist and to the collected coverage data
$whiteListedFilesFiltered = array_filter($codeCoverage->filter()->getWhitelistedFiles(), $filterFunction, ARRAY_FILTER_USE_KEY);
$dataFiltered = array_filter($codeCoverage->getData(), $filterFunction, ARRAY_FILTER_USE_KEY);

// Re-assemble a CodeCoverage object that contains only the filtered data
$coverageFiltered = new CodeCoverage;
$coverageFiltered->setData($dataFiltered);
$coverageFiltered->setTests($codeCoverage->getTests());
$coverageFiltered->filter()->setWhitelistedFiles($whiteListedFilesFiltered);

// Generate the HTML report for the projected code coverage
$targetDir = __DIR__ . '/coverage-report';
$writer = new Facade;
$writer->process($coverageFiltered, $targetDir);

Voilà, there we have our projected HTML code coverage report.

Since all the code coverage data is still in coverage.php, you can have as many projections as you want.

Introducing Tombstones for PHP

Earlier this year I took over a project at my new company. A project that had existed for many years and has been continuously growing. My first impression: it was missing some love recently. The repository was cluttered with many files that could be assumed to be dead code. Unfortunately, you never know. Although I felt the urgent need to remove stuff, I was able to keep myself from blindly deleting files and breaking everything ;). The mission was clear: clean up the project without breaking things.

Read more

Git Action Icons for PHPStorm

Today I wanted to have buttons for some Git features in my PHPStorm toolbar. Unfortunately, it doesn’t have icons for all the essential Git features, which results in empty buttons. That doesn’t look very nice. So I created some icons on my own. Use them if you like.

Update July 2016: More, cleaner icons and 2x variants available.

Update March 2019: Updated the fetch icon to match the current style.

Lessons learned at my former company – Part I

It was the 13th of November when I made the decision to leave my company. I can remember that date because, according to my browser history, it was the day I chose the song for my farewell mail (we had the tradition that everyone who’s leaving links to a song in their last email). Some days passed until I finally realized that this was the moment I switched my mindset from “keep going” to “I’m leaving”. After three tremendous years, I would quit my job as a lead developer and reach out for something new. Probably one of the hardest decisions I ever had to make.

Since then, I have often been thinking about what happened in those years and what I’ve personally learned from it. So I’m writing this post mostly for myself, to recap, but maybe a former colleague or someone else will find it useful. Maybe you’ve made similar experiences. So, this is about my lessons learned – as a member of a startup, as a developer, as a team lead. Let’s start with number one.

1. Know Your Limits

Let me tell you a story – a “war story” – that took place in the first months of the company, when everyone had that pioneer spirit. For me, it was the most intense time at that company. The time when some individuals who hadn’t worked together before became a team.

It was early 2012, when we had just licensed our first game and announced it to the press. The date for the closed beta release had already been chosen (for the non-gamers: closed beta is when a game is released to the public, but only for a limited number of users). The plan was tough, but it was possible. Unfortunately, we got more and more into trouble when some of our partners struggled with delays. The game developer needed more time to deliver the server software, and we had similar problems with other service providers. We had to wait until we received something to work with. Only four weeks were left to prepare everything for launch – and nothing was ready. No game servers with the game running, no website, no account management. Everything needed to be built within those four weeks.

At that time, we were only two people in the tech department: me, responsible for development, and my colleague, responsible for IT. So we did what everyone working in tech does in such a situation: crunch time. It wasn’t a problem for us. We were full of energy; everyone was euphoric about our first release. Usually we started working at 10 a.m. like everyone else. All the other colleagues left at around 7 p.m., and then the best part of the day began. No one randomly popping in and asking for stuff – we were able to concentrate on our tasks, at last. It was the two of us and the CTO sitting together and pushing forward. The atmosphere was nicely startup-ish: we ordered food, had a few beers, and when everyone needed a break, we played a session of Minecraft and continued after an hour or so.

Our valley on the Minecraft Server

We were usually exhausted when everyone went home late at night. For the next weeks we continued like this, almost seven days a week. Going to work, working, food, working, food, working, going home, sleeping, going to work… Looking back, I have to admit to myself that it was totally insane. I literally had no life. Laundry piled up at home, the fridge was empty, but at least I learned a lot about Berlin’s night bus lines.

Picture shot on the way home, 16th of February 2012 at 2:30 a.m.

So why am I telling this? Because at that time I permanently exceeded my personal limit. I was so fueled by the challenge of making it happen that I didn’t care about myself. And as usual, if you drive above the limit for too long, something will go wrong. In my case, I was totally wasted from the previous weeks, suddenly all that pressure fell off on launch day, and I made a decision I’d better have thought about twice.

Although it was kind of harmless, many months later it made me realize that exceeding your personal limit is a real problem. It may be OK from time to time, but you should make sure not to exceed it for too long. Otherwise you will most certainly harm yourself. A former colleague of mine did not get off that lightly – he got a burnout and needed to start therapy. That’s certainly an experience nobody wants to live through.

Today’s working environments make it easy to reach your limit and go beyond it. Therefore it is more important than ever to be aware of your personal limit and – equally important – to be able to realize when you’re exceeding it. I’m not saying you have to avoid this under all circumstances; sometimes it is necessary to give 120%. But if you see yourself permanently running at 120%, something is wrong and you must not hesitate to change it. I can’t tell you what to do, because it strongly depends on your individual situation. For me, the solution was simply to keep an eye on myself and to force myself into some spare time away from the workplace, instead of doing extra hours just for fun when I had no plans for the evening.

Altogether: find a healthy balance between work and leisure time. Oh, that sounds so much like Generation Y 😉

To be continued…

Books: Clean Code

I want to start a new series of posts presenting some books – books that I can recommend to developers and people working in the field of software development.

Let’s start with a book that every software developer should have read: “Clean Code” by Robert C. “Uncle Bob” Martin.

“Clean Code” by Robert C. Martin

As the title might tell you, it is about techniques and rules that help you write high-quality code in such a manner that it becomes easy to understand and easy to maintain. The book has a chapter for each aspect of software – naming, formatting, structuring, error handling – to name some of them. After defining a set of dos and don’ts, the following two chapters demonstrate how to apply them by refactoring a piece of software step by step.

I consider myself as having a pretty good “code sense”, which means that I naturally know how to structure things and write good code. Therefore most of the book wasn’t such a surprise to me, but I discovered some aspects that I hadn’t consciously thought about before – I had just been doing them. The book helped me understand why it is a good idea to do certain things, instead of just doing them because they feel right.

Why should you read it: You’re creating software, especially together with other people in a team. You care about the quality of your work and you want to improve your sense for good code.

5 years later (2020-08-17): I’ve found this blog post, which takes a critical view of the recommendations from the book. I have to say, I agree with the arguments made in the blog post, but I still believe there’s some useful advice in the book that can help you improve your code. So my advice would be: read it, but be critical about it. You don’t have to follow everything to the letter to produce better code. If some rules don’t make sense to you, don’t follow them. The important thing to take away is: be critical about your own code. Imagine someone else reading it without the knowledge you have (right now), and write your code in a way that helps them understand your intention and what’s going on.

Composer/PHPUnit on Windows Shell

Are you tired of typing php [PATH]/composer.phar on the console all the time? Wouldn’t it be easier to just type composer and be done? Fortunately, this isn’t very hard to configure.

Save the composer.phar file to some folder, e.g. somewhere in your “Program Files” folder. I’ll assume it’s C:\Program Files\Composer in this little tutorial.

Add a BAT file named composer.bat to that folder, which contains:

@ECHO OFF
SET "COMPOSER_INSTALL_DIR=C:\Program Files\Composer"
@ECHO ON
php "%COMPOSER_INSTALL_DIR%\composer.phar" %*

Add C:\Program Files\Composer to the PATH setting of your operating system. If you don’t know how this works, just Google it.

Note: If PHP isn’t runnable on your console via a simple php, you should also add your PHP folder to the PATH setting. Otherwise you have to write the full path to php.exe in the BAT file.

Now you can run the Composer commands from any folder:

composer self-update  # (may only work if the console runs with admin privileges)
composer install
composer update
...

The same works for PHPUnit’s phar distribution.
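
For example, a phpunit.bat along the same lines, assuming you keep phpunit.phar in C:\Program Files\PHPUnit:

@ECHO OFF
SET "PHPUNIT_INSTALL_DIR=C:\Program Files\PHPUnit"
@ECHO ON
php "%PHPUNIT_INSTALL_DIR%\phpunit.phar" %*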

Some custom Capifony tasks

Here are some custom-made tasks for Capistrano which might be helpful in the Symfony2 context. First, some cache-clearing tasks that don’t clear the whole cache but only the translation or the Twig template cache.

namespace :foo do

  # Clear only translation cache
  task :clear_translation_cache do
      now = Time.now
      timestamp = now.strftime("%Y%m%d%H%M%S")
      run "[ -d #{latest_release}/app/cache/prod/translations ] && mv #{latest_release}/app/cache/prod/translations #{latest_release}/app/cache/prod/translations_#{timestamp} || echo 'no translation dir'"
  end

  # Clear only twig cache
  task :clear_twig_cache do
      now = Time.now
      timestamp = now.strftime("%Y%m%d%H%M%S")
      run "[ -d #{latest_release}/app/cache/prod/twig ] && mv #{latest_release}/app/cache/prod/twig #{latest_release}/app/cache/prod/twig_#{timestamp} || echo 'no twig dir'"
  end

end

The folder is actually not removed but renamed, so it doesn’t interfere with currently running processes. The folder is then automatically rebuilt by the framework.

Symfony2 has that great “assets version” feature, which adds a parameter to the URLs of all assets. By changing the value, you can make sure that everyone has to load the latest version from the server. If you want to update the assets version automatically on every deploy, you might use the following setup:

Add a file app/config/assets_version.yml to your project containing:

parameters:
    assets_version: AsSeTvErSiOn

Add that file to the imports section of your config.yml and use the parameter %assets_version% in the configuration.
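
A minimal sketch of how that could look – framework.templating.assets_version is where Symfony2 expects the setting, so adjust it to your own configuration:

# config.yml
imports:
    - { resource: assets_version.yml }

framework:
    templating:
        engines: ['twig']
        assets_version: '%assets_version%'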

Call that Capistrano task in your deployment process. It replaces the assets version with a value generated from the release name: the hexadecimal representation of the YYMMDDHHII timestamp.

namespace :foo do

  # Update asset version
  task :update_version, :roles => :web, :except => { :no_release => true } do
    capifony_pretty_print " --> Update assets version"
    file_path = "#{release_path}/app/config/assets_version.yml"
    assets_version = release_name[2, 10] # Extract YYMMDDHHII only
    assets_version = assets_version.to_i.to_s(16) # Convert to int, convert to hex
    capifony_pretty_print "     Assets version is #{assets_version}"
    run "echo 'parameters:\n    assets_version: #{assets_version}' > #{file_path}"
  end

end

PHPUnit: Remove non-deterministic dependencies

Unit testing is great! But sometimes there are situations where it can become really tough to write proper tests for your code. One of these situations is when your code doesn’t behave totally predictably, when it has some kind of intended randomness in it. Then you usually have one of those functions in your code:

  • tempnam(), uniqid()
  • rand(), mt_rand(), shuffle() or any other randomized function
  • time(), date(), new \DateTime() or anything that has to do with the current time

The list isn’t complete, I just wanted to mention the most common ones. Those functions are bad for testing, because tests have to be repeatable, and those functions will most likely produce a different result on every call.

So how do you get rid of the randomness and create predictable, and therefore repeatable, unit tests?
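
To give you an idea where this is going: the usual trick is to hide the non-deterministic call behind a small abstraction that can be swapped out in tests. A minimal sketch for time(), with class and method names made up for illustration:

<?php

// Production code asks this interface for the current time instead of calling time() directly
interface Clock
{
    public function now(): \DateTimeImmutable;
}

// Implementation used in production
class SystemClock implements Clock
{
    public function now(): \DateTimeImmutable
    {
        return new \DateTimeImmutable();
    }
}

// Implementation used in tests: always returns the same, predefined time
class FixedClock implements Clock
{
    /** @var \DateTimeImmutable */
    private $now;

    public function __construct(\DateTimeImmutable $now)
    {
        $this->now = $now;
    }

    public function now(): \DateTimeImmutable
    {
        return $this->now;
    }
}
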
Read more