Docker is a handy tool to set up multiple lightweight development environments or to virtualize your server.

Unfortunately, Docker has a steep learning curve and demands that you know some ops stuff, certainly when docker up doesn’t docker up but shows an obscure error message instead 🙂

To avoid these time-consuming setups, you can use the preconfigured Docker Compose libraries from DockerWest (on GitHub).

I’m not a Docker specialist, far from it, but these libraries always work for me and don’t require much configuration. In my eyes, they are the easiest way to get a PHP Docker environment up and running.

Each ‘setup’ consists of several Docker containers that communicate with each other: an nginx, a mailcatcher, a MySQL, a Redis and a PHP (application) container.

This manual applies to Ubuntu (and other Linux distributions).

DockerWest provides docker environments for:

  • Laravel 5
  • Symfony 3 + 4
  • Magento 2
  • Pimcore 4 + 5

Install DockerWest

Make sure you’ve installed docker and docker-compose before you continue with this tutorial.

1) Clone the repo you need, in this example: Symfony

git clone

2) Navigate to the newly created folder

cd compose-symfony

3) Rename the .env-sample file

mv .env-sample .env

4) Edit the .env file

vim .env

Configure DockerWest

Set the correct parameters in the .env file. Most of the options speak for themselves; I’ve listed the most important ones below.


Enter the user ID and group ID that the container must use. It’s easiest if they match your own user account.

To find your local UID you can run id -u and to find your local GID you can run id -g.


How do I know which PHP version to choose?
You can choose between several PHP versions.
For a local environment, it’s best to use the PHP version that runs on your server. If it’s just to toy around, use the most recent version.


These are the local domains you want to access the website from. These domains (plus their IPs) must be configured in /etc/hosts.
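For example, if your .env lists a domain like symfony.local (the domain name here is just an example), the matching /etc/hosts entry would look like this:

```
127.0.0.1    symfony.local
```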


The relative path to the wwwroot on your host machine.

5) Save the file and exit (escape + :wq + enter in Vim)

6) Run ./environment

The environment script starts your environment with an updated PATH.
This opens a tmux window (install tmux if you don’t have it on your system).

7) Say a little prayer and type:

run up

This will pull and install the Docker boxes. Once the output settles and displays something like this, you’re all set 🙂

application_1  | -  02/Jun/2019:14:10:59 +0000 "GET /ping" 200

Navigate in your browser to the host URL you set in the .env file.

‘Inside’ the docker container

If you want to ‘enter’ the docker box, you can do so using tmux. If you’re in the ./environment window,
type: ctrl + b, followed by c.
This opens a new ‘tab’ inside your tmux session.

To go back to the previous screen, type: ctrl + b, followed by 1 (the number of the tab).

Framework commands

Once you’re in the environment, you can use the commands from Laravel, Symfony or whatever you use.


If you want to access the database, just type mysql in the tmux console. It will automatically connect you to the Docker MySQL instance. You can also use mysqlimport (with a file on your host machine!).

Many-to-many fields work in Pimcore GraphQL (data-hub), though it might take you some time to figure out exactly how they work.

If you haven’t added data-hub, read this post first about data-hub.

Many-to-one field

Say we have an object Fridge and an object Brand. Object Fridge has a many-to-one field pointing to object Brand.

In order for this to work you have to add both object Fridge and object Brand in the Schema Definition.

Select the fields you want to be available in GraphQL for both objects (Fridge and Brand).
Also select the folders of the objects that should be available (in the Security Definition tab, under "objects"). They need to include the folders of both objects.

Click save.

Click "Open in Tab", this opens the editor ‘GraphiQL’ in a new browser tab.

Enter a variant of this query:

    {
        getFridge (id: 141) {
            brand {
                ... on object_Brand {
                    name
                }
            }
        }
    }
The Many To One field requires:

  • the field name as it’s defined in the Fridge class (brand in this example)
  • 3 dots (…) followed by the fixed word object_ followed by the class name of the linked object (... on object_Brand in this example)
  • the field names that you want to return from the brand.

Many-to-many fields

If you work with a many-to-many field, you work exactly as with many-to-one fields. The only difference is an extra line in the query:

  • the fixed word element

    {
        getFridge (id: 141) {
            brand {
                element {
                    ... on object_Brand {
                        name
                    }
                }
            }
        }
    }

I discovered that in some cases it also works without the "... on object_" line.

Possible errors:

Cannot query field "name" on type "object_Brand".

This happens when you didn’t add the object to the Schema Definition of the Datahub configuration.
Linked objects must also be defined there.

Error "type definition … not found"

This was a bug in "Advanced Many-To-Many Object Relation" types. It’s fixed in the dev-master branch of data-hub.

Execute the following command to fix this:

composer update pimcore/data-hub

Pimcore 5 offers a new package for GraphQL. That way you can make Pimcore 5 headless: you can access Pimcore data without visiting the website, and without using REST or SOAP. The package is called data-hub and is still in development, though it already works.

Add GraphQL (data-hub) package to Pimcore

  1. Add the "GraphQL" package to Pimcore with Composer. The package is called data-hub.

    composer require pimcore/data-hub:dev-master
  2. The package needs to be activated and installed. Go in Pimcore to Tools (tool icon) > Extensions

  3. Click on the green plus-button on the line "PimcoreDataHub" (column Enable/Disable).

  4. You’ll see a confirmation window. Click it away.

  5. Click on the install-icon on the same line.

  6. Refresh the browser.

Configure the data-hub

If all goes well, you should see the "Datahub config" option under the Settings menu (gear-icon). If not, ensure the package is properly installed (or try to reinstall it).

To add a GraphQL configuration, click the "Add Configuration" button and select "GraphQL".

Schema Definition

Click the Schema Definition tab.

First add the object-type you want to be queryable. In this example I use a custom object called "Fridge". (step 1)

To configure this object, click the gear icon (step 2).

You see the fields of this object in the left column. Drag and drop the fields you want to enable for the GraphQL query to the right column.

Security Definition

Click the Security Definition tab. (step 1)

Your API (= the GraphQL service) must be protected by an API key to prevent unauthorized access. The party (or application) that needs to query the data must have a copy of that key. (step 2)

Now that you’ve configured the object, you must also select the folder of the objects that must be queryable. Objects outside those folders won’t be found. Drag and drop a folder into "Objects". (step 3)

Click "save" (step 4).

Enter queries

Click "Open in tab" to start experimenting.

The query-language of GraphQL is explained on the official GraphQL website, so I won’t go into detail about that.

  {
      getFridge (id: 141) {
          name
          id
          description
      }
  }

You can enter this as a test; you should see the results after clicking the play button (arrow) in the top bar.

You say: "getFridge" with id 141; show the fields name, id and description.

You can only select the fields you’ve configured in the schema definition. So if you want more fields, you have to go back to the configuration window.

Access the API

You can access the API through the data-hub endpoint URL, in which you replace the domain, "fridge" (the name of the configuration) and "abc" (the API key) with your own values.

Then you send the complete query (with the braces) to that API, for instance with an ApolloClient (not covered in this blog post). You should get the appropriate response.
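As a sketch of what such a request boils down to, here is the JSON body a client would POST. The endpoint URL, configuration name ("fridge") and API key ("abc") are made-up placeholders; use the values from your own data-hub configuration:

```python
import json
from urllib.request import Request

# Hypothetical endpoint: adjust the domain, configuration name and API key.
endpoint = "https://example.com/pimcore-graphql-webservices/fridge?apikey=abc"

query = """
{
    getFridge (id: 141) {
        name
        id
        description
    }
}
"""

# A GraphQL request is a JSON body with the query under the "query" key.
body = json.dumps({"query": query}).encode()
request = Request(endpoint, data=body, headers={"Content-Type": "application/json"})
# urlopen(request) would perform the POST and return the JSON response.
```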

In order to understand OAuth2 authentication, you must become familiar with the terminology. Because terms like resource owner are rather vague at first, I’ll use a concert ticket analogy to explain them.

A quick overview:

These are the essential players in OAuth2 authentication. They are explained further in the article.

  • resource owner = you (mostly)
  • client = app/website
  • auth server = token creator
  • resource server = data provider (the server with the data you need, but the data is secured)

Using the concert ticket analogy

  1. You (= resource owner) buy a ticket (= token) from a vendor (= auth server).
  2. You tell the vendor which gig you want to see (= scope).
  3. The ticket (= token) is only valid at a certain time, and only for that particular concert.
  4. Next, you go to the concert venue (= resource server) and show your ticket (= token).
  5. Finally, the security guy (= resource server) checks whether your ticket is valid and lets you in to see the concert (= the data you want).
  • If you forge a ticket, or show up with a different ticket, you won’t get access (well, unless you are very inventive).
  • You keep the ticket in your wallet because that’s the safest place (= https); you don’t leave it in the middle of the street where a thief can see and steal it (= http).
  • In some cases the auth server and the resource server are the same server (like buying a ticket at the venue), but that’s not always the case.

Using computer world analogy

  • In the computer world, you’re on a website or app (= client) and the client needs some data from another server (perhaps it wants your Facebook profile data).
  • You log in with your username and password at the auth server (Facebook in this example) and in the background your client passes a scope, the application name, etc.
  • The client gets an authorization code and exchanges it for a token.
  • The token can contain encoded info like your name, session id and other data. It can also come with a refresh token.
  • The client then sends the token to the resource server (which in this example is also Facebook).
  • The resource server validates the token.
  • The client gets access to the data on the resource server.

There is no official method within OAuth2 for validating the token. Possible methods are a self-contained token like a JWT (JSON web token), or a call from the resource server to the authorization server to validate the token.

Refresh token, concert ticket analogy

In some cases the client also gets a refresh token. It is used to prolong the session.

Compare it with the stamp you get when you want to leave and re-enter a concert. People with a stamp can go in and out without having their original ticket checked.

  • It contains less information than the initial access token.
  • The refresh token can be exchanged at the auth server for a new access token (depends on the implementation)

Some more on refresh tokens

Refresh tokens:

  • don’t make the original access token valid again; they just request a new access token.
  • are only used when the access token is not valid anymore.
  • are valid longer than access tokens. How long exactly depends on the configuration of the authorization server, but it’s way longer than the access token; otherwise there would be no point in having refresh tokens.
  • are not connected to access tokens. They do not "refresh" a given access token.
  • are (initially) provided by OAuth providers alongside access tokens in certain circumstances that vary by provider. Typically you’ll be able to get a refresh token when using the Authorization Code Grant and requesting an "offline" scope.

Refresh tokens:

  • can expire, although their expiration time is usually much longer than access tokens.

  • can become invalid in other ways (for example if the user revokes your OAuth client app’s access). In this case all your refresh tokens and access tokens for that provider would be invalidated.

  • can’t be used for read/write access to a user’s information; only access tokens can. Refresh tokens are only used to get new access tokens, not to read data.

  • You do not provide an access token when using the refresh flow; you just send a valid refresh token and get a new access token back.

  • If your refresh token is invalid and you also don’t have a valid access token for a user, you must send them through an OAuth authorization flow again.
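As a sketch of the refresh flow described above, this is roughly what the token request looks like in the standard refresh-token grant. The endpoint, client credentials and token values are made-up placeholders:

```python
from urllib.parse import urlencode

def build_refresh_request(refresh_token, client_id, client_secret):
    """Build the form body for an OAuth2 refresh-token grant.

    Note: no access token is sent, only the refresh token."""
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    })

body = build_refresh_request("my-refresh-token", "my-app", "s3cret")
# POST this body to the auth server's token endpoint (e.g. https://auth.example.com/token)
# as application/x-www-form-urlencoded; the response contains a new access token.
print(body)
```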

JSON Web Tokens (JWT) are a means to send JSON objects between two parties. They can be signed with a key, so the receiver can verify the source.

A token consists of a concatenation of three base64url-encoded parts, separated by dots.

First part

The first part (up to the first dot) is a JSON header with info such as the algorithm used.

Second part

The second part is the actual data (the payload), also a base64url-encoded JSON string.

Registered claims

The payload may contain registered claims. The claim names are only 3 characters long; this was done to keep the token as small as possible.


  • iss = the issuer (=the sender)
  • exp = expiration time, the timestamp after which the token is no longer valid
  • aud = audience
  • iat = the timestamp of when the token was created
  • jti = unique identifier of the token
  • sid = session id

Third part

The third part is a mechanism to check whether the data was sent by the correct party (the signature).

It is computed with a key that is known by the sender and the receiver.

This is important: if someone else sends a token, or changes it, without knowing the key, the signature will not match, and so the recipient knows that fraud has occurred.

However, the signature does not protect the data from being read. Even without the third part, the payload can be decoded.


  • On the left side you see the JSON web token.
  • On the right side you see the three decoded parts.
  • Without the third part (the signature) you can still decode the message; you just can’t verify whether it was altered by a malicious party.
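The three parts described above can be sketched in a few lines of Python (HS256, with a made-up payload and key; a real application should use a vetted JWT library):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # base64url encoding without padding, as used by JWT
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, key: bytes) -> str:
    # Part 1: header, part 2: payload, part 3: HMAC-SHA256 signature over "header.payload"
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

def verify_jwt(token: str, key: bytes) -> bool:
    # Recompute the signature; a mismatch means the token was altered or signed with another key
    signing_input, _, signature = token.rpartition(".")
    expected = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), signature)

token = make_jwt({"iss": "example", "sid": "abc123"}, b"secret-key")
print(verify_jwt(token, b"secret-key"))  # True: signature matches
print(verify_jwt(token, b"wrong-key"))   # False: fraud detected

# Even without the key, anyone can decode the payload:
payload_part = token.split(".")[1]
padded = payload_part + "=" * (-len(payload_part) % 4)
decoded = json.loads(base64.urlsafe_b64decode(padded))
print(decoded["iss"])  # example
```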


To try it out yourself, check the official website of JSON web tokens.

If you’ve locked yourself out of your WordPress account and can’t figure out a way to log back in (not even with password recovery), you can get around it using the command line.

This will only work if you have SSH access and have installed WP-CLI. WP-CLI is such a handy tool (for instance, to automatically update your WordPress installations) that I would recommend it to everyone.

Once you have installed WP-CLI, enter this command in your WordPress project root folder.

wp user update myuser --user_pass=mypassword

Replace myuser and mypassword with your username and password.

Remember that it’s unsafe to enter passwords in plain text on the command line; they are traceable with the history command, for example. Once you’ve done this, log in to WordPress and change your password the normal way.

Testing with Lorem Ipsum texts is still a common practice for testing designs and code, certainly in the early stages.

Yet, Lorem Ipsum texts have the following downsides:

  • Latin has ñò àççénts or $þ€cial characters. Tests for non-English languages won’t be covered.
  • The text also contains no apostrophes. SQL injections won’t be covered in your tests.
  • Latin has fairly short words. Your design may look different if you inject a long (Dutch) word, such as “havenpolitiecommisariaat”.
  • Your typical Lorem Ipsum contains no formatting, no bold or colored texts. What if your customer pushes in the old <font> tag?

But most important of all: it’s not real data. It’s a lie you’re telling yourself.

Users will never type “Lorem ipsum dolor sit amet” into any field. Your final design won’t look like a Latin course. Therefore, my advice: use as much real data as possible.

The same goes for names. People are not called “John Doe” or “Foo Bar”, some are called “Frederiçus D’Ollande”. A complex name, or better, a list of real complex names, can give much faster insight in how good your code or design is.

I feel some developers expect their users to enter simple, predictable and correct data, while in reality users dump just about any garbage they can find into your controllers. Lorem Ipsum is not a good preparation for that.

If you don’t have any real customer data, then use a different text than Lorem Ipsum. Find some public domain ebooks, preferably with long words, accents and strange characters. Maybe something Icelandic would be good.

Another example of why real data, or having a sense of real data, is very important, even early on. I once worked with a customer who could add colors to a CMS; colors that would be attributes for products. On top of that, there needed to be a faceted search, so users could filter on those colors. You’d think they’d add 50 colors, 100 maybe, but certainly not more than 500. Well, they provided us the list with colors. It turned out to be more than 10,000! This has so many implications for the import, the data structure, and most importantly the faceted search. So, again: real data is important. For a designer, for a developer, and in the end, it matters most for the customer.

Hi there! After a long absence I’m rebooting this blog. All articles stay here, and I’m writing new ones.

Some updates

First of all, this website is now on https. Quite a shame this wasn’t the case yet. I was surprised what a walk in the park this was with the Certbot tool and Let’s Encrypt. I had reserved an hour or two to switch, but it was done in less than 10 minutes. You basically install the program, run a single command, answer some questions, and badaboum, your site is on https!

Adapting my posts’ content to https worked pretty well with the Search And Replace plug-in for WordPress.

Updating WordPress to the latest version works like a charm with WP-CLI. No hassle, no errors.

wp plugin update --all
wp core update

My cat, which had always been featured on this blog, sadly disappeared some years ago. One day it didn’t return home. Some months later, I got a new cat from the shelter, named Havana. A lazy, yet dominant and social cat. I’ll post a picture of her later.

The past years I worked mainly with Magento and Pimcore, and did some Raspberry Pi development in my free time. Expect more posts about those subjects.

I’m currently reading a -very- interesting book about working as a software developer: Soft Skills: The software developer’s life manual. It’s one of those books I wish I had read 10 years ago.

I also need a new picture for the top of the site. Boy, that’s a boring picture right now 🙂

That’s it for now.


If you use the debugger in PhpStorm, browser icons appear in the top right corner of the code editor. I find these distracting and unnecessary.

To remove them:

  • Go to menu File > Settings > Tools > Web Browsers
  • Uncheck “Show browser popup in the editor”
  • Click “Ok”
Published: December 19th, 2015 · Category: Firefox

*sigh* Firefox, it’s like every update I love you a little less. You used to be this technically advanced lightweight browser that showed Microsoft how it was done. But ever since Chrome got popular, you’ve just been running behind whatever the guys at Google are implementing.


So now Mozilla removed the option that restored the classic search bar. That “classic” search bar was one of the main reasons I liked Firefox, and this new thing is just a failure.

When you enter a search query, you can’t see which search engine is selected:


Am I searching through Google, YouTube, Wikipedia? I have no idea; it only displays the hourglass icon. In order to know which search engine is selected, I have to click on it.


Then I have to click on the icon of the search engine, which means I need to know which icon belongs to which site, because the name of the site is not displayed (except when I hover over it, but that causes an unnecessary delay in my workflow).

Though the new search bar is not as bad as Ubuntu’s Unity or Windows’ Metro, I can’t understand why software companies don’t simply keep what users like and improve what they complain about.


The only way to have the old search bar back is to install the Classic Theme Restorer extension. Yes, you have to install an extension to get basic functionality.

You must configure the extension in order for it to work. Go to the preferences (about:addons > “Preferences” button), click “General UI (1)” and check “Old search”.


If you want to keep everything else the way it was, uncheck all the checkboxes in all the tabs and set “Tabs (1)” to “Curved tabs (Firefox default)”.


Now the question is how long this extension will keep working, because every Firefox update means that some extensions stop working.

And now back to Chrome.

Copyright © 2012-2019 Robin Brackez. All rights reserved.