• Published: September 18th, 2015
  • Category: Laravel 4

If you want to match an entire word in a route pattern in Laravel 4, use the snippet below.

Route::pattern('my_word_pattern', '^myword$');
Route::get('{my_word_pattern}/...
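For context, here is a minimal sketch of how such a pattern might be used in full; the route and closure are illustrative, not from the original post:

// Constrain {my_word_pattern} to the exact word "myword".
Route::pattern('my_word_pattern', '^myword$');

// Only matches the exact word, per the pattern above.
Route::get('{my_word_pattern}', function($word)
{
    return 'Matched: ' . $word;
});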

I spent a crazy amount of time figuring out how to validate form arrays in Laravel. There is some official documentation, but like most of Laravel's official documentation, it only covers the bare minimum of what you need to know. This is an advanced article on how to validate form arrays in Laravel.

I have a form where people can enter 3 IBAN and BIC numbers (these identify EU bank accounts). That makes 3 pairs of textboxes:

  • iban[]
  • bic[]
  • iban[]
  • bic[]
  • iban[]
  • bic[]

… and other form elements …

My desired form validation rules:

  • A maximum of 3 IBAN & BIC pairs can be submitted.
  • The user is not obligated to fill in any of the IBAN & BIC numbers.
  • When an IBAN is filled in, the user also needs to fill in the BIC.
  • The IBAN and BIC can only contain alphanumerics and spaces.

Out of the box, Laravel can validate form arrays with the dot character.

The next form rule will work out of the box:

'iban.0' => 'required'

In your views, you can check for the error:

$errors->has('iban.0')
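In a Blade template that could look something like this (a sketch; the markup is illustrative):

@if ($errors->has('iban.0'))
    <span class="error">{{ $errors->first('iban.0') }}</span>
@endif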

The same goes for the second iban:

'iban.1' => 'required'

But I don’t need that in my setup. The fields are not mandatory.

To check for alphanumerics plus spaces, you have to create a new validation rule. The cleanest way is to extend the default Validator class and add your own rules. Then you have to create a service provider that registers the extended validator. Finally, you have to add the service provider to app.php and run composer dump-autoload.

I’ll walk you through each of the files you have to create:

File: ExtendedValidator.php

The function validateAlphaNumSpaces() will listen to the rule alpha_num_spaces. The three parameters are the standard parameters for validate functions. We only use $value and not the other parameters because this is a simple rule; $value is what the user entered in the form field.

The function checks $value with a regex and returns true if it matches.
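The article doesn't show the file itself, so here is a minimal sketch based on the description above; the exact regex is an assumption:

<?php namespace App\Services\Validators;

use Illuminate\Validation\Validator;

class ExtendedValidator extends Validator {

    // Listens to the alpha_num_spaces rule.
    public function validateAlphaNumSpaces($attribute, $value, $parameters)
    {
        // Allow only letters, digits and spaces (assumed interpretation).
        return preg_match('/^[a-zA-Z0-9 ]+$/', $value) > 0;
    }
}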

File: ExtendedValidatorServiceProvider.php:

<?php namespace App\Services\Validators;

use Illuminate\Support\ServiceProvider;

class ExtendedValidatorServiceProvider extends ServiceProvider {

    public function register() {}

    public function boot()
    {
        $this->app->validator->resolver(function($translator, $data, $rules, $messages)
        {
            return new ExtendedValidator($translator, $data, $rules, $messages);
        });
    }
}

Then in app.php, at the end of the providers array:

'providers' => array(
    ...
    'App\Services\Validators\ExtendedValidatorServiceProvider',
),

Run composer dump-autoload -o. The -o flag generates an optimized autoloader for faster performance.

Now we can change the validation rule to:

'iban.0' => 'required|alpha_num_spaces'

Array max size

I want to make sure that a hacker/user can submit no more than 3 iban numbers. There is no boilerplate code for that, so we have to write it ourselves. I'll continue with the files I created in the previous steps.

In ExtendedValidator.php:

public function validateArraySize($attribute, $value, $parameters)
{
    $data = array_get($this->data, $attribute);
    if (!is_array($data)) {
        return true;
    }
    // The first rule parameter is the maximum number of elements.
    return count($data) <= $parameters[0];
}

This function will listen to the rule array_size. You can use it like this:

'iban' => 'array|array_size:3'

This makes sure that the iban field is an array and can contain at most 3 elements.

To show a nice error message to the user, add an entry to app/lang/en/validation.php:

return array(
    'array_size' => 'You can only enter :array_size different values for :attribute.',
);

You might wonder how the system knows what :array_size is. Well, it doesn’t. We have to tell Laravel what it is.

Go to ExtendedValidator.php

Enter the following:

/**
 * Replace all place-holders for the array_size rule.
 *
 * @param  string  $message
 * @param  string  $attribute
 * @param  string  $rule
 * @param  array   $parameters
 * @return string
 */
protected function replaceArraySize($message, $attribute, $rule, $parameters)
{
    return str_replace(':array_size', $parameters[0], $message);
}

This will replace :array_size with the value you entered in the validation rules.

Almost there.

I also want the following condition: if a user enters an iban, he also has to enter a bic.

You can use this rule, which works out of the box:

'bic.0' => 'alpha_num_spaces|required_with:iban.0',

required_if with form arrays

For the real daredevils: what if the user first has to check a box before he can enter the iban?

To be able to do this we need a multi_required_if rule that we have to write ourselves. I based it loosely on validateRequiredIf from Validator.php.

In ExtendedValidator.php

/**
 * Required if the corresponding element in another form array matches a value.
 */
protected function validateMultiRequiredIf($attribute, $value, $parameters)
{
    $this->requireParameterCount(2, $parameters, 'multi_required_if');

    // Split e.g. "checkbox-element.0" into the array name and the key.
    $parameterName = substr($parameters[0], 0, strpos($parameters[0], '.'));
    $parameterKey  = substr($parameters[0], strpos($parameters[0], '.') + 1);

    $data = array_get($this->data, $parameterName);
    if (!is_array($data) || !array_key_exists($parameterKey, $data)) {
        return true;
    }

    // The remaining parameters are the values that trigger the requirement.
    $values = array_slice($parameters, 1);
    if (in_array($data[$parameterKey], $values)) {
        return $this->validateRequired($attribute, $value);
    }

    return true;
}

You can use it like this:

'iban.0' => 'multi_required_if:checkbox-element.0,1',

This means that the first iban textfield (iban.0) must be filled in when the first checkbox element (checkbox-element.0) is checked.

Finally, you have to make a rule for each form array element. Unfortunately I didn't have the time to figure out how a validation rule can apply to every element in the array, so with the last example you have to write a rule for each array element:

'iban.0' => 'multi_required_if:checkbox-element.0,1',
'iban.1' => 'multi_required_if:checkbox-element.1,1',
'iban.2' => 'multi_required_if:checkbox-element.2,1',

Sometimes it can be desirable to remove a database column that hosts a foreign key relationship (e.g. in reverse migration scripts). It can be a bit of a hassle to get that done.

Here’s how to do it:

1) Log in to your database and look up the name of the foreign key constraint. If you use phpMyAdmin, go to the table, click the “Structure” tab, click the link “Relation View” and wait a few seconds for it to load. Search for the field “Constraint name”. In my example this is: “contribution_copyright_id_foreign”.
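If you'd rather use a query than phpMyAdmin, something like this (standard information_schema; the table name is taken from the example below) lists the constraint names:

SELECT CONSTRAINT_NAME
FROM information_schema.KEY_COLUMN_USAGE
WHERE TABLE_NAME = 'contribution'
  AND REFERENCED_TABLE_NAME IS NOT NULL;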

2) Go to the Laravel migration script (or create one). The trick is to first drop the foreign key relationship and then drop the column.

public function down()
{
    Schema::table('contribution', function(Blueprint $table){
        $table->dropForeign('contribution_copyright_id_foreign');
        $table->dropColumn('copyright_id');
    });
}

If you want to remove a table where a foreign key is present, you also first have to drop the foreign key relationship.
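For completeness, a sketch of that case, reusing the table and constraint names from the example above:

public function down()
{
    Schema::table('contribution', function(Blueprint $table){
        // Drop the constraint first, then the table itself.
        $table->dropForeign('contribution_copyright_id_foreign');
    });

    Schema::drop('contribution');
}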

A quick tip: if you want autocomplete (IntelliSense) when writing migrations in Laravel, you can add a type hint for the $table variable.

Just add Blueprint before $table in the function argument; Blueprint is the type of the $table variable. You'll be able to see all the options and won't have to check the Laravel docs anymore.

use Illuminate\Database\Schema\Blueprint;

class Payments extends Migration {

    public function up()
    {
        Schema::create('donor_account', function(Blueprint $table){
            $table->engine = 'InnoDB';
            $table->increments('id');
            // ...
        });
    }
}

This is how it looks in PhpStorm 8:

[Screenshot: PhpStorm autocompleting methods on the $table variable]

The Dutch TV documentary Digital Memory Loss (Tegenlicht, Digitaal geheugenverlies) covered some interesting topics concerning the loss of libraries and internet data.

To sum things up:

  • Because of budget cuts, the Dutch government destroys some of its own libraries. The information in these books (some of which are centuries old) gets lost.
  • The counter argument is that “everything can be found on the internet”.
  • But what will be saved on the internet? Every day petabytes are uploaded to the cloud. What will we keep in the future? What if a server crashes or the cloud goes down?
  • Big parts of the internet (1995 – 2002) are already gone. A lot of sites (like fansites) are already offline. Pages are edited or removed within two-month spans.
  • Initiatives like archive.org try to save the old internet. Archive.org also tries to keep a copy of every book ever published.
  • Companies like Google also try to scan every book, but keep them behind a paywall. Google can remove that archive any time they want, without any consequences.
  • Software updates happen constantly; old software gets lost. Data that can only be read by old software gets lost as well.
  • The readers of old data carriers (large floppies, microfilms) are getting very rare. This means that the data on these carriers gets lost as well.
  • Digitizing something doesn’t mean it is saved for the future. Digital archives get outdated as well. Once you start to digitize, you have to perform constant updates to keep the archive alive.

http://www.npodoc.nl/speel.program.44238920.html

Composer is a major part of the Laravel MVC Framework, but it also exists without Laravel; in fact you could use it in any project. This article digs into the different files that are used by Composer. It’s important to understand what these files are and what they do.

composer.json

This is the only file you have to edit manually. In this file you list which packages you want and which versions of those packages to install. Versions can be loose (1.x.x) or specific (1.1.2).
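As a rough sketch, a composer.json might look like this (package names, versions and paths are purely illustrative):

{
    "require": {
        "laravel/framework": "4.2.*",
        "some/package": "1.1.2"
    },
    "autoload": {
        "classmap": [
            "app/models",
            "app/controllers"
        ]
    }
}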

vendor/composer/autoload_classmap.php

  • This file (it contains no class) returns an array of all aliases and files, based on the autoload section in composer.json.
  • It is regenerated on each dump-autoload. If you add a new class somewhere in your project, it will not be loaded unless it is included in autoload_classmap.php (hence you have to execute composer dump-autoload).
  • In Laravel, composer.json includes all controllers, models, commands, migrations, seeds, services and facades in your root folder structure. If you want to put files in a custom folder, you have to add that folder to the autoload section in composer.json; that way it will be included in autoload_classmap.php.
  • autoload_classmap.php also includes the providers from config/app.php.
  • In Laravel, the autoload_classmap is included via app/bootstrap/autoload.php (as /../vendor/autoload.php, which in turn includes the autoload_classmap).

composer.lock

  • This file is not, as the name might suggest, an indication that an install or update is in progress.
  • composer.lock lists the exact version of each vendor package that is installed.
  • If you run composer install and a lock file is present, it will download the versions from composer.lock, no matter what’s inside composer.json.
  • If you run composer install and there is no lock file, it will generate a lock file with all the vendor versions it has installed based on composer.json.
  • If you run composer update, it will overwrite composer.lock with the newest available vendor packages based on composer.json.
  • This means that if you include composer.lock in your Git repository, then clone and execute composer install on another computer, it will download the exact same versions as listed in composer.lock.

What’s the difference between composer dump-autoload, composer update and composer install?

The above text already explains the difference between those commands, but for fast readers:

  • composer install installs the vendor packages according to composer.lock (or creates composer.lock if not present).
  • composer update always regenerates composer.lock and installs the latest available versions of packages based on composer.json.
  • composer dump-autoload won’t download a thing. It just regenerates the list of all classes that need to be included in the project (autoload_classmap.php). Ideal for when you have added a new class to your project.
    • Ideally, you execute composer dump-autoload -o for faster page loads. The only reason it is not the default is that it takes a bit longer to generate (though only slightly noticeable).
  • Published: May 7th, 2014
  • Category: Laravel 4

Mail::send() and Mail::queue() don’t work the same when it comes to passing $data to a view. At least not when you’re passing an Eloquent object (or a “model” object).

E.g.:

$data = array();
$data['myObject'] = $eloquentObject;

Mail::send('emails.hello', $data, function($message) use ($toAddress) {
...
});

This will pass $data to the view so you can access $myObject in the view.

But when you change Mail::send to Mail::queue, $myObject isn’t accessible as expected. This happens when you’re passing an Eloquent (inherited) object. To make this work with queue, you have to serialize the $eloquentObject first and unserialize it later in the view.

$data = array();
$data['myObject'] = serialize($eloquentObject);

Mail::queue('emails.hello', $data, function($message) use ($toAddress) {
...
});

In the view:

{{ unserialize($myObject)->property }}

If you want Laravel to show cached content from Varnish on public pages (so without a cookie), but still want to use a cookie on admin pages, and switch between them, configure the following:

Put every admin page on a subdomain: admin.mysite.com

in routes.php add the following:

Route::group(array('domain' => 'admin.mysite.com'), function()
{
    //admin routes
});

Route::group(array('domain' => 'www.mysite.com'), function()
{
    //public routes
});

Set cookieless session for public pages

in app/config/session.php

  • Set ‘driver’ to ‘array’. The option “array” will not write cookies. This is what we want for the public pages.
  • Set ‘cookie’ to a decent name.

Leave everything else default.
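The relevant part of app/config/session.php then ends up looking something like this (the cookie name is illustrative):

'driver' => 'array',          // "array" does not write cookies: what we want for public pages
'cookie' => 'mysite_session', // pick a decent name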

Override the session driver for admin pages.

The Laravel session is initialized at the very beginning of each webserver request. There’s no point in overwriting the session driver in a controller or in a route filter (as strangely suggested on GitHub), because the session is already loaded and initialized before the route filter kicks in.

To overwrite the session config, you have to edit bootstrap/start.php

In bootstrap/start.php

Right after this line

require $framework.'/Illuminate/Foundation/start.php';

write a code snippet that looks like this:

if (\Request::server('HTTP_HOST') == 'admin.mysite.com') {
    Config::set('session.driver', 'native');
}

By doing this we change the session.driver to “native” (so with a cookie) for the admin pages and not on the public pages.

There is one potential pitfall:

On your admin pages, every asset (css, js, image) must be called from the admin subdomain (except assets from other domains or the cloud).

On your public pages, not a single asset (css, js, image) should be called from the admin subdomain. (so don’t use a “http://admin.mysite.com/images/login.gif” on a www.mysite.com page)

Otherwise, if an asset happens to be a 404 and the request goes through the webserver, it might conflict or create unwanted cookies.

The above example is a stripped down version of my own implementation. You still have to take care of authentication (I use the Sentry 2 package for Laravel). With Sentry and the above setup, you also have to put the login page (or at least the POST action) on the admin subdomain. Otherwise the login won’t work: it will try to write an authentication cookie on the public pages, but can’t because of the “array” session driver, so the user will never be able to log in.

There might be other ways to accomplish the same result, but this setup definitely works.

Sorry for the clickbait title, but now that you’re here, you might as well read about my thoughts (or "issues") on Open Source.

1. Open source is the new demo

Companies used to make private software, but nowadays they tend to create more open source. However, the open source product is often maintained only by the company itself and used as a step-up to the paid version.

Take OpenX for example, a package for online advertising. It comes in two flavors: an open source version (the original) and a paid private version. The open source version is far inferior to, and less maintained than, the private one.

That idea is inherited from the demo age: a demo was a free version that lacked the features needed to be useful. Today the demo version is licensed as open source, simply because open source is popular.

The open source package isn’t made to be perfect; no, it’s only made to get people warmed up for the paid version. (In the case of OpenX: the open source version has many security holes, which makes it hard to consider.)

2. Open source is company karma

Companies get popular by releasing open source libraries next to their private software. I may be cynical, but I feel like a lot of these packages are made for company karma. A lot of companies sponsor open source projects to gain karma from the community and eventually sell their services to them.

Because every company wants to have its own open source library instead of contributing to a library from somebody else, you get a wide field of all sorts of packages that may be abandoned as soon as the company loses interest.

The real, well-working open source projects are the ones that are supported and used by a wide range of people over a long period of time, not the ones that were created because marketing said so.

3. Open source gets sold

I hear you thinking, if private companies want to contribute to open source, why shouldn’t they?

When MySQL was sold to Sun, nobody knew that Oracle would later buy Sun. Widenius, the main developer of MySQL, tried at all costs to prevent Oracle from taking over MySQL. Right before Oracle bought Sun, he forked MySQL into MariaDB. As soon as Oracle owned MySQL, they started adding closed source modules. So there are now two software packages that are roughly the same: MySQL, owned by Oracle, which is partly open and partly closed, and MariaDB, led by Widenius, which is entirely open.

The danger of open source bought or created with private money, is that it might be transformed into closed source software or be taken away from the community. The open source version could be stopped, put on low priority, or be degraded to "demo".

These moves also cause confusion amongst users. Should they use OpenOffice or LibreOffice? And do they care or know what the difference is? And what about organizations that use an open source package which suddenly turns into closed source?

The idea behind open source (or community initiatives like Wikipedia, or non-technology ones like Transition Network) is: you take from the community, you give to the community. Not necessarily in terms of money, but in terms of your skills and your time – whatever your skill may be (even word-of-mouth publicity).

Most initiatives need money, so money is welcome, but your input matters most for the success of the project. Wikipedia needs money to run its servers and pay its few employees, but even with that money it wouldn’t have made it without the help of all the voluntary writers and readers.

4. Forks create chaos

The open source community splits into branches. Splitting into branches is a human thing that has been around since the beginning of politics and religion. Splitting up creates quantity but not quality.

Just take a look at the discussions about Unity, the new desktop layout of Ubuntu. Part of the community solved it by suggesting another Ubuntu that didn’t implement Unity: Linux Mint. And while Linux Mint is great (I use it daily), why couldn’t we simply agree to stick with Ubuntu and implement an option to disable Unity? It’s open source, so it’s possible.

This is where Open Source should make the difference with Microsoft. Microsoft made a similar move by removing the start button and implementing a dysfunctional desktop (Metro) without any way to "change back to normal" (while Windows users crave a solution to make their PCs go faster and don’t care about a new desktop).

Instead of creating one successful well supported product, we create forks, versions that are just slightly different than the original.

All these branches, "doing this differently because we believe it’s better", make it impossible to maintain oversight. This is comparable to Microsoft trying to push their "standards" just for the sake of having their own (in their case: patented) standard.

There are dozens of ways (libraries) to upload a file on a website. If I really want the best solution, I have to go through all these projects and demo or install each one. It would be better to have one or two projects that are flexible and well supported by all browsers. Developers would just have to learn to work with two packages and could start working for any employer or pick up any project. It could be taught at school, and it could be far more popular and better than any of the dozens of libraries today.

jQuery kind of goes in that direction by creating one good, flexible JavaScript library that is widely supported. But the jQuery libraries by third-party developers make it a mess. There’s no oversight of all these modules, the quality differs a lot between projects, and they can conflict with each other or be incompatible with a new/old version of jQuery.

This is the real pain: "wild" libraries as opposed to "well supported" libraries. This is what gives open initiatives a bad name: the lack of equal quality. Because everybody can create open source, there’s no control, hence no quality assurance.

I am well aware of that contradiction. It’s a debate: do you allow anyone to contribute (democratic) and risk quality instabilities, or do you select the contributors that probably will assure quality but make it less open?

5. The question of what to do with “bad” contributors/modules?

At my [previous] job, an alternative online newspaper, we have a comparable problem. Many of our writers are volunteers; some of them can write good articles, some of them can’t. But what do we do with bad writers? There are 2 schools of thought:

  1. We allow bad writers to continue to write for an open, democratic website, where everyone can report what they want, with the risk that bad articles harm our quality level (and reputation). Bad writers take a lot of time and effort (it’s more work to rewrite a bad article than to write a good article yourself).

  2. We only keep the good writers. That would transform our website into a closed medium and conflict with our basic ideas. By maintaining a high standard we could scare away potential new volunteers who think they’re not good enough but might be.

Keep in mind that some volunteers are bad writers but have interesting things to say. However, there aren’t enough resources to train every volunteer who fits that category.

We’ve discussed this for hours and it’s hard to figure out a middle way. Currently we have the idea to "star" contributions which we think are good: a quality label. We only want to make that clear with layout changes, because we don’t want to add a textual "warning-this-article-sucks" disclaimer. That kind of disapproval would make the volunteer displeased, if not angry.

I think that idea would work for Open Source as well, and some projects have started something similar. Drupal contributors, for example, start with a sandbox project that has to be reviewed by an admin. If your sandbox is alright, it will be promoted to a module. Too bad too many modules have features that are just slightly different from one another. This confuses people: "what module should I use? Google Analytics Integrator? SEO Expert? Or just the module named Google Analytics?"

6. Losing sight of “The bigger plan”

Just "starring" doesn’t work if you allow every module by the simple rule that the contributor must be a good coder. There needs to be a bigger plan:

  • What modules do you want?
  • Are the current modules good enough?
  • Which modules should be replaced by better ones?
  • Who wants to manage that?
  • Do we allow everyone to contribute? Or how will we select?
  • Is the project “owned” by a private investor? And do we allow that?
  • How do we collect money in case we need it?
  • How do we get people to contribute?
  • How do we handle requests for certain modules that might not fit our software?
  • Do we risk losing users by not implementing certain features or do we implement everything just for the sake of attracting as many users as possible?
  • Who will decide what to implement? How is that process defined?
  • How do we handle bad content/contributors?
  • Is there a “leader”, someone who pulls the strings? A decision maker? And if not, how do we organize?

I know this comes scarily close to management, but these are questions any serious open project will have to answer some day. It would be a pity if open source projects failed by not thinking these through. These kinds of questions should be answered for every community project, not just tech ones.

7. The lack of management skills amongst developers

The reason, I think, why these questions are left unanswered is that it’s not a pleasant task and it doesn’t add production value right away. If I spend one week thinking about the questions, I lose one week of coding. And maybe my time is limited to one week. In the case of open source, most contributors are developers. And developers want to develop. They don’t want to waste time on the above questions; no, they want to code, rather now than tomorrow.

Many developers, like me, don’t like to "manage". They get behind their computer, start coding, and hope someone will spontaneously say "hey, can I contribute?". That someone would be a great coder with exactly the same state of mind as ourselves, and not some sucker who just created his first HTML page.

Bonus! The fear of losing the project to someone else

If I look deeper into myself, the thought that someone would "take over" my project scares me. That’s perhaps another reason why some questions don’t get answered. If other people get involved, I could lose the project, my name in bold on the about page.

Every now and then, I check back on the paid web projects I left, to see how things went on. What did they implement? What did they cut? How did they handle that complex JS problem?

Sometimes nothing has changed at all: the bug that was reported 5 years ago is still in it, the "temporary" solution has become older than my cat, and the place looks frighteningly… dead. Is this what I created? Did someone forget to turn that server off? Is it all forgotten?

Or, on the other side, the project is gone, replaced by something flashier, dumped on a backup hard disk in a basement.

Luckily, most of the time the project appears to be in good shape: nice features have been added, and the developers clearly knew what they were doing. It has been handled with respect. This is what well-managed open source projects should become. This is why the questions are important.

I’d better start thinking about those questions right away. But first I want to code that feature that will make the project look awesome.

  • Published: November 4th, 2013
  • Category: nginx

We wanted these redirections:

  • project.example.com => www.example.com/project (without changing the url)
  • project.example.com/whatever => www.example.com/whatever (with changing the url)

In other words:

  • I wanted a subdomain that was nothing more than a page on the main site (or a subdirectory). But the user shouldn’t know that.
  • Every link on the subdomain should visibly redirect to the main site.

This turns out to be easy in Apache, but hard to accomplish with Nginx.

This is how you do it

Continue reading “Nginx: redirect a subdomain to a subdirectory without changing the url”…
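The full walkthrough is behind the link above, but as a rough sketch, one way to do this in Nginx looks like the following (server names are illustrative and this is not necessarily the article's exact solution):

server {
    listen 80;
    server_name project.example.com;

    # Serve the main site's /project content for the root URL,
    # without changing the URL in the browser.
    location = / {
        proxy_pass http://www.example.com/project;
    }

    # Visibly redirect everything else to the main site.
    location / {
        return 301 http://www.example.com$request_uri;
    }
}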

