software Archives - developed.be

  • Published:December 19th, 2015
  • Category:Firefox
  • 1 Comment

*sigh* Firefox, it's like every update I love you a little less. You used to be this technically advanced, lightweight browser that showed Microsoft how it's done. But ever since Chrome got popular, you're just chasing whatever the guys at Google are implementing.

Problem

So now Mozilla has removed the browser.search.showOneOffButtons option that restored the classic search bar. That "classic" search bar was one of the main reasons I liked Firefox, and this new thing is just a failure.

When you enter a search query, you can’t see which search engine is selected:

[screenshot: firefox_what]

Am I searching through Google, YouTube, Wikipedia? I have no idea; it only displays the hourglass icon. To find out which search engine is selected, I have to click on it.

[screenshot: firefox_search]

Then I have to click on the icon of the search engine I want, which means I need to know which icon belongs to which site, because the name of the site is not displayed (except when I hover over it, but that causes an unnecessary delay in my workflow).

Though the new search bar is not as bad as Ubuntu's Unity or Windows' Metro, I can't understand why software companies simply don't keep what users like and improve what they complain about.

Solution

The only way to have the old search bar back is to install the Classic Theme Restorer extension. Yes, you have to install an extension to get basic functionality.

You must configure the extension in order for it to work. Go to the preferences (about:addons > “Preferences” button), click “General UI (1)” and check “Old search”.

[screenshot: classic_theme_restorer]

If you want to keep everything else the way it was, uncheck all the checkboxes in all the tabs and set "Tabs (1)" to "Curved tabs (Firefox default)".

[screenshot: Screenshot-4]

Now the question is how long this extension will continue to work, because every Firefox update means that some extensions will stop working.

And now back to Chrome.

Composer is a major part of the Laravel MVC framework, but it also exists without Laravel; in fact, you can use it in any project. This article digs into the different files used by Composer. It's important to understand what these files are and what they do.

composer.json

This is the only file you have to edit manually. In it you list which packages you want and which version of each package should be installed. Versions can be loose (1.x.x) or exact (1.1.2).
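As an illustration, a minimal composer.json could look like this (the packages and version constraints below are only examples, not something Laravel requires):

{
    "require": {
        "laravel/framework": "4.1.*",
        "monolog/monolog": "1.6.0"
    },
    "autoload": {
        "classmap": [
            "app/commands",
            "app/controllers",
            "app/models"
        ]
    }
}

Here "laravel/framework": "4.1.*" is a loose constraint (any 4.1 patch release) and "monolog/monolog": "1.6.0" is an exact one.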

vendor/composer/autoload_classmap.php

  • This file (it contains no class itself) returns an array that maps class names and aliases to files, based on the autoload section in composer.json.
  • This file is regenerated on each dump-autoload. If you add a new class somewhere in your project, it will not be loaded unless it is included in autoload_classmap.php (hence you have to execute composer dump-autoload).
  • In Laravel, composer.json includes all controllers, models, commands, migrations, seeds, services and facades in your root folder structure. If you want to put files in a custom folder, you have to add it to the autoload section of composer.json (see the example after this list). That way it will be included in autoload_classmap.php.
  • autoload_classmap.php also includes the providers in config/app.php
  • In Laravel, the autoload_classmap is included via bootstrap/autoload.php (which requires ../vendor/autoload.php, which in turn includes the autoload_classmap).
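For example (the folder name app/custom is an assumption; use whatever folder you actually created), adding a custom folder to the classmap section of composer.json looks like this:

"autoload": {
    "classmap": [
        "app/commands",
        "app/controllers",
        "app/models",
        "app/custom"
    ]
}

After editing composer.json, run composer dump-autoload so the classes in that folder end up in autoload_classmap.php.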

composer.lock

  • This file is not, as the name might suggest, an indication that an update or install is in progress. It's not.
  • composer.lock lists all exact versions of each vendor package that is installed.
  • If you run composer install and a lock file is present, it will install the exact versions from composer.lock, no matter what's inside composer.json.
  • If you run composer install and there is no lock file, it will generate a lock file of all the vendor versions it has installed based on composer.json
  • If you run composer update it will overwrite the composer.lock file with the newest available vendor packages based on composer.json
  • This means that if you include composer.lock in your Git repository, then clone and execute composer install on another computer, it will install exactly the same versions as recorded in composer.lock (as shown in the example below).
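A typical workflow on a second machine would then be (the repository URL is a placeholder):

git clone https://example.com/my-project.git
cd my-project
composer install    # reads composer.lock and installs the exact locked versions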

What’s the difference between composer dump-autoload, composer update and composer install?

The above text already explains the difference between those commands, but for fast readers:

  • composer install installs the vendor packages according to composer.lock (or creates composer.lock if not present),
  • composer update always regenerates composer.lock and installs the latest available versions of the packages based on composer.json
  • composer dump-autoload won’t download a thing. It just regenerates the list of all classes that need to be included in the project (autoload_classmap.php). Ideal for when you have a new class inside your project.
    • Ideally, you execute composer dump-autoload -o for a faster load of your webpages. The only reason it is not the default is that it takes a bit longer to generate (but that is only slightly noticeable).

If you want Laravel to serve cached content from Varnish on public pages (so without a cookie), but still use a cookie on admin pages, and switch between the two, configure the following:

Put every admin page on a subdomain: admin.mysite.com

In routes.php, add the following:

Route::group(array('domain' => 'admin.mysite.com'), function()
{
    // admin routes
});

Route::group(array('domain' => 'www.mysite.com'), function()
{
    // public routes
});

Set cookieless session for public pages

in app/config/session.php

  • Set ‘driver’ to ‘array’. The option “array” will not write cookies. This is what we want for the public pages.
  • Set ‘cookie’ to a decent name.

Leave everything else default.
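The relevant lines in app/config/session.php then look roughly like this (the cookie name is just an example):

'driver' => 'array',          // the array driver keeps the session in memory only, so no cookie is written
'cookie' => 'mysite_session', // a decent name; only relevant once the driver is overridden for admin pages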

Override the session driver for admin pages.

The Laravel session is initialized at the very beginning of each request. There's no point in overriding the session driver in a controller or in a route filter (as strangely suggested on GitHub), because the session is already loaded and initialized before the route filter kicks in.

To overwrite the session config, you have to edit bootstrap/start.php

In bootstrap/start.php

Right after this line

require $framework.'/Illuminate/Foundation/start.php';

write a code snippet that looks like this:

if(\Request::server('HTTP_HOST') == 'admin.mysite.com'){
    Config::set('session.driver', 'native');
}

By doing this we change session.driver to "native" (so with a cookie) for the admin pages, but not for the public pages.

There is one potential pitfall:

On your admin pages, every asset (css, js, image) must be called from the admin subdomain (except assets from other domains or the cloud).

On your public pages, not a single asset (css, js, image) should be called from the admin subdomain. (so don’t use a “http://admin.mysite.com/images/login.gif” on a www.mysite.com page)

Otherwise, if an asset happens to return a 404 and the request goes through the webserver, it might conflict or create unwanted cookies.

The above example is a stripped-down version of my own implementation. You still have to take care of authentication (I use the Sentry 2 package for Laravel). With Sentry and the above setup, you also have to put the login page (or at least the POST action) on the admin subdomain. Otherwise the login won't work, because it will try to write an authentication cookie on the public pages but can't, due to the "array" session driver, so the user will never be able to log in.

There might be other ways to accomplish the same result, but this setup definitely works.

These are some wild thoughts about open source: what it is and what it should be.

Some trends:

Open source is the new demo

Companies used to make private software, but now they tend to create more open source. Often, though, the open source product is maintained only by the company itself and used as a stepping stone to the paid version.

Take OpenX, for example, a package for online advertising. It comes in two flavors: an open source version (the original) and a paid private version. The open source version is far inferior to the private one and less well maintained. That idea is inherited from the demo age: a demo was a free version that lacked the features needed to be useful. Today the demo version is licensed as open source, but only because open source is popular.

The open source package isn't made to be perfect; no, it's only made to get people warmed up for the paid version. (In the case of OpenX: the open source version has many security holes, which makes it hard to consider.)

Open source is company karma

Companies get popular by releasing open source libraries next to their private software. I may be cynical, but I feel these packages are only made for company karma. A lot of companies sponsor open source projects only to gain karma from the community and eventually sell their services to them.

Because every company wants its own open source library instead of contributing to somebody else's, you get a wide field of all sorts of packages that might be abandoned as soon as the company loses interest. The real, well-working open source projects are the ones that are supported and used by a wide range of people over a long period of time, not the ones created because marketing told us so.

I hear you thinking, if private companies want to contribute to open source, why shouldn’t they?

When MySQL was sold to Sun, nobody knew that Oracle would buy Sun. Widenius, the main developer of MySQL, tried at all costs to prevent Oracle from taking over MySQL. Right before Oracle bought Sun, he forked MySQL into MariaDB. As soon as Oracle owned MySQL, it started adding closed source modules. So now there are two software packages that are roughly the same: MySQL, owned by Oracle, which is partly open and partly closed, and MariaDB, led by Widenius, which is entirely open.

The danger of open source that is bought or created with private money is that it might be transformed into closed source software or taken away from the community. The open source version could be stopped, put on low priority, or degraded to a "demo".

These moves also cause confusion among users. Should they use OpenOffice or LibreOffice? And do they care about, or even know, the difference? And what about organizations that use an open source package which suddenly turns into closed source?

The idea behind open source (or community initiatives like Wikipedia, or non-technology ones like Transition Network) is: you take from the community, you give to the community. Not necessarily in terms of money, but in terms of your skills and your time, whatever your skill may be. Most initiatives need money, so money is welcome, but your input matters most for the success of the project. Wikipedia needs money to run its servers and pay its few employees, but even with that money it wouldn't have made it without the help of all the volunteer writers and readers.

Forks create chaos

The open source community splits into branches. Splitting into branches is a human thing that has been around since the beginning of politics and religion. Splitting up creates quantity, not quality. Just take a look at the discussions about Unity, the new desktop layout of Ubuntu. Part of the community solved it by moving to an Ubuntu derivative that doesn't implement Unity: Linux Mint. And while Linux Mint is great (I use it daily), why couldn't we simply agree to stick with Ubuntu and implement an option to disable Unity? It's open source, so it's possible.

This is where open source should make the difference with Microsoft. Microsoft made a similar move by removing the start button and implementing a dysfunctional desktop (Metro) without any way to "change back to normal" (while Windows users crave a way to make their PCs go faster and don't care about a new desktop).

Instead of creating one successful well supported product, we create forks, versions that are just slightly different than the original.

All these branches, "doing this differently because we believe it's better", make it impossible to maintain oversight. This is comparable to Microsoft trying to push their "standards" just for the sake of having their own (in their case: patented) standard.

There are dozens of ways (libraries) to upload a file on a website. If I really want the best solution, I have to go through all these projects and demo or install each one. It would be better to have one or two projects that are flexible and well supported by all browsers. Developers would only have to learn to work with two packages and could then start working for any employer or take over any project. It could be taught in school; it could be far more popular and better than any of the dozens of libraries today.

jQuery kind of goes in that direction by creating one good, flexible JavaScript library that is widely supported. But the jQuery plugins by third-party developers make it a mess. There's no oversight over all these modules, the quality differs wildly between projects, and they can conflict with each other or be incompatible with a newer or older version of jQuery.

This is the real pain: "wild" libraries as opposed to "well supported" libraries. This is what gives open initiatives a bad name: the lack of consistent quality. Because everybody can create open source, there's no control, hence no quality assurance.

I am well aware of that contradiction. It's a debate: do you allow anyone to contribute (democratic) and risk unstable quality, or do you select the contributors, which probably assures quality but makes the project less open?

What to do with “bad” contributors/modules?

At my job, an alternative online newspaper, we have a comparable problem. Many of our writers are volunteers; some of them can write good articles, some of them can't. But what do we do with the bad writers? There are two schools of thought:

1. We allow the bad writers to continue: an open, democratic website where everyone can report what they want, with the risk that bad articles harm our quality level (and reputation). Bad writers take a lot of time and effort (it's more work to rewrite a bad article than to write a good one yourself).

2. We only keep the good writers. That would transform our website into a closed medium and conflict with our basic ideas. By maintaining a high standard we could scare away potential new volunteers who think they’re not good enough but might be.

Keep in mind that some volunteers are bad writers but have interesting things to say. However, there aren't enough resources to train every volunteer who fits that category.

We've discussed this for hours and it's hard to find a middle way. Currently the idea is to "star" contributions we think are good: a quality label. We only want to make that visible through layout changes, because we don't want to add a textual "warning, this article sucks" disclaimer. That kind of disapproval would make the volunteer displeased, if not angry.

I think that idea would work for open source as well, and some projects have started something similar. Drupal contributors, for example, start with a sandbox project that has to be reviewed by an admin. If your sandbox is alright, it is promoted to a module. Too bad so many modules have features that are only slightly different from one another. This confuses people: "Which module should I use? Google Analytics Integrator? SEO Expert? Or just the module named Google Analytics?"

The bigger plan is of most importance

Just "starring" doesn't work if you accept every module by the simple rule that the contributor must be a good coder. There needs to be a bigger plan:

  • What modules do you want?
  • Are the current modules good enough?
  • Which modules should be replaced by better ones?
  • Who wants to manage that?
  • Do we allow everyone to contribute? Or how will we select?
  • Is the project “owned” by a private investor? And do we allow that?
  • How do we collect money in case we need it?
  • How do we get people to contribute?
  • How do we handle requests for certain modules that might not fit our software?
  • Do we risk losing users by not implementing certain features or do we implement everything just for the sake of attracting as many users as possible?
  • Who will decide what to implement? How is that process defined?
  • How do we handle bad content/contributors?
  • Is there a "leader", someone who pulls the strings? A decision maker? And if not, how do we organize?

I know this comes scarily close to management, but these are questions any serious open project will have to answer some day. It would be a pity if open source projects failed by not thinking them through. These types of questions should be answered for every community project, not just the tech ones.

The reason these questions are left unanswered, I think, is that it's not a pleasant task and it doesn't add production value right away. If I spend one week thinking about the questions, I lose one week of coding. And maybe my time is limited to one week. In the case of open source, most contributors are developers. And developers want to develop. They don't want to waste time on the above questions; no, they want to code, rather now than tomorrow. Many developers, like me, don't like to "manage". They get behind their computer, start coding, and hope someone will spontaneously say "hey, can I contribute?". That someone would be a great coder with exactly the same state of mind as ourselves, and not some sucker who just created his first HTML page.

If I look deeper into myself, the thought that someone would "take over" my project scares me. That's perhaps another reason why some questions don't get answered. If other people get involved, I could lose the project, my name in bold on the about page.

Every now and then I check back on the paid web projects I have left, to see how things went on. What did they implement? What did they cut? How did they handle that complex JS problem?

Sometimes nothing has changed at all: the bug that was reported five years ago is still there, the "temporary" solution has become older than my cat, and the place looks frighteningly… dead. Is this what I created? Did someone forget to turn that server off? Is it all forgotten?

Or, on the other side, the project is gone, replaced by something else and flashier, dumped on a backup hard disk in a basement.

Luckily, most of the time the project appears to be in good shape: nice features have been added, and the developers clearly knew what they were doing. It has been handled with respect. This is what well-managed open source projects should become. This is why the questions are important.

I'd better start thinking about those questions right away, but first I want to code that feature that will make the project look awesome.

  • Published:November 4th, 2013
  • Category:nginx

We wanted these redirections:

  • project.example.com => www.example.com/project (without changing the url)
  • project.example.com/whatever => www.example.com/whatever (with changing the url)

In other words:

  • I wanted a subdomain that was nothing more than a page on the main site (or a subdirectory). But, the user shouldn’t know that.
  • Every link on the subdomain should visibly redirect to the main site.

This turns out to be easy in Apache, but hard to accomplish with Nginx.

This is how you do it

Continue reading “Nginx: redirect a subdomain to a subdirectory without changing the url”…
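The working configuration is behind the link above. Purely as a sketch of one possible approach (not necessarily the article's exact solution; the server names follow the example above), an Nginx server block could look like this:

server {
    listen 80;
    server_name project.example.com;

    # serve the main site's /project page for the subdomain root, without changing the URL in the browser
    location = / {
        proxy_pass http://www.example.com/project/;
    }

    # every other path visibly redirects to the main site
    location / {
        return 301 http://www.example.com$request_uri;
    }
}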

Laravel works out of the box with Memcached. However, there's a difference between the PHP extensions Memcached and Memcache (without the D). To get Laravel to work with Memcache, you can write a package yourself.

First of all: don't edit the original files of the Laravel package. I know you could just find and replace every instance of Memcached with Memcache, but as soon as you update your project, every change you've made to the core files will be overridden and lost. That's why you have to create a separate package that adds functionality to the system instead of blindly editing the system.

Continue reading “Laravel use Memcache instead of MemcacheD”…

It felt like a Monday morning. After my alarm clock didn't go off (+1 hour), I noticed the next train only left an hour later, so instead of arriving at half past nine I arrived at eleven and missed the first speaker.

Lightswitch + Drupal

Anyhow. I caught the last quarter of the talk on using Drupal together with Lightswitch (= Visual Studio). Apparently not a lot of devs were interested in the subject, because there were only 20 people in the room. And those who didn't attend were right, because all the speaker could tell us was that marrying Drupal and Lightswitch could only result in a divorce.

Lightswitch can create an HTML5 admin environment based on a data layer. That data layer could be your Drupal database. Nothing works out of the box for Drupal, because MS of course wants to integrate its own software (SharePoint, Office) and not someone else's.

Another downside of all this MS drag-and-drop automatic-data-layer-builder stuff is that when you change your database, something on the other side might break and you could end up writing the data layer yourself (as an attendee commented). Plus, the actual HTML output looks weak and is unusable in a serious professional environment. Don't try this at work, pros!

Drupal 8 discussion panel

Three Belgian core devs (swentel, Wim Leers, aspilicious) held a one-hour Q&A about Drupal 8. They all had a lot to tell, so the number of actual questions from the audience was limited.

You had to know some Drupal 8 beforehand, because new concepts (say WSCCI, PSR or Twig) were discussed without being explained.

The main message was that Drupal 8 is ready to port your modules to. But, there’s still a lot of work to be done. There are still upcoming API-changes, you can’t translate a node’s title yet and there are various other big and small release blockers. But: Views should be finished. Ah!

And why Drupal 8 should be better than its predecessors:

  • PSR compliant. PSR is a PHP coding standard. (The aim is PSR-4; however, it's uncertain whether the project will get there.)
  • Display Suite is now a core module (however, is this really such a big plus?)
  • Getting rid of hooks in favor of a more object-oriented approach (however, hooks still exist)

Continue reading “DrupalCamp Leuven 2013, a brief Saturday review.”…

  • Published:September 12th, 2013
  • Category:Disqus
  • 3 comments

It’s like a virus these days. Every blog I come across uses Disqus as a comment system (if they don’t use the Facebook comment system). So why shouldn’t you do this?

Pro

Ok, I get it: people are lazy, they don't want to register at every site just to post a comment. And you, as a blogger, don't want to pay Akismet or Mollom for decent spam protection, or waste time moderating the comments.

But this is the price you pay:

  • You outsource your comments. On the web, when you get something "for free", it means Disqus (or Facebook, or IntenseDebate, or any other comment system) owns your comments and can do whatever it wants with them. In other words:
    • You give Disqus the opportunity to use your visitors' comments as a source of ad income.
    • If Disqus were to quit, you might lose all the comments on your blog and the comments you posted on other blogs. (Maybe they'd provide a backup, but then you'd still need to integrate it into your own comment system.)
    • Disqus might get out of fashion or a new “fancy” comment system could come up. Will you be able to transfer your Disqus comments to the new system?
  • People still need a profile somehow. And maybe some people (you know, those crazy privacy freaks) don't want everything they ever commented on to remain queryable on the web for the rest of their lives (Disqus makes public profiles of people's comments).
  • You still need to moderate.
  • Your website might slow down if Disqus is slow or unreachable.
  • Everybody uses the same comment system! Come on! All websites get so predictable and look all the same.

So stop using Disqus or Facebook comments. Thanks!

  • Most fonts are located in /usr/share/fonts
  • But there isn’t just one folder. You can find the location of all the font folders in /etc/fonts/fonts.conf
  • You can save custom fonts in the folder ~/.fonts. It's possible that the folder doesn't exist yet, in which case you have to create it (see the example after this list).
  • The filenames of some fonts are different from the actual fontname. Search in the directory for parts of the fontname. Eg: the “Monospace” font in Ubuntu is actually an alias for DejaVuSansMono and is called ttf-dejavumono.ttf
  • "sans" or "sans serif", like Arial, is a font without the small strokes (serifs) attached to the characters (sans is French for "without"). Use it for screens and websites.
  • "serif", like Times New Roman, has those strokes attached to the characters. Use it for printed text or books.
  • "mono" or "monospace", like Courier New, is a font whose characters are all equally wide. Use it for coding and html.
  • To clear the font cache, like when you’ve downloaded a new font, use this command:
    sudo fc-cache -vf
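For example, installing a downloaded font just for your own user could go like this (the font filename is a placeholder):

    mkdir -p ~/.fonts                     # create the per-user font folder if it doesn't exist yet
    cp ~/Downloads/MyFont.ttf ~/.fonts/   # put the font file in it
    fc-cache -vf ~/.fonts                 # rebuild the cache for that folder (no sudo needed for your own fonts)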

 

This tutorial explains how to secure your Dropbox files with Truecrypt in Ubuntu (or Linux Mint). It assumes you already know Truecrypt and have a basic understanding of the Unix folder structure.

Why secure your files in Dropbox?

I use Truecrypt for keeping my personal files. Basically all my important files are in a 50GB volume. My Dropbox folder was located inside the Truecrypt volume.

Like this:

/media/truecrypt1/Dropbox/all_my_files/

I wasn't satisfied with that system. Who knows what happens to your data once you hand it over to Dropbox? A hacker could get access to my account, or a Dropbox employee, or the government (not all stories are conspiracies).

There wasn't really a point in securing my data with Truecrypt when everything inside the Truecrypt volume was copied to "the cloud".

What bugged me even more were the permissions on my filesystem. Some folders need different permissions (www root, root-owned files, MySQL files). When Dropbox encounters a file it can't access, it keeps on indexing and consuming CPU.

How does it work?

I came up with the following script:

Continue reading “Secure Dropbox by using Truecrypt volumes”…
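The actual script is behind the link above. Purely as a sketch of the general idea (it is my assumption here that a Truecrypt container file lives inside the Dropbox folder and files are copied into it while mounted; all paths and names are placeholders):

#!/bin/bash
# mount a Truecrypt container that lives inside the Dropbox folder
truecrypt ~/Dropbox/private.tc /media/truecrypt2

# copy the files to protect into the mounted container
rsync -av --delete ~/important-files/ /media/truecrypt2/

# dismount all mounted Truecrypt volumes, so Dropbox only ever sees the encrypted container file
truecrypt -d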
