Archive for February, 2011

Very busy days as usual before leaving for one week of vacation


It was a very dense week at work: preparing some customer meetings my colleagues will deliver next week, plus our Symposium preparation, some meetings around the OSSI, …

Moreover, on top of all these activities, I wanted to publish pb 0.11.1 and a new MondoRescue release. So I’m happy to finish the week-end very tired and ill, but with a success… still in progress: most of pb is already on the FTP server, and MondoRescue will follow soon.

So I think that my next week with my wife in Turkey will be relaxing and enjoyable as I really need it.

I’ll make the official announcements of the two projects next week, I think, as I won’t have time before leaving very early tomorrow.

Next version will support the Mageia distribution


With the announcement of the first Mageia ISO, I thought it was ready enough to try adding Mageia support to pb. So after patch rev [1213], I produced my first Mageia packages, made with pb for pb itself.

A good start, and good preparation to allow for an even smoother migration when the time comes to adopt this new distribution.

I’ve also made lots of patches to better support additional sources when needed for the build, and to fix errors related to parallel builds that were in 0.10.1. But that again created an incompatibility, forcing people to update pb inside their VM/VE/RM once more so it works flawlessly with the pb outside.

Speaking of RM, this is a new concept in pb ! Remote Machines are now supported. Well, not everything works fine as of now: setuprm has improved but still has bugs. But we are starting to see the light, being able to produce packages not only locally inside VMs or chroots (aka Virtual Environments), but also by launching pb on a Remote Machine running any supported OS (HP-UX being the next one), which opens the door to working with build farms very easily.

All of that is in version 0.11.1, which should be out before the 14th of March for our HP Technical Excellence Symposium in Grenoble, where I’ll present it and also run a lab for our Solution Architects.

and infrastructure update


I took the opportunity of a planned Data Center shutdown to do a task I had postponed way too long: migrating my old TC4100 NetServer, which was hosting the projects I’m working on in our Solution Center, to a much less obsolete ProLiant system with much more capacity.

I also updated the underlying distribution to the latest Mandriva 2010.2 (Mageia is still not ready for production usage), and seeing that trac, my main tool of choice for helping me manage these projects, was still at version 0.11.x, I decided to look at cooker, take the latest 0.12.1 and backport it onto my system (python-genshi was also needed).

It was not too complicated, as long as you follow the upgrade guidelines for trac. In short: do upgrade, wiki upgrade and repository resync ‘*’. I also took that opportunity to have a single trac instance managing the 3 projects in a consistent way, sharing as much as possible between them. Notice the new [inherit] possibility provided in the trac.ini file as an easy way to do it.
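For reference, that sequence can be sketched as a small shell helper (the environment path is a hypothetical example; the subcommands are the standard trac-admin ones from the trac upgrade guide):

```shell
# Sketch of the trac 0.11.x -> 0.12.x upgrade sequence described above.
# The environment path is hypothetical; adapt it to your installation.
upgrade_trac() {
    env="${1:-/var/lib/trac/myproject}"
    trac-admin "$env" upgrade                # upgrade the environment (db schema)
    trac-admin "$env" wiki upgrade           # refresh the default wiki pages
    trac-admin "$env" repository resync '*'  # resync all repositories (0.12 multi-repo syntax)
}
```

The sharing between the instances then boils down to an [inherit] section with a line like file = /etc/trac/shared.ini (path hypothetical) in each project’s trac.ini.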

Lots of new features are provided now ! The interface is still very nice and performant (well, new HW helps as well 😉). Once the WebAdmin plugin was installed and the trac.ini files cleaned up, I was at work again, with a much more maintainable environment. So kudos to the trac team here !

The Web sites and FTP services are working just fine, but I still need to work on re-enabling Sympa to be completely operational.
Last item I really need to look at is a way to reduce spam in tickets and wiki pages, so the captcha method seems to be the way to go. Other ideas welcome !!

So most of my Debian or Fedora friends would ask why I’m still choosing Mandriva for that. Well, the answer is simple for me: it provides all the tools I need to do what I have to do on that machine. Which means, in addition to the services already mentioned, creating yum repositories (createrepo is there), dpkg ones (dpkg-scanpackages and apt-ftparchive are also there) or urpmi ones (genhdlist is also there). And apart from Mageia, I don’t know of any distribution that would allow me to do all that.
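As an illustration, regenerating the metadata for all three package formats with those tools could look like this (the directory layout is hypothetical, just to show which tool serves which format):

```shell
# Hypothetical layout: one subdirectory per package format under $repo.
make_repo_metadata() {
    repo="${1:-/var/ftp/pub/myproject}"
    # yum repositories: createrepo builds the repodata/ tree from the RPMs
    createrepo "$repo/rpms"
    # dpkg/apt repositories: build a compressed Packages index
    ( cd "$repo/debs" && dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz )
    # urpmi repositories: generate the hdlist/synthesis files
    genhdlist "$repo/mdv"
}
```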

And anyway, if a tool is not in it, I just have to add it 😉 And the Mandriva/Mageia ecosystem is still IMHO the friendliest for receiving contributions. So, even if the future may be seen as uncertain, it’s still for me the way to go.

Services are nearly all back online, so thanks should go to my HP colleagues, the Solution Center for hosting, the trac, vsftpd, Apache, Sympa and Mandriva teams. Please use the projects, report bugs, write documentation, share and enjoy !

Fosdem 2011 Report – Day 2


Second day at Fosdem. This year, I decided to go and visit the perl community, as I’m coding more and more in perl, and would like to catch up on news and additional modules, and meet more perl hackers !

Gabor Szabo – Using Perl6 today
No real 6.0 announcement. Already published and improving. Gabor gave lots of info on perl5/perl6 differences:

  • hash element prepended with % (including pointers)
  • array element prepended with @ (including pointers)
  • Expressions in {} are interpreted and executed
  • Variables may be typed
  • Chaining conditions 23 <= $age <= 42 (avoids and)
  • Junction if $age == 2|3|4 (avoids or)
  • Easier access to array elements (including in pairs, …) with missing parts managed – matrix type of computation
  • New ‘Z’ operator which allows array combination
  • Laziness allows infinity to exist: the full list isn’t generated from scratch anymore. my @x = 1..Inf works !
  • The number of function parameters is checked, and their types as well. So we can now pass multiple arrays to functions without flattening issues. Optional params are also managed.
  • Types can be defined by the user. Constraints can be put on params.
  • Operators can also be combined (Z and ~ – concat) on arrays e.g. (Notion of meta operator, hyper operator, user created operator)
  • Perl 6 manages classes methods (public, private)
  • Regex also have evolved a lot – Grammars can also be defined (based on regex) and inherited as well.

One quote I really liked: “Perl6 is fully buzzword compatible”.
A very interesting presentation on perl’s future (well, present in fact), even if I’ve not tested it up to now. It looks promising, but a huge change.

Damien Krotkine (Curses::Toolkit)

Damien advertised the French perl event (French Perl Workshop 2011) in Paris, the Perl Dancer community, and his book, Perl Moderne. As it was also warmly recommended by Dominique Dumont (author of Config::Model), I bought it, and even had the author’s signature on it !! I started it in the train back from Brussels, and indeed it’s a good one, focusing on specific topics (so not a bible), but very nice to read and informative. Of course, as you guessed from the name, it’s in French 😉

Damien then talked briefly about Curses::Toolkit, as all this advertising took a bit too much time IMO. He covered:

  • Curses::Toolkit curses binding for perl inspired by GTK
  • Why Curses::Toolkit ? The existing Curses is too low level, Curses::UI buggy and inflexible; Curses::Widget …
  • Curses::Toolkit is a real toolkit with widgets and events: OO, driven by keyboard, mouse and timer events, using POE.
  • Curses::Toolkit uses themes and is very easy to customize

Damien then concluded by making an impressive demo of a modern character-based interface.

Mark Overmeer (Perl Data Structures)

Excellent talk by Mark, who covered in 1 hour as many topics as he could from his list, without exhausting it !

He underlined:

  • the importance of scalars in perl: used for everything. Each consumes 28 bytes, because they can store multiple values (dual var, $!)
  • False values in perl: undef, 0, 0.0, "", "0" (the most dangerous). die "no fn" unless $fn; is a mistake, as filename "0" is false even though it exists; he recommends using length instead. $x = $temperature || 20 is also a mistake if the temperature is 0. Recent perl provides the // (defined-or) operator for that.
  • An array is a copy of a list of params (he insisted on difference between list and array)
  • Context is the specificity of perl: an expression cannot be understood without its context. There is a void context. An array in scalar context gives its length. There is list context (@a = 3; works). A good way to loop on arrays: foreach my $x (@a) { print $x; }. $x[3] is a promise of a scalar.
  • Array slice assignment: @x[1, 3, -1] = 6..8. Also valid is (getpwnam $x)[3,-1] (gives uid/shell). getpwnam in list context gives the 10 elements; in scalar context it gives the uid (you only know that by reading the man page).
  • In hashes, you can delete elements with delete $a{b}. Setting $a{b} = undef is different: exists $a{b} then still gives true with undef, while defined $a{b} gives false.
  • An array knows the order, a hash doesn’t and is 20 times less performant. $h{time} uses the literal string as its key, but $h{(time)} calls the time function and uses the result as the key. @h{'x','y','z'} = 1..3 works (again a promise of an array) and creates an initialized hash.
  • Tip: my %str2errno = reverse %errno2str; (list context). Another tip: @h{keys %y} = values %y;

Dense and useful session.

I took a break and came back half an hour later, time to say hello to Bdale and some Mageia friends.

Stefan Hornburg (Template::Zoom)

I had more problems finding that talk interesting. I may not be the right audience, but the monotone voice wasn’t helping either. Stefan explained that:

  • Base is separation of Web design and programming
  • Some templates do not respect this (Template::Toolkit, HTML::Zoom)
  • T::Z provides static HTML file and spec file
  • Use Interchange (FLOSS e-commerce server) and ITL language
  • Config can be done with XML or Config::Scoped

Examples were given.

SawyerX (Moose: Postmodern metaclass-based object system for perl5)

After the previous talk, it was refreshing to see the enthusiasm SawyerX deployed to convince us how wonderful Moose is. And I must confess that even if I’m not a big fan of the Object Oriented approach (showing my age here !), I was ready to try it after that talk. Excellent one IMO.

He went on explaining that an object in perl 5 is a blessed hashref (bless {}, __PACKAGE__; using the package’s namespace), that the new method has to be written manually, that self is the invocant, and he underlined the problems with standard objects in perl, showing that only half of that code is really needed.

He went on with Moose.
Defining an object in Moose is as simple as:

package ...;
use Moose;

has name => (
    is  => 'rw',  # or ro
    isa => 'Str', # attributes have type constraints: Str, Int, ArrayRef, Regexp, HashRef[Str] + inheritance + own types
);
# setter/getter methods are generated automatically

Inheritance is as simple as:
extends 'ParentClass'; (may have multiple)

Roles are behaviours (not classes of their own); using one is as simple as:
with 'a_role'; (may have multiple)

Hooks are ways to change the behaviour from inside Moose:

before leaving => sub {
    my $self = shift;
    ...
};

after leaving => sub {
    my $self = shift;
    ...
};

around login => sub {
    my $orig = shift;
    my $self = shift;
    # runs the login method only when the security check is ok
    $self->security_check and $self->$orig(@_);
};

Some attributes:

default    => 3,
default    => sub { {} },  # but rather use builder
required   => 1,           # the attribute is required
lazy       => 1,           # only compute the value as late as possible (useful e.g. to avoid infinite loops)
builder    => 'build_it',  # Moose doesn't know it, so you have to code it yourself:
                           #   sub build_it { my $self = shift; ... }
clearer    => 'clear_it',  # generated by Moose; clears the value as if it never existed
                           # (but does not go back to the default)
predicate  => 'has_it',    # checks that an attribute value exists (including undef) - doesn't create anything
lazy_build => 1            # same as lazy => 1 + lots more

The final quote said: “Moose produces beautiful, clean and stable code and is here to stay”.

Additional modules:
MooseX::SimpleConfig: automatic creation of the structure from a config file.
Catalyst is now based on Moose. The performance penalty is minimal, especially for long-running apps.

Incredible talk. Worth skipping lunch to hear it.

Then I attended an unplanned session, not on the paper program:

Alex Balhatchet – Writing readable and maintainable perl

That talk covered some good tips and tricks on how to write perl code that is here to stay as well !
Generic advice:

  • use strict is mandatory (avoids typos)
  • use warnings
  • use autodie # make open() and others die on error
  • use feature ‘say’;
  • He advertised CPAN (90,000 modules, mostly well documented), which makes code more readable/maintainable. The problem is one of choice (use testers reports, ratings, dates). Task::Kensho gives good recommendations.

Best practices:

  • code in paragraphs
  • throw exceptions (rather than error codes) with die (catch them with eval, or now try/catch)
  • use builtins (use readline($fh), warn instead of print STDERR, glob instead of …)
  • use Scalar::Util, List::Util, List::MoreUtils (they come with most perls): they bring min, max, last, first, …
  • Be consistent with existing code base (inconsistent is worse than unreadable)
  • Make sure there are tests, and good tests.
  • Perl::Critic (and perlcritic) does static analysis (it references the whole Perl Best Practices book !!)
  • Perl::Tidy (and perltidy) makes code more readable.

The presentation is available at

Gabor Szabo – Padre
Gabor came back on stage to present Padre, an IDE written in perl 5 for perl, focused on beginners or occasional users.
I was interested for my son, to whom I’m teaching some perl now, and who would be a perfect candidate to use that tool (/me being a vi/vim/gvim user for 24 years, and not ready to change ;-))
He underlined that use diagnostics improves error messages. He made a demo of multiple Padre features. Alt-/ shows the contextual menu (for keyboard people !). Variable replacement is based on content (it differentiates $x from $x[0] and $x{'foo'}) and replaces the right one.

Paulo Castro – Packaging Perl and its deps

Paulo explained his problem: delivering a version of perl with all its modules (400+) for his applications.
He showed multiple possibilities and the one finally retained, which consists of having a single package including perl and all the CPAN modules built for that version, using a local mirror to build it.
He also mentioned pmtool for inventory, the CPAN::Site module and the CPAN Distroprefs features.

Even if it was interesting, as a packager myself I still find it odd to use that approach instead of the packaging format of the underlying Linux distribution, even for 400+ packages. In fact, for that type of work, I’d rather pick and choose the Linux distribution with the most perl packages already done, and provide the rest myself. Debian or Mandriva/Mageia could be a good start in that perspective.

It was then time to pack everything, go to the train station and head back to Grenoble. The 4 and a half hours in the train were used to clean up my perl code, as that series of presentations gave me lots of ideas and energy to do it !!

I look forward to participating next year and, hopefully, to submitting earlier so I can be a speaker again.

Fosdem 2011 Report – Day 1


This year, thanks to my management’s support, I was able to attend Fosdem 2011 in Brussels. HP was a sponsor this year, and I think this event deserves it, so I’ll recommend sponsoring it again next year, as this is one of the best community events in EMEA I’ve been able to attend (along with the RMLL/LSM).

The list of speakers is impressive, with key developers of the most famous projects. However, it targets sysadmins and application developers more than kernel hackers, unlike Linux Conf Australia.

I arrived at Fosdem Friday evening by train, where I was lucky to travel with Dominique Dumont, who explained to me in detail how I could use his Config-Model perl module to manage my configuration files (I have quite a lot, with lots of info in them). We decided to join the restaurant where other fosdemers were eating for the Devops Meetup @ Fosdem, including the FusionInventory project leaders Gonéri Le Bouder, David Durieux and Walid Nouh. We went back to the hotel not too late, as a loaded week-end was on the horizon 😉

I attended a lot of interesting sessions. First day, it started with the keynotes:

Eben Moglen – Keynote Why Political Liberty Depends on Software Freedom More Than Ever

An extremely interesting talk (as expected); as it was the first time I was able to attend one of his sessions live, I was really happy to see him in action, so passionate and energizing for the community listening to him. The room was full, so hundreds of people were exposed to his great talk.
Some points he underlined:

  • Social media we have today belong to private companies.
    Recent events in Tunisia, Egypt show importance of the Internet and the Social media.
    If states shut down the Internet, then no revolution is possible anymore.
  • So he underlined the importance of putting in place mesh network to support liberty.
  • He referred to the Washington Post’s “Top Secret America”
  • He also insisted on the number of private Google versions existing worldwide, especially in the US, mentioning the huge mass of data mining done there.
  • When states discuss the Internet, they consider it from the cyber-war angle: exfiltration (spying) is considered normal by most of them, but they do not all share the view that disrupting the Internet is normal.
    So social media services need to be federated and not centralized if we want to sustain liberty and freedom (so not à la Facebook or Twitter), and the network, since it can be disrupted, should not be trusted per se.
  • He then advocated “plug servers” as a way to keep supporting privacy and personal freedom (hosting Asterisk, tunneling and various services). He also underlined the importance of being able to preserve anonymity.

He concluded by calling on free software developers as key people in the fight to support freedom globally, asking their help to develop meshed, potentially anonymous tools that help people preserve their data while still distributing them. He obtained a copious round of applause.
A question of J. Zimmermann of La Quadrature du Net allowed him to praise the role this group plays for the Net neutrality.

I met with Bdale Garbee at the end of this presentation. It is a real honor for me to be treated as a friend by him, and I always enjoy talking with him about Free Software, as he has one of the most interesting views and visions of our ecosystem. We stayed together until lunch, discussing projects, HP, Open Source, … I wish it could have lasted longer !

Chris Lattner – Keynote on LLVM and CLang

Chris talked about this project’s approach to new compiler paradigms: layer-oriented, more modern than the 30+-year-old gcc. Their projects are delivered under the BSD license.
He explained the advantage of his architecture with OpenGL optimization examples.
He then described in more detail the advantages of the Clang C/C++/Obj-C compiler.
Clang has a large set of features and brings performance improvements over gcc, while keeping gcc compatibility.
It currently compiles itself and Firefox (one hour less than with gcc) and aims at compiling FreeBSD.
Chris underlined the performance gains obtained at O2/O3/O4 in compile time (2 to 3 times faster) as well as in execution time of the resulting binaries (+5% to +20%).
Particularly interesting, at least to me, were the examples of error messages produced by Clang, which are much more precise and explicit than gcc’s, really pointing to the place of the error and giving comprehensive messages. Clang also provides a static analysis tool that can help detect programming errors.
Probably worth checking on your preferred apps to see if it improves your user experience, as Chris suggested.
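Trying it on your own code is cheap; something along these lines (the file and target names are just placeholders):

```shell
# Clang as a drop-in gcc replacement, plus its static analyzer.
try_clang() {
    clang -Wall -O2 -o myapp main.c  # same flags as gcc, more precise diagnostics
    scan-build make                  # static analysis wrapper shipped with Clang
}
```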

After the lunch with Bdale, I went to the Cross-Distro Dev Room

Jared Smith (Fedora Project Leader) – Swimming upstream
This was a fairly entry-level presentation, but I enjoyed it anyway. Only photos and words. Of course, 3 months later the slides won’t mean anything to you, but it was entertaining, and I should try that at least once 😉
It consisted of:

  • Comparison between salmons swimming upstream and work with upstream projects.
  • Used the quote: “None of us is as smart as all of us”
  • Why do we care about distributions? Because we all live downstream.

Hans de Goede (Fedora) – Michal Hrusecky (openSUSE) – Downstream packaging collaboration

The problem of dead projects without upstream or downstream collaboration.
The tooling to put in place to handle a new upstream.

Ideas around sharing patches between different distributions.
The topic was already discussed last year without big improvements.
Bdale proposed to centrally store pointers to patches, instead of the patches themselves.

Bdale Garbee (HP) – HP and Community Linux
Bdale made a short presentation on how HP helps all distributions work at their best on our ProLiant platforms, from the HP-supported commercial RHEL/SLES and community Debian, through partner-supported ones such as OEL, Ubuntu and Asianux, to community-supported ones such as Fedora, openSUSE, CentOS, …

Before attending the next session, I took time to talk with the project leaders and some contributors of Mageia. The build environment has made progress and will allow me to start working on packages for Mageia, as the tools to use are now available without conflicting with Mandriva. A usable Mageia version that could replace Mandriva is estimated for later, around Q3 CY11.

Ralf Treinen – Jaap Boender – Mancoosi

They came back on the results of the previous EDOS project. What interested me most was the mention of the debcheck/rpmcheck tool, which I didn’t know before. rpmcheck is, for example, included in Mandriva.

They also covered some of the results obtained by the follow-up Mancoosi project. It soon drifted into a discussion of the reuse of the work done around these European projects, the problem being the longevity of the published source code once the scientific community disperses after the end of the projects.

Of course, some Linux distributions are very well represented in these projects, so some results benefit them directly. However, it is always frustrating to see public budgets sustaining useful research, with interesting results, without looking at how to transform those into long-term projects beneficial to European citizens and our community.

I ended the day by joining the FusionInventory anniversary party held downtown. A lot has been done and the future looks promising. I’ll try to help the project on my side by providing access to HP network equipment and servers to improve discovery, working with them on packaging, and also inviting them to the TES (an HP event) so they can present FusionInventory to the HP OSL folks and start collaborating with people from Combodo working on iTop, a promising Open Source CMDB tool as well.

After all that, it was time to make some patches and sleep a bit before starting Day 2.

Continuous Packaging Build Cloud with


So now that I have your attention with this interesting Cloud buzzword, I can develop the idea I’m adding to pb at the moment.

pb currently supports building in Virtual Machines (VM) or Virtual Environments (VE, aka chroots), which are all managed by the machine running the pb command. But with the expansion of the project I’m working on in our joint HP/Intel collaboration, we want to be able to support Continuous Packaging also on machines for which we can’t host a VM or a VE, such as an Itanium HP-UX one or a Sparc Solaris one.

Of course, it’s always possible to log on to one of these systems, install pb, give it access to the VCS/CMS repository, and voilà, you have shiny new packages. But you (like me) want more, no ?

So I’m working on a new series of patches so that pb can launch build operations on a cloud of machines, as long as you can connect to them through SSH. In fact, the process is very similar to what pb currently does to build in a VM, except that this time it can be a remote machine. I found it much easier to support in the code than I thought, proof that the design is not too bad and allows for easy improvements.
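The idea boils down to something like this (the host, paths and pb command line here are an illustrative sketch, not pb’s actual interface):

```shell
# Sketch of a Remote Machine build over SSH: push the sources,
# run the build remotely, pull the resulting packages back.
# Host, paths and the pb invocation are hypothetical.
remote_build() {
    host="$1" proj="$2"
    scp "$proj.tar.gz" "$host:/tmp/" &&
    ssh "$host" "cd /tmp && tar xzf $proj.tar.gz && cd $proj && pb build2pkg" &&
    scp "$host:/tmp/$proj/pkgs/"'*' results/
}
```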

I also recently did some good code cleanup after my stay at FOSDEM, where I attended a lot of perl sessions that gave me the energy to do it… during the four-and-a-half-hour train trip I had to get back home.

And today, it took me less than 20 minutes, and not a cloud on the horizon, to prepare 2 new VMs in order to support Debian 6.0 for future project releases. Tooling is really helpful.

So expect a new version of pb RSN, in order to provide that additional support.