Lightbox implementation, first steps into AJAX.

I try to help out with the website, and part of that was getting rid of a popup window we use for showing an image.
I knew what I wanted but didn’t know what it was called; after some searching I found out it’s called a lightbox (DUH), so named for its resemblance to the real lightbox used in photography.

As my programming skills in JavaScript aren’t that good, I have to rely on ready-made ones. And so I entered the world of JavaScript frameworks. There are several out there and everybody claims the one they use is the best, of course. As I won’t be doing the programming myself, I couldn’t care less which framework my to-be-implemented lightbox was using. Well, that’s not entirely true, because I’m sure that as we start using the lightbox we’ll probably want to add different AJAX features as well. I settled on prototype.js and scriptaculous.js as the framework. It seems to be highly popular, so I’ll probably be able to find some nice scripts that utilize it.

It isn’t an easy task, as there are more lightbox implementations than there are ways to design one, and I’m still checking some of them out. I started with something called beatbox, which looked great, but as we use dynamic graphics I had to hack the script a bit. At the moment I’m using Lightwindow 2.0 and it seems to be very powerful, maybe too much for our needs at this time. Another one I will try out is Lightbox 2.0.

I have to admit, the more I look into AJAX the more I’m beginning to like it, but I should stop myself from overusing it, which is very easily done.

Using git – Part II

Well, I can say I’m very happy with git. I now also use it to maintain this blog: not the posts themselves, but the layout, additional plugins, etc. While doing this I ran into something odd.
First, a little explanation of how it’s set up right now. I have a main repository folder and a working repository. Maybe it’s not the ideal situation, but that’s the way it is.
Whenever I make changes I can check them locally, as the working folder is also the document root for my local Apache, so I can see changes while I’m working. When satisfied, I push from the working repository and update my remote website from the main repository. At least, that’s the way I wanted it to work.
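For reference, a sketch of such a two-repository layout. All the paths here are throwaway stand-ins (the real setup would use your home directory and the Apache document root), and note that newer git versions refuse pushes into a checked-out branch by default, so the sketch relaxes that to mimic the old behaviour:

```shell
# A sketch of the two-repository layout described above, using throwaway
# paths instead of the real main repository and Apache docroot.
BASE=$(mktemp -d)

# The main repository: a normal (non-bare) repo, so it has a work tree.
git init -q "$BASE/main-repo"
# Newer git refuses pushes into a checked-out branch by default; relax
# that here so the sketch behaves like the old setup:
git -C "$BASE/main-repo" config receive.denyCurrentBranch warn

# The working copy, which doubles as the local Apache document root.
git clone -q "$BASE/main-repo" "$BASE/www" 2>/dev/null
cd "$BASE/www"
git config user.email "you@example.com"   # identity for the demo commit
git config user.name "You"

echo "<h1>New layout</h1>" > index.html   # ...edit, preview via local Apache...
git add index.html
git commit -q -m "Tweak layout"
git branch -M master                      # pin the branch name for the demo
git push -q origin master                 # push back to the main repository
```

Listing the main repository afterwards shows the surprise: the pushed commit is in its history, but its work tree still doesn’t contain index.html.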
The first time I made changes I pushed them out and uploaded the folder to my remote website with FTP. I checked the remote website and the changes I made weren’t there. So I checked my local test site, and sure enough, the changes were there. I pulled the main repository and it said everything was up to date; I pushed again and got the same reply: everything is up to date. OK, now I’m confused. I checked the changed file in my working folder and the changes were there, but when I checked the same file in the main repository folder, the changes weren’t there! What was happening? A git-status in the main repository showed the changes I pushed earlier weren’t committed. Time to search the Internet 🙂

The FAQ on the git website gave me the answer:

Why won’t I see changes in the remote repo after “git push”?
The push operation is always about propagating the repository history and updating the refs, and never touches working tree files. In particular, if you push to update the branch that is checked out in a remote repository, you will not see the files in the work tree updated. This is a conscious design decision. The remote repository’s work tree may have local changes, and there is no way for you, who is pushing into the remote repository, to resolve conflicts between the changes you are pushing and the changes the work tree has. However, you can easily make a post-update hook to update the working copy of the checked-out branch. The main problem with making this a default example hook is that it only notifies the person doing the pushing if there was a problem. A quick rule of thumb is to never push into a repository that has a work tree attached to it, until you know what you are doing. See also the entry “How would I use ‘git push’ to sync out of a firewalled host?” in this FAQ for the proper way to work with push with a repository with a work tree.
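The post-update hook the FAQ mentions is easy to try out. Below is a self-contained demo with throwaway paths; in real life the bare repository would live on the server and the checked-out directory would be the web root:

```shell
# Demo of the FAQ's escape hatch: a post-update hook that checks the
# pushed branch out into a work tree. All paths here are throwaway.
T=$(mktemp -d)

git init -q --bare "$T/site.git"          # a repo with no work tree of its own
mkdir "$T/www"                            # the directory we want deployed to

# The hook: after every push, force-check-out master into $T/www.
# Note that -f discards any edits made directly in that directory.
cat > "$T/site.git/hooks/post-update" <<EOF
#!/bin/sh
GIT_WORK_TREE=$T/www git checkout -f master
EOF
chmod +x "$T/site.git/hooks/post-update"

# Clone, commit, push -- and the hook updates the deployed copy.
git clone -q "$T/site.git" "$T/work" 2>/dev/null
cd "$T/work"
git config user.email "you@example.com"
git config user.name "You"
echo "hello" > index.html
git add index.html
git commit -q -m "First deploy"
git branch -M master
git push -q origin master

cat "$T/www/index.html"                   # the pushed file is now deployed
```

Using a bare repository plus a separate work tree like this also follows the FAQ’s rule of thumb: nothing ever pushes into a repository that has its own work tree attached.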

OK, that explains it and it makes sense, but I didn’t think about it. It just tells me I should RTFM before using software 🙂

Is Python the solution?

Before I got involved in the Ubuntu community I was briefly involved in the Fedora Infrastructure team. Most of the tools they used were written in Python. When I got more involved in the Ubuntu community I noticed that Python was used a lot there too. I’m curious why two major Linux distributions chose to use Python so intensively. During my Fedora time I noticed many Python-related activities weren’t being picked up, and I asked the following on the Fedora mailing list:

Just out of curiosity but why are our webapps written in Python and not in Perl for example?

I have a feeling there is more Perl knowledge among infrastructure specialists than there is Python.

The reply was simple: somebody started in Python a while back, Python is structured, and when using Perl it’s very easy to create unreadable code.

I don’t agree; Python can lead to unreadable code as well. Now, I don’t know Python, but I do know several other programming languages. I’ve been programming for over 25 years and have seen my share of good code and absolute garbage, and it didn’t matter what language was used. I truly believe the difference between good code and garbage is the programmer, not the programming language. Sure, certain programming languages can help in setting up a good structure, and therefore they should make it a little easier to write readable code. I used to program in Cobol, and I can say that was one programming language with lots of rules and structure. It sure helped, but I had coworkers whose code was horrible to debug or near impossible to extend.

Again, I don’t know Python; I have seen some programs and that’s it, so I don’t know how easy it is to write good code or to make it completely unreadable. I will be teaching myself Python over the next few months, as I would love to help out with some of the Python issues I see in the Ubuntu community as well. Maybe I’ll even start a new blog series: “Teaching myself Python”. I don’t know how steep the learning curve is, but I’ll give it a shot.

Using git – Part I

I have been using git for a while now and I have to say I like it a lot. It’s quick, easy to use, and very informative when using the web GUI as well.

The one thing that is kind of odd is the fact that most, if not all, commands are duplicated; for example, the command git-pull can also be written as git pull. I know it’s the same, but why have two options? I know it’s not a big deal, I just hope it doesn’t get out of control.

The other thing is that when you’ve updated your files and created a tag, pushing your updates to the repository doesn’t push the tag; you have to push the tag manually. The developers know about it and I believe they are working on it. Otherwise I love git.
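A quick throwaway demo of what I mean; the repository names here are made up:

```shell
# Throwaway demo: tags are not sent along by a plain 'git push'.
T=$(mktemp -d)
git init -q --bare "$T/origin.git"
git clone -q "$T/origin.git" "$T/work" 2>/dev/null
cd "$T/work"
git config user.email "you@example.com"
git config user.name "You"
echo v1 > file.txt
git add file.txt
git commit -q -m "Release 1.0"
git tag v1.0
git branch -M master
git push -q origin master            # pushes the commit, but NOT the tag

git -C "$T/origin.git" tag           # -> prints nothing: the tag stayed local
git push -q origin v1.0              # push one tag explicitly...
# ...or push all local tags at once:
# git push --tags
git -C "$T/origin.git" tag           # -> v1.0
```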

Even though git is designed for big projects and I don’t manage a big project at all, that doesn’t stop me from using git. After Linux, Linus has delivered another great piece of software.

Picking a SCM

I needed an SCM and I knew there were some others out there besides CVS and SVN.

I did some searching, and what it comes down to is that every SCM program has its supporters who will tell you that their choice is the best.
I looked at the big open source projects to see what they were using, and that limited my choices a bit; I couldn’t really find a lot of projects using Mercurial or bzr.
I decided to go with git, and so far it’s been a bumpy ride.

I don’t have a lot of experience with SCMs, and that contributes to the bumpy ride, but I’m not giving up, and I have solved some major issues by myself; for example, at one point I couldn’t clone my main repository anymore. I’ll keep posting about my experience with git.